id (string, 6-15 chars) | question_type (string, 1 class) | question (string, 15-683 chars) | choices (list of 4) | answer (string, 5 classes) | explanation (string, 481 values) | prompt (string, 1.75k-10.9k chars)
---|---|---|---|---|---|---
sciq-10090
|
multiple_choice
|
Normal blood is comprised of nearly half erythrocytes, which is another word for what cells?
|
[
"red blood cells",
"plateletes",
"monocytes",
"white blood cells"
] |
A
|
Relevant Documents:
Document 0:::
White blood cells, also called leukocytes or immunocytes, are cells of the immune system that are involved in protecting the body against both infectious disease and foreign invaders. White blood cells include three main subtypes: granulocytes, lymphocytes, and monocytes.
All white blood cells are produced and derived from multipotent cells in the bone marrow known as hematopoietic stem cells. Leukocytes are found throughout the body, including the blood and lymphatic system. All white blood cells have nuclei, which distinguishes them from the other blood cells, the anucleated red blood cells (RBCs) and platelets. The different white blood cells are usually classified by cell lineage (myeloid cells or lymphoid cells). As part of the body's immune system, they help the body fight infection and other diseases. Types of white blood cells are granulocytes (neutrophils, eosinophils, and basophils) and agranulocytes (monocytes and lymphocytes, the latter comprising T cells and B cells). Myeloid cells (myelocytes) include neutrophils, eosinophils, mast cells, basophils, and monocytes. Monocytes are further subdivided into dendritic cells and macrophages. Monocytes, macrophages, and neutrophils are phagocytic. Lymphoid cells (lymphocytes) include T cells (subdivided into helper T cells, memory T cells, and cytotoxic T cells), B cells (subdivided into plasma cells and memory B cells), and natural killer cells. Historically, white blood cells were classified by their physical characteristics (granulocytes and agranulocytes), but this classification system is less frequently used now. An excess of white blood cells is usually due to infection or inflammation; less commonly, a high white blood cell count can indicate certain blood cancers or bone marrow disorders.
The number of leukocytes in the blood is often an indicator of disease, and thus the white blood
Document 1:::
A splenocyte can be any one of the different white blood cell types as long as it is situated in the spleen or purified from splenic tissue.
Splenocytes consist of a variety of cell populations such as T and B lymphocytes, dendritic cells and macrophages, which have different immune functions.
Document 2:::
Reticulocytosis is a condition where there is an increase in reticulocytes, immature red blood cells.
It is commonly seen in anemia. Reticulocytes are seen on blood films when the bone marrow is highly active in an attempt to replace red blood cell loss, such as in haemolytic anaemia or haemorrhage.
Document 3:::
The red pulp of the spleen is composed of connective tissue known also as the cords of Billroth and many splenic sinusoids that are engorged with blood, giving it a red color. Its primary function is to filter the blood of antigens, microorganisms, and defective or worn-out red blood cells.
The spleen is made of red pulp and white pulp, separated by the marginal zone; 76-79% of a normal spleen is red pulp. Unlike white pulp, which mainly contains lymphocytes such as T cells, red pulp is made up of several different types of blood cells, including platelets, granulocytes, red blood cells, and plasma.
The red pulp also acts as a large reservoir for monocytes. These monocytes are found in clusters in the Billroth's cords (red pulp cords). The population of monocytes in this reservoir is greater than the total number of monocytes present in circulation. They can be rapidly mobilised to leave the spleen and assist in tackling ongoing infections.
Sinusoids
The splenic sinusoids are wide vessels that drain into pulp veins, which themselves drain into trabecular veins. Gaps in the endothelium lining the sinusoids mechanically filter blood cells as they enter the spleen. Worn-out or abnormal red cells attempting to squeeze through the narrow intercellular spaces become badly damaged and are subsequently devoured by macrophages in the red pulp. In addition to clearing aged red blood cells, the sinusoids also filter out cellular debris, particles that could otherwise clutter up the bloodstream.
Cells found in red pulp
Red pulp consists of a dense network of fine reticular fiber, continuous with those of the splenic trabeculae, to which are applied flat, branching cells. The meshes of the reticulum are filled with blood:
White blood cells are found to be in larger proportion than they are in ordinary blood.
Large rounded cells, termed splenic cells, are also seen; these are capable of ameboid movement, and often contain pigment and red-blood corpuscles in their interior.
The cell
Document 4:::
Platelets or thrombocytes (from Greek θρόμβος, "clot" and κύτος, "cell") are a component of blood whose function (along with the coagulation factors) is to react to bleeding from blood vessel injury by clumping, thereby initiating a blood clot. Platelets have no cell nucleus; they are fragments of cytoplasm derived from the megakaryocytes of the bone marrow or lung, which then enter the circulation. Platelets are found only in mammals, whereas in other vertebrates (e.g. birds, amphibians), thrombocytes circulate as intact mononuclear cells.
One major function of platelets is to contribute to hemostasis: the process of stopping bleeding at the site of interrupted endothelium. They gather at the site and, unless the interruption is physically too large, they plug the hole. First, platelets attach to substances outside the interrupted endothelium: adhesion. Second, they change shape, turn on receptors and secrete chemical messengers: activation. Third, they connect to each other through receptor bridges: aggregation. Formation of this platelet plug (primary hemostasis) is associated with activation of the coagulation cascade, with resultant fibrin deposition and linking (secondary hemostasis). These processes may overlap: the spectrum is from a predominantly platelet plug, or "white clot" to a predominantly fibrin, or "red clot" or the more typical mixture. Some would add the subsequent retraction and platelet inhibition as fourth and fifth steps to the completion of the process and still others would add a sixth step, wound repair. Platelets also participate in both innate and adaptive intravascular immune responses.
Structure
Structurally the platelet can be divided into four zones, from peripheral to innermost:
Peripheral zone – is rich in glycoproteins required for platelet adhesion, activation and aggregation. For example, GPIb/IX/V; GPVI; GPIIb/IIIa.
Sol-gel zone – is rich in microtubules and microfilaments, allowing the platelets to maintain their
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Normal blood is comprised of nearly half erythrocytes, which is another word for what cells?
A. red blood cells
B. platelets
C. monocytes
D. white blood cells
Answer:
|
|
sciq-7770
|
multiple_choice
|
The loss of too much blood may lead to shock of what system?
|
[
"circulatory",
"heart",
"nervous",
"Circular"
] |
A
|
Relevant Documents:
Document 0:::
End organ damage usually refers to damage occurring in major organs fed by the circulatory system (heart, kidneys, brain, eyes) which can sustain damage due to uncontrolled hypertension, hypotension, or hypovolemia.
Evidence of hypertensive damage
In the context of hypertension, features include:
Heart — evidence on electrocardiogram screening of heart muscle thickening suggesting left ventricular hypertrophy (which may also be seen on chest X-ray), or by echocardiography of less efficient function (left ventricular failure).
Brain — hypertensive encephalopathy, hemorrhagic stroke, subarachnoid hemorrhage, confusion, loss of consciousness, eclampsia, seizures, or transient ischemic attack.
Kidney — leakage of protein into the urine (albuminuria or proteinuria), or reduced renal function, hypertensive nephropathy, acute renal failure, or glomerulonephritis.
Eye — evidence upon fundoscopic examination of hypertensive retinopathy, retinal hemorrhage, papilledema and blindness.
Peripheral arteries — peripheral vascular disease and chronic lower limb ischemia.
Evidence of shock
In the context of poor end organ perfusion, features include:
Kidney — poor urine output (less than 0.5 mL/kg/h), low glomerular filtration rate.
Skin — pallor or mottled appearance, capillary refill > 2 secs, cool limbs.
Brain — obtundation or disorientation to time, person, and place. The Glasgow Coma Scale may be used to quantify altered consciousness.
Gut — absent bowel sounds, ileus
Document 1:::
The blood circulatory system is a system of organs that includes the heart, blood vessels, and blood which is circulated throughout the entire body of a human or other vertebrate. It includes the cardiovascular system, or vascular system, that consists of the heart and blood vessels (from Greek kardia meaning heart, and from Latin vascula meaning vessels). The circulatory system has two divisions, a systemic circulation or circuit, and a pulmonary circulation or circuit. Some sources use the terms cardiovascular system and vascular system interchangeably with the circulatory system.
The network of blood vessels comprises the great vessels of the heart, including the large elastic arteries and large veins; the other arteries; the smaller arterioles; the capillaries, which join with venules (small veins); and the other veins. The circulatory system is closed in vertebrates, which means that the blood never leaves the network of blood vessels. Some invertebrates such as arthropods have an open circulatory system. Diploblasts such as sponges and comb jellies lack a circulatory system.
Blood is a fluid consisting of plasma, red blood cells, white blood cells, and platelets; it is circulated around the body carrying oxygen and nutrients to the tissues and collecting and disposing of waste materials. Circulated nutrients include proteins and minerals and other components include hemoglobin, hormones, and gases such as oxygen and carbon dioxide. These substances provide nourishment, help the immune system to fight diseases, and help maintain homeostasis by stabilizing temperature and natural pH.
In vertebrates, the lymphatic system is complementary to the circulatory system. The lymphatic system carries excess plasma (filtered from the circulatory system capillaries as interstitial fluid between cells) away from the body tissues via accessory routes that return excess fluid back to blood circulation as lymph. The lymphatic system is a subsystem that is essential for the functioning of the bloo
Document 2:::
Cardiovascular physiology is the study of the cardiovascular system, specifically addressing the physiology of the heart ("cardio") and blood vessels ("vascular").
These subjects are sometimes addressed separately, under the names cardiac physiology and circulatory physiology.
Although the different aspects of cardiovascular physiology are closely interrelated, the subject is still usually divided into several subtopics.
Heart
Cardiac output (= heart rate * stroke volume. Can also be calculated with the Fick principle or the palpation method; see the sketch after this list.)
Stroke volume (= end-diastolic volume − end-systolic volume)
Ejection fraction (= stroke volume / end-diastolic volume)
Cardiac output is mathematically proportional to systole
Inotropic, chronotropic, and dromotropic states
Cardiac input (= heart rate * suction volume. Can be calculated by inverting terms in the Fick principle)
Suction volume (= end-systolic volume + end-diastolic volume)
Injection fraction (= suction volume / end-systolic volume)
Cardiac input is mathematically proportional to diastole
Electrical conduction system of the heart
Electrocardiogram
Cardiac marker
Cardiac action potential
Frank–Starling law of the heart
Wiggers diagram
Pressure volume diagram
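The first three relations in the list above reduce to simple arithmetic. Here is a minimal Python sketch; the example values (120 mL end-diastolic volume, 50 mL end-systolic volume, 70 beats per minute) are hypothetical, not taken from the article.

```python
# A minimal sketch (not from the article) of the three defining relations
# listed above; the example volumes and heart rate are hypothetical.

def stroke_volume(edv_ml: float, esv_ml: float) -> float:
    """Stroke volume = end-diastolic volume - end-systolic volume."""
    return edv_ml - esv_ml

def cardiac_output(heart_rate_bpm: float, sv_ml: float) -> float:
    """Cardiac output = heart rate * stroke volume, in mL/min."""
    return heart_rate_bpm * sv_ml

def ejection_fraction(sv_ml: float, edv_ml: float) -> float:
    """Ejection fraction = stroke volume / end-diastolic volume."""
    return sv_ml / edv_ml

sv = stroke_volume(edv_ml=120.0, esv_ml=50.0)       # 70 mL
co = cardiac_output(heart_rate_bpm=70.0, sv_ml=sv)  # 4900 mL/min
ef = ejection_fraction(sv_ml=sv, edv_ml=120.0)      # ~0.58
print(f"SV = {sv:.0f} mL, CO = {co / 1000:.1f} L/min, EF = {ef:.0%}")
```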
Regulation of blood pressure
Baroreceptor
Baroreflex
Renin–angiotensin system
Renin
Angiotensin
Juxtaglomerular apparatus
Aortic body and carotid body
Autoregulation
Cerebral Autoregulation
Hemodynamics
Under most circumstances, the body attempts to maintain a steady mean arterial pressure.
When there is a major and immediate decrease (such as that due to hemorrhage or standing up), the body can increase the following:
Heart rate
Total peripheral resistance (primarily due to vasoconstriction of arteries)
Inotropic state
In turn, this can have a significant impact upon several other variables:
Stroke volume
Cardiac output
Pressure
Pulse pressure (systolic pressure - diastolic pressure)
Mean arterial pressure (usually approximated with diastolic pressure +
Document 3:::
In haemodynamics, the body must respond to physical activities, external temperature, and other factors by homeostatically adjusting its blood flow to deliver nutrients such as oxygen and glucose to stressed tissues and allow them to function. Haemodynamic response (HR) allows the rapid delivery of blood to active neuronal tissues. The brain consumes large amounts of energy but does not have a reservoir of stored energy substrates. Since higher processes in the brain occur almost constantly, cerebral blood flow is essential for the maintenance of neurons, astrocytes, and other cells of the brain. This coupling between neuronal activity and blood flow is also referred to as neurovascular coupling.
Vascular anatomy overview
In order to understand how blood is delivered to cranial tissues, it is important to understand the vascular anatomy of the space itself. Large cerebral arteries in the brain split into smaller arterioles, also known as pial arteries. These consist of endothelial cells and smooth muscle cells, and as these pial arteries further branch and run deeper into the brain, they associate with glial cells, namely astrocytes. The intracerebral arterioles and capillaries are unlike systemic arterioles and capillaries in that they do not readily allow substances to diffuse through them; they are connected by tight junctions in order to form the blood brain barrier (BBB). Endothelial cells, smooth muscle, neurons, astrocytes, and pericytes work together in the brain in order to maintain the BBB while still delivering nutrients to tissues and adjusting blood flow in the intracranial space to maintain homeostasis. As they work as a functional neurovascular unit, alterations in their interactions at the cellular level can impair HR in the brain and lead to deviations in normal nervous function.
Mechanisms
Various cell types play a role in HR, including astrocytes, smooth muscle cells, endothelial cells of blood vessels, and pericytes. These cells control whether th
Document 4:::
Animal models of stroke are procedures undertaken in animals (including non-human primates) intending to provoke pathophysiological states that are similar to those of human stroke, in order to study basic processes or potential therapeutic interventions in this disease. The aim is to extend knowledge of, and/or to improve medical treatment of, human stroke.
Classification by cause
The term stroke subsumes cerebrovascular disorders of different etiologies, featuring diverse pathophysiological processes. Thus, for each stroke etiology one or more animal models have been developed:
Animal models of ischemic stroke
Animal models of intracerebral hemorrhage
Animal models of subarachnoid hemorrhage and cerebral vasospasm
Animal models of sinus vein thrombosis
Transferability of animal results to human stroke
Although multiple therapies have proven to be effective in animals, only very few have done so in human patients. Reasons for this are (Dirnagl 1999):
Side effects: Many highly potent neuroprotective drugs display side effects which inhibit the application of effective doses in patients (e.g. MK-801)
Delay: Whereas in animal studies the time of incidence onset is known and therapy can be started early, patients often present with delay and unclear time of symptom onset
“Age and associated illnesses: Most experimental studies are conducted on healthy, young animals under rigorously controlled laboratory conditions. However, the typical stroke patient is elderly with numerous risk factors and complicating diseases (for example, diabetes, hypertension and heart diseases)” (Dirnagl 1999)
Morphological and functional differences between the brain of humans and animals: Although the basic mechanisms of stroke are identical between humans and other mammals, there are differences.
Evaluation of efficacy: In animals, treatment effects are mostly measured as a reduction of lesion volume, whereas in human studies functional evaluation (which reflects the severity of disabi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The loss of too much blood may lead to shock of what system?
A. circulatory
B. heart
C. nervous
D. Circular
Answer:
|
|
sciq-1537
|
multiple_choice
|
One of the simplest machines is the lever, which is a rigid bar pivoted at a fixed place called what?
|
[
"caliper",
"wheel",
"sling",
"fulcrum"
] |
D
|
Relevant Documents:
Document 0:::
The compound lever is a simple machine operating on the premise that the resistance from one lever in a system of levers acts as effort for the next, and thus the applied force is transferred from one lever to the next. Almost all scales use some sort of compound lever to work. Other examples include nail clippers and piano keys.
Mechanical advantage
A lever arm uses the fulcrum to lift a load by redirecting and intensifying an applied force. In practice, conditions may prevent the use of a single lever to accomplish the desired result, e.g., a restricted space, the inconvenient location of the point of delivery of the resultant force, or the prohibitive length of the lever arm needed. In these conditions, combinations of simple levers, called compound levers, are used. Compound levers can be constructed from first, second and/or third-order levers. In all types of compound lever, the rule is that force multiplied by the force arm equals the weight multiplied by the weight arm. The output from one lever becomes the input for the next lever in the system, and so the advantage is magnified.
The figure on the left illustrates a compound lever formed from two first-class levers, along with a short derivation of how to compute the mechanical advantage. With the dimensions shown, the mechanical advantage W/F, the product of the two individual lever ratios, works out to 7.5, meaning that an applied force of 1 pound (or 1 kg) could lift a weight of 7.5 lb (or 7.5 kg).
Alternatively, if the position of the fulcrum on lever AA' were moved so that the two lever ratios cancel, then the mechanical advantage W/F works out to 1, meaning that an applied force will lift only an equivalent weight and there is no mechanical advantage. This is not usually the goal of a compound lever system, though in rare situations the geometry may suit a specific purpose.
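As a worked illustration of the rule stated above (the output of one lever is the input of the next, so stage advantages multiply), here is a small Python sketch. The arm lengths are hypothetical, chosen only to reproduce the 7.5 ratio mentioned in the text, since the original figure is not included.

```python
# Compound lever: the resistance from one lever acts as the effort for the
# next, so the ideal mechanical advantage is the product of the stage ratios.
# Arm lengths are illustrative assumptions (friction is ignored).

def lever_advantage(effort_arm: float, load_arm: float) -> float:
    """Ideal mechanical advantage of a single lever: effort arm / load arm."""
    return effort_arm / load_arm

stages = [(6.0, 2.0), (5.0, 2.0)]  # (effort arm, load arm) for each lever

total_advantage = 1.0
for effort_arm, load_arm in stages:
    total_advantage *= lever_advantage(effort_arm, load_arm)

print(total_advantage)  # 3.0 * 2.5 = 7.5: a 1 kg effort balances a 7.5 kg load
```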
The distances used in calculation of mechanical advantage are measured perpendicular to the force. In the example of a nail clipper on the right (a compound lever made of a class 2 and a class 3 le
Document 1:::
Machine element or hardware refers to an elementary component of a machine. These elements consist of three basic types:
structural components such as frame members, bearings, axles, splines, fasteners, seals, and lubricants,
mechanisms that control movement in various ways such as gear trains, belt or chain drives, linkages, cam and follower systems, including brakes and clutches, and
control components such as buttons, switches, indicators, sensors, actuators and computer controllers.
While generally not considered to be a machine element, the shape, texture and color of covers are an important part of a machine that provide a styling and operational interface between the mechanical components of a machine and its users.
Machine elements are basic mechanical parts and features used as the building blocks of most machines. Most are standardized to common sizes, but customs are also common for specialized applications.
Machine elements may be features of a part (such as screw threads or integral plain bearings) or they may be discrete parts in and of themselves such as wheels, axles, pulleys, rolling-element bearings, or gears. All of the simple machines may be described as machine elements, and many machine elements incorporate concepts of one or more simple machines. For example, a leadscrew incorporates a screw thread, which is an inclined plane wrapped around a cylinder.
Many mechanical design, invention, and engineering tasks involve a knowledge of various machine elements and an intelligent and creative combining of these elements into a component or assembly that fills a need (serves an application).
Structural elements
Beams,
Struts,
Bearings,
Fasteners
Keys,
Splines,
Cotter pin,
Seals
Machine guardings
Mechanical elements
Engine,
Electric motor,
Actuator,
Shafts,
Couplings
Belt,
Chain,
Cable drives,
Gear train,
Clutch,
Brake,
Flywheel,
Cam,
follower systems,
Linkage,
Simple machine
Types
Shafts
Document 2:::
A machine tool is a machine for handling or machining metal or other rigid materials, usually by cutting, boring, grinding, shearing, or other forms of deformations. Machine tools employ some sort of tool that does the cutting or shaping. All machine tools have some means of constraining the workpiece and provide a guided movement of the parts of the machine. Thus, the relative movement between the workpiece and the cutting tool (which is called the toolpath) is controlled or constrained by the machine to at least some extent, rather than being entirely "offhand" or "freehand". It is a power-driven metal cutting machine which assists in managing the needed relative motion between cutting tool and the job that changes the size and shape of the job material.
The precise definition of the term machine tool varies among users, as discussed below. While all machine tools are "machines that help people to make things", not all factory machines are machine tools.
Today machine tools are typically powered other than by the human muscle (e.g., electrically, hydraulically, or via line shaft), used to make manufactured parts (components) in various ways that include cutting or certain other kinds of deformation.
With their inherent precision, machine tools enabled the economical production of interchangeable parts.
Nomenclature and key concepts, interrelated
Many historians of technology consider that true machine tools were born when the toolpath first became guided by the machine itself in some way, at least to some extent, so that direct, freehand human guidance of the toolpath (with hands, feet, or mouth) was no longer the only guidance used in the cutting or forming process. In this view of the definition, the term, arising at a time when all tools up till then had been hand tools, simply provided a label for "tools that were machines instead of hand tools". Early lathes, those prior to the late medieval period, and modern woodworking lathes and potter's wheels m
Document 3:::
A simple machine that exhibits mechanical advantage is called a mechanical advantage device - e.g.:
Lever: The beam shown is in static equilibrium around the fulcrum. This is due to the moment created by vector force "A" counterclockwise (moment A*a) being in equilibrium with the moment created by vector force "B" clockwise (moment B*b). The relatively low vector force "B" is translated into a relatively high vector force "A". The force is thus increased in the ratio of the forces A : B, which is equal to the ratio of the distances to the fulcrum b : a. This ratio is called the mechanical advantage. This idealised situation does not take into account friction (see the sketch after this list).
Wheel and axle motion (e.g. screwdrivers, doorknobs): A wheel is essentially a lever with one arm the distance between the axle and the outer point of the wheel, and the other the radius of the axle. Typically this is a fairly large difference, leading to a proportionately large mechanical advantage. This allows even simple wheels with wooden axles running in wooden blocks to still turn freely, because their friction is overwhelmed by the rotational force of the wheel multiplied by the mechanical advantage.
A block and tackle of multiple pulleys creates mechanical advantage, by having the flexible material looped over several pulleys in turn. Adding more loops and pulleys increases the mechanical advantage.
Screw: A screw is essentially an inclined plane wrapped around a cylinder. The run over the rise of this inclined plane is the mechanical advantage of a screw.
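A minimal sketch tying the list above together: for each device, the ideal mechanical advantage reduces to a ratio of two lengths (or, for the block and tackle, a count of rope segments). All dimensions are hypothetical and friction is ignored.

```python
import math

def lever_ma(b: float, a: float) -> float:
    """Lever in equilibrium (A*a = B*b): MA = A/B = b/a."""
    return b / a

def wheel_and_axle_ma(wheel_radius: float, axle_radius: float) -> float:
    """A wheel is a lever whose two arms are the wheel and axle radii."""
    return wheel_radius / axle_radius

def block_and_tackle_ma(supporting_segments: int) -> float:
    """MA equals the number of rope segments supporting the moving block."""
    return float(supporting_segments)

def screw_ma(shaft_radius: float, lead: float) -> float:
    """Run over rise of the unwrapped inclined plane: circumference / lead."""
    return 2 * math.pi * shaft_radius / lead

print(lever_ma(b=0.9, a=0.3))                    # 3.0
print(wheel_and_axle_ma(0.15, 0.015))            # 10.0
print(block_and_tackle_ma(4))                    # 4.0
print(screw_ma(shaft_radius=0.005, lead=0.001))  # ~31.4
```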
Pulleys
Consider lifting a weight with rope and pulleys. A rope looped through a pulley attached to a fixed spot, e.g. a barn roof rafter, and attached to the weight is called a single pulley. It has a mechanical advantage (MA) = 1 (assuming frictionless bearings in the pulley), meaning no mechanical advantage (or disadvantage), however advantageous the change in direction may be.
A single movable pulley has an MA of 2 (assuming frictionless be
Document 4:::
Mechanical engineering is a discipline centered around the concept of using force multipliers, moving components, and machines. It utilizes knowledge of mathematics, physics, materials sciences, and engineering technologies. It is one of the oldest and broadest of the engineering disciplines.
Dawn of civilization to early middle ages
Engineering arose in early civilization as a general discipline for the creation of large scale structures such as irrigation, architecture, and military projects. Advances in food production through irrigation allowed a portion of the population to become specialists in Ancient Babylon.
All six of the classic simple machines were known in the ancient Near East. The wedge and the inclined plane (ramp) were known since prehistoric times. The wheel, along with the wheel and axle mechanism, was invented in Mesopotamia (modern Iraq) during the 5th millennium BC. The lever mechanism first appeared around 5,000 years ago in the Near East, where it was used in a simple balance scale, and to move large objects in ancient Egyptian technology. The lever was also used in the shadoof water-lifting device, the first crane machine, which appeared in Mesopotamia circa 3000 BC, and then in ancient Egyptian technology circa 2000 BC. The earliest evidence of pulleys dates back to Mesopotamia in the early 2nd millennium BC, and ancient Egypt during the Twelfth Dynasty (1991-1802 BC). The screw, the last of the simple machines to be invented, first appeared in Mesopotamia during the Neo-Assyrian period (911-609 BC). The Egyptian pyramids were built using three of the six simple machines, the inclined plane, the wedge, and the lever, to create structures like the Great Pyramid of Giza.
The Assyrians were notable in their use of metallurgy and incorporation of iron weapons. Many of their advancements were in military equipment. They were not the first to develop them, but did make advancements on the wheel and the chariot. They made use of pivot-able axl
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
One of the simplest machines is the lever, which is a rigid bar pivoted at a fixed place called what?
A. caliper
B. wheel
C. sling
D. fulcrum
Answer:
|
|
ai2_arc-147
|
multiple_choice
|
Where does the energy from an earthquake originate?
|
[
"from a sudden increase in solar radiation striking Earth",
"from the Moon's gravitational pull during a close orbit",
"from rocks under stress shifting deep inside Earth",
"from the weight of sediments pushing down on bedrock"
] |
C
|
Relevant Documents:
Document 0:::
The Human-Induced Earthquake Database (HiQuake) is an online database that documents all reported cases of induced seismicity proposed on scientific grounds. It is the most complete compilation of its kind and is freely available to download via the associated website. The database is periodically updated to correct errors, revise existing entries, and add new entries reported in new scientific papers and reports. Suggestions for revisions and new entries can be made via the associated website.
History
In 2016, Nederlandse Aardolie Maatschappij funded a team of researchers from Durham University and Newcastle University to conduct a full review of induced seismicity. This review formed part of a scientific workshop aimed at estimating the maximum possible magnitude earthquake that might be induced by conventional gas production in the Groningen gas field.
The resulting database from the review was publicly released online on 26 January 2017. The database was accompanied by the publication of two scientific papers, the more detailed of which is freely available online.
Document 1:::
Seismic moment is a quantity used by seismologists to measure the size of an earthquake. The scalar seismic moment M0 is defined by the equation M0 = μAD, where
μ is the shear modulus of the rocks involved in the earthquake (in pascals (Pa), i.e. newtons per square meter),
A is the area of the rupture along the geologic fault where the earthquake occurred (in square meters), and
D is the average slip (displacement offset between the two sides of the fault) on A (in meters).
M0 thus has dimensions of torque, measured in newton meters. The connection between seismic moment and a torque is natural in the body-force equivalent representation of seismic sources as a double-couple (a pair of force couples with opposite torques): the seismic moment is the torque of each of the two couples. Despite having the same dimensions as energy, seismic moment is not a measure of energy. The relations between seismic moment, potential energy drop and radiated energy are indirect and approximative.
The seismic moment of an earthquake is typically estimated using whatever information is available to constrain its factors. For modern earthquakes, moment is usually estimated from ground motion recordings of earthquakes known as seismograms. For earthquakes that occurred in times before modern instruments were available, moment may be estimated from geologic estimates of the size of the fault rupture and the slip.
Seismic moment is the basis of the moment magnitude scale introduced by Hiroo Kanamori, which is often used to compare the size of different earthquakes and is especially useful for comparing the sizes of large (great) earthquakes.
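A short numerical sketch of the definition above. The rupture dimensions, slip, and shear modulus are hypothetical; the final line applies the standard Hanks–Kanamori relation Mw = (2/3)(log10 M0 − 9.1) for M0 in newton meters, on which the moment magnitude scale mentioned above is built.

```python
import math

# Scalar seismic moment M0 = mu * A * D, in newton meters.
# Hypothetical example: a 10 km x 5 km rupture, 1 m average slip,
# shear modulus 30 GPa (a typical crustal value).
mu = 30e9          # shear modulus, Pa
area = 10e3 * 5e3  # rupture area, m^2
slip = 1.0         # average slip, m

m0 = mu * area * slip
print(f"M0 = {m0:.2e} N*m")              # 1.50e+18 N*m

# Moment magnitude (Hanks & Kanamori, 1979), with M0 in N*m:
mw = (2.0 / 3.0) * (math.log10(m0) - 9.1)
print(f"Mw = {mw:.1f}")                  # ~6.0
```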
The seismic moment is not restricted to earthquakes. For a more general seismic source described by a seismic moment tensor (a symmetric tensor, but not necessarily a double couple tensor), the seismic moment is
See also
Richter magnitude scale
Moment magnitude scale
Document 2:::
The moment magnitude scale (MMS; denoted explicitly with Mw, and generally implied with use of a single M for magnitude) is a measure of an earthquake's magnitude ("size" or strength) based on its seismic moment. It was defined in a 1979 paper by Thomas C. Hanks and Hiroo Kanamori. Similar to the local magnitude/Richter scale (ML) defined by Charles Francis Richter in 1935, it uses a logarithmic scale; small earthquakes have approximately the same magnitudes on both scales. Despite the difference, news media often say "Richter scale" when referring to the moment magnitude scale.
Moment magnitude (Mw) is considered the authoritative magnitude scale for ranking earthquakes by size. It is more directly related to the energy of an earthquake than other scales, and does not saturate; that is, it does not underestimate magnitudes as other scales do in certain conditions. It has become the standard scale used by seismological authorities like the U.S. Geological Survey for reporting large earthquakes (typically M > 4), replacing the local magnitude (ML) and surface wave magnitude (Ms) scales. Subtypes of the moment magnitude scale (Mww, etc.) reflect different ways of estimating the seismic moment.
History
Richter scale: the original measure of earthquake magnitude
At the beginning of the twentieth century, very little was known about how earthquakes happen, how seismic waves are generated and propagate through the Earth's crust, and what information they carry about the earthquake rupture process; the first magnitude scales were therefore empirical. The initial step in determining earthquake magnitudes empirically came in 1931 when the Japanese seismologist Kiyoo Wadati showed that the maximum amplitude of an earthquake's seismic waves diminished with distance at a certain rate. Charles F. Richter then worked out how to adjust for epicentral distance (and some other factors) so that the logarithm of the amplitude of the seismograph trace could be used as a measure of "magnit
Document 3:::
Energy class – also called energy class K or K-class, and denoted by K (from the Russian класс) – is a measure of the force or magnitude of local and regional earthquakes used in countries of the former Soviet Union, and Cuba and Mongolia. K is nominally the logarithm of the seismic energy E_S (in joules) radiated by an earthquake, as expressed in the formula K = log E_S. Values of K in the range of 12 to 15 correspond approximately to the range of 4.5 to 6 in other magnitude scales; a magnitude 6.0 quake will register between 13 and 14.5 on various K-class scales.
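Since K is just the base-10 logarithm of the radiated seismic energy in joules, the conversion is a one-liner; the energy value below is a hypothetical example sitting in the K = 12-15 band mentioned above.

```python
import math

# Energy class K = log10(E_S), with radiated seismic energy E_S in joules.
e_s = 3.0e13           # hypothetical radiated energy, J
k = math.log10(e_s)
print(f"K = {k:.1f}")  # ~13.5, i.e. roughly magnitude 5-6 on other scales
```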
The energy class system was developed by seismologists of the Soviet Tadzhikskaya Complex [Interdisciplinary] Seismological Expedition established in the remote Garm (Tajikistan) region of Central Asia in 1954 after several devastating earthquakes in that area.
The Garm region is one of the most seismically active regions of the former Soviet Union, with up to 5,000 earthquakes per year. The volume of processing needed, and the rudimentary state of seismological equipment and methods at that time, led the expedition workers to develop new equipment and methods. V. I. Bune is credited with developing a scale based on an earthquake's seismic energy, although S. L. Solov'ev seems to have made major contributions. (In contrast to the "Richter" and other magnitude scales developed by Western seismologists, which estimate the magnitude from the amplitude of some portion of the seismic waves generated, an indirect measure of seismic energy.)
However, proper estimation of E_S requires more sophisticated tools than were available at the time, and Bune's method was unworkable. A more practical revision was presented by T. G. Rautian in 1958 and 1960; by 1961 K-class was being used across the USSR. A key change was to estimate E_S on the basis of peak amplitude of the seismic waves – particularly, the sum of maximum P-wave and maximum S-wave – within the first three seconds. As a result, K-class became a kind of local magn
Document 4:::
In seismology and other areas involving elastic waves, S waves, secondary waves, or shear waves (sometimes called elastic S waves) are a type of elastic wave and are one of the two main types of elastic body waves, so named because they move through the body of an object, unlike surface waves.
S waves are transverse waves, meaning that the direction of particle movement of an S wave is perpendicular to the direction of wave propagation, and the main restoring force comes from shear stress. Therefore, S waves cannot propagate in liquids with zero (or very low) viscosity; however, they may propagate in liquids with high viscosity.
The name secondary wave comes from the fact that they are the second type of wave to be detected by an earthquake seismograph, after the compressional primary wave, or P wave, because S waves travel more slowly in solids. Unlike P waves, S waves cannot travel through the molten outer core of the Earth, and this causes a shadow zone for S waves opposite to their origin. They can still propagate through the solid inner core: when a P wave strikes the boundary of molten and solid cores at an oblique angle, S waves will form and propagate in the solid medium. When these S waves hit the boundary again at an oblique angle, they will in turn create P waves that propagate through the liquid medium. This property allows seismologists to determine some physical properties of the Earth's inner core.
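The excerpt above does not give velocity formulas, but the standard body-wave speeds for an isotropic elastic solid (assumed here, not stated in the excerpt) make the comparison concrete: v_p = sqrt((λ + 2μ)/ρ) and v_s = sqrt(μ/ρ). The sketch below uses hypothetical crustal values; setting the shear modulus μ to zero (an inviscid fluid) gives v_s = 0, which is why the molten outer core blocks S waves.

```python
import math

# Standard isotropic elastic body-wave speeds (assumed, not from the excerpt):
#   v_p = sqrt((lam + 2*mu) / rho),  v_s = sqrt(mu / rho)
lam = 35e9    # Lame's first parameter, Pa (hypothetical crustal value)
mu = 30e9     # shear modulus, Pa
rho = 2700.0  # density, kg/m^3

v_p = math.sqrt((lam + 2 * mu) / rho)
v_s = math.sqrt(mu / rho)
print(f"v_p ~ {v_p:.0f} m/s, v_s ~ {v_s:.0f} m/s")  # P waves arrive first

# In an inviscid fluid mu = 0, so v_s = 0: S waves cannot propagate there.
print(math.sqrt(0.0 / rho))  # 0.0
```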
History
In 1830, the mathematician Siméon Denis Poisson presented to the French Academy of Sciences an essay ("memoir") with a theory of the propagation of elastic waves in solids. In his memoir, he states that an earthquake would produce two different waves: one having a certain speed a and the other having a speed a/√3. At a sufficient distance from the source, when they can be considered plane waves in the region of interest, the first kind consists of expansions and compressions in the direction perpendicular to the wavefront (that is, parallel to the
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Where does the energy from an earthquake originate?
A. from a sudden increase in solar radiation striking Earth
B. from the Moon's gravitational pull during a close orbit
C. from rocks under stress shifting deep inside Earth
D. from the weight of sediments pushing down on bedrock
Answer:
|
|
sciq-3148
|
multiple_choice
|
A plane mirror has a flat reflective surface and forms only which kind of images?
|
[
"virtual",
"enlarged",
"spherical",
"reduced"
] |
A
|
Relevant Documents:
Document 0:::
A plane mirror is a mirror with a flat (planar) reflective surface. For light rays striking a plane mirror, the angle of reflection equals the angle of incidence. The angle of incidence is the angle between the incident ray and the surface normal (an imaginary line perpendicular to the surface). Therefore, the angle of reflection is the angle between the reflected ray and the normal, and a collimated beam of light does not spread out after reflection from a plane mirror, except for diffraction effects.
A plane mirror makes an image of objects in front of the mirror; these images appear to be behind the plane in which the mirror lies. A straight line drawn from part of an object to the corresponding part of its image makes a right angle with, and is bisected by, the surface of the plane mirror. The image formed by a plane mirror is virtual, meaning that the light rays do not actually come from the image, as they would for a real image. It is always upright, and of the same shape and size as the object it is reflecting. A virtual image is a copy of an object formed at the location from which the light rays appear to come. The image formed in the mirror is a perverted (reversed) image, which is commonly confused with a laterally inverted image. If a person is reflected in a plane mirror, the image of his right hand appears to be the left hand of the image.
Plane mirrors are the only type of mirror for which an object always produces a virtual, erect image of the same size as the object, irrespective of the object's shape, size, and distance from the mirror; other types of mirror (concave and convex) can do the same, but only under specific conditions. The focal length of a plane mirror is infinite, and its optical power is zero.
Using the mirror equation, 1/d_o + 1/d_i = 1/f, where d_o is the object distance, d_i is the image distance, and f is the
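The excerpt is cut off mid-equation, but the relation it quotes is the standard mirror equation 1/d_o + 1/d_i = 1/f. A minimal sketch (with the usual real-is-positive sign convention assumed) showing that the plane-mirror limit f → ∞ gives d_i = −d_o, i.e. a virtual image as far behind the mirror as the object is in front:

```python
import math

def image_distance(d_o: float, f: float = math.inf) -> float:
    """Solve the mirror equation 1/d_o + 1/d_i = 1/f for d_i.

    Negative d_i means a virtual image behind the mirror.
    """
    if math.isinf(f):
        return -d_o  # plane mirror limit: focal length is infinite
    return 1.0 / (1.0 / f - 1.0 / d_o)

print(image_distance(2.0))         # -2.0: virtual image 2 m behind the mirror
print(image_distance(2.0, f=0.5))  # ~0.67: real image for a concave mirror
```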
Document 1:::
A mirror image (in a plane mirror) is a reflected duplication of an object that appears almost identical, but is reversed in the direction perpendicular to the mirror surface. As an optical effect it results from reflection off from substances such as a mirror or water. It is also a concept in geometry and can be used as a conceptualization process for 3-D structures.
In geometry and geometrical optics
In two dimensions
In geometry, the mirror image of an object or two-dimensional figure is the virtual image formed by reflection in a plane mirror; it is of the same size as the original object, yet different, unless the object or figure has reflection symmetry (also known as a P-symmetry).
Two-dimensional mirror images can be seen in the reflections of mirrors or other reflecting surfaces, or on a printed surface seen inside-out. If we first look at an object that is effectively two-dimensional (such as the writing on a card) and then turn the card to face a mirror, the object turns through an angle of 180° and we see a left-right reversal in the mirror. In this example, it is the change in orientation rather than the mirror itself that causes the observed reversal. Another example is when we stand with our backs to the mirror and face an object that is in front of the mirror. Then we compare the object with its reflection by turning ourselves 180°, towards the mirror. Again we perceive a left-right reversal due to a change in our orientation. So, in these examples the mirror does not actually cause the observed reversals.
In three dimensions
The concept of reflection can be extended to three-dimensional objects, including the inside parts, even if they are not transparent. The term then relates to structural as well as visual aspects. A three-dimensional object is reversed in the direction perpendicular to the mirror surface. In physics, mirror images are investigated in the subject called geometrical optics. More fundamentally in geometry and mathematics they
Document 2:::
The study of image formation encompasses the radiometric and geometric processes by which 2D images of 3D objects are formed. In the case of digital images, the image formation process also includes analog to digital conversion and sampling.
Imaging
The imaging process is a mapping of an object to an image plane. Each point on the image corresponds to a point on the object. An illuminated object will scatter light toward a lens and the lens will collect and focus the light to create the image. The ratio of the height of the image to the height of the object is the magnification. The spatial extent of the image surface and the focal length of the lens determines the field of view of the lens. For image formation by a mirror, the mirror has a center of curvature, and the focal length of the mirror is half the radius of curvature.
Illumination
An object may be illuminated by the light from an emitting source such as the sun, a light bulb or a Light Emitting Diode. The light incident on the object is reflected in a manner dependent on the surface properties of the object. For rough surfaces, the reflected light is scattered in a manner described by the Bi-directional Reflectance Distribution Function (BRDF) of the surface. The BRDF of a surface is the ratio of the exiting power per square meter per steradian (radiance) to the incident power per square meter (irradiance). The BRDF typically varies with angle and may vary with wavelength, but a specific important case is a surface that has constant BRDF. This surface type is referred to as Lambertian and the magnitude of the BRDF is R/π, where R is the reflectivity of the surface. The portion of scattered light that propagates toward the lens is collected by the entrance pupil of the imaging lens over the field of view.
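For the Lambertian case described above, the relation reduces to one multiplication: the exiting radiance is the constant BRDF, R/π, times the irradiance. A minimal sketch with hypothetical values:

```python
import math

def lambertian_radiance(irradiance: float, reflectivity: float) -> float:
    """Reflected radiance L = (R / pi) * E for a Lambertian surface.

    E is irradiance in W/m^2; L comes out in W/(m^2 * sr).
    """
    return (reflectivity / math.pi) * irradiance

E = 1000.0  # hypothetical incident irradiance, W/m^2
R = 0.5     # hypothetical surface reflectivity
print(f"L = {lambertian_radiance(E, R):.1f} W/(m^2*sr)")  # ~159.2
```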
Field of view and imagery
The Field of view of a lens is limited by the size of the image plane and the focal length of the lens. The relationship between a location on the image and a location on t
Document 3:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
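Under the standard definition (assumed here, since the excerpt does not spell it out), a knowledge space over a domain Q is a family of feasible states that contains the empty set and Q and is closed under union. A minimal sketch with a hypothetical four-skill domain:

```python
from itertools import combinations

# Toy domain of four skills and a hypothetical family of feasible states.
Q = frozenset("abcd")
states = {
    frozenset(), frozenset("a"), frozenset("b"),
    frozenset("ab"), frozenset("abc"), frozenset("abd"), Q,
}

def is_knowledge_space(domain: frozenset, family: set) -> bool:
    """Check: contains the empty set and the domain, and is union-closed."""
    if frozenset() not in family or domain not in family:
        return False
    return all(s | t in family for s, t in combinations(family, 2))

print(is_knowledge_space(Q, states))  # True
```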
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A plane mirror has a flat reflective surface and forms only which kind of images?
A. virtual
B. enlarged
C. spherical
D. reduced
Answer:
|
|
sciq-5519
|
multiple_choice
|
Tin is oxidized at the anode, while silver ion is reduced at?
|
[
"cathode",
"anodyne",
"iodine",
"gamma"
] |
A
|
Relevant Documents:
Document 0:::
A stannide can refer to an intermetallic compound containing tin combined with one or more other metals; an anion consisting solely of tin atoms, or a compound containing such an anion; or, in the field of organometallic chemistry, an ionic compound containing an organotin anion (an alternative name for such a compound is stannanide).
Binary alkali and alkaline earth stannides
When tin is combined with an alkali or alkaline earth metal some of the compounds formed have ionic structures containing monatomic or polyatomic tin anions (Zintl ions), such as Sn^4− in Mg2Sn or Sn9^4− in K4Sn9.
Even with these metals not all of the compounds formed can be considered to be ionic with localised bonding, for example Sr3Sn5, a metallic compound, contains {Sn5} square pyramidal units.
Ternary alkali and alkaline earth stannides
Ternary (where there is an alkali or alkaline earth metal, a transition metal as well as tin e.g. LiRh3Sn5 and MgRuSn4) have been investigated.
Other metal stannides
Binary (involving one other metal) and ternary (involving two other metals) intermetallic stannides have been investigated. Niobium stannide, Nb3Sn, is perhaps the best known superconducting tin intermetallic. This is more commonly called "niobium-tin".
Document 1:::
This is a list of the sizes, shapes, and general characteristics of some common primary and secondary battery types in household, automotive and light industrial use.
The complete nomenclature for a battery specifies size, chemistry, terminal arrangement, and special characteristics. The same physically interchangeable cell size or battery size may have widely different characteristics; physical interchangeability is not the sole factor in substituting a battery.
The full battery designation identifies not only the size, shape and terminal layout of the battery but also the chemistry (and therefore the voltage per cell) and the number of cells in the battery. For example, a CR123 battery is always LiMnO2 ('Lithium') chemistry, in addition to its unique size.
The following tables give the common battery chemistry types for the current common sizes of batteries. See Battery chemistry for a list of other electrochemical systems.
Cylindrical batteries
Rectangular batteries
Camera batteries
As well as other types, digital and film cameras often use specialized primary batteries to produce a compact product. Flashlights and portable electronic devices may also use these types.
Button cells – coin, watch
Lithium cells
Coin-shaped cells are thin compared to their diameter. Polarity is usually stamped on the metal casing.
The IEC prefix "CR" denotes lithium manganese dioxide chemistry. Since LiMnO2 cells produce 3 volts, there are no widely available alternative chemistries for a lithium coin battery. The "BR" prefix indicates a round lithium/carbon monofluoride cell. See lithium battery for discussion of the different performance characteristics. One LiMnO2 cell can replace two alkaline or silver-oxide cells.
IEC designation numbers indicate the physical dimensions of the cylindrical cell. Cells less than one centimeter in height are assigned four-digit numbers, where the first two digits are the diameter in millimeters, while the last two digits are the height in
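The line above is cut off, but the usual convention (assumed here) is that the last two digits give the height in tenths of a millimeter, so a CR2032 decodes to 20 mm diameter and 3.2 mm height. A small parsing sketch:

```python
import re

def decode_coin_cell(code: str) -> tuple[str, int, float]:
    """Decode an IEC-style coin-cell designation such as 'CR2032'.

    Assumed convention: two-letter chemistry prefix, two digits of diameter
    in mm, two digits of height in tenths of a mm.
    """
    m = re.fullmatch(r"([A-Z]{2})(\d{2})(\d{2})", code)
    if not m:
        raise ValueError(f"unrecognized designation: {code}")
    chemistry, diameter, height = m.groups()
    return chemistry, int(diameter), int(height) / 10.0

print(decode_coin_cell("CR2032"))  # ('CR', 20, 3.2): LiMnO2, 20 mm x 3.2 mm
print(decode_coin_cell("BR1632"))  # ('BR', 16, 3.2): Li/CFx, 16 mm x 3.2 mm
```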
Document 2:::
It also reduces copper(II) to copper(I).
Solutions of tin(II) chloride can also serve simply as a source of Sn2+ ions, which can form other tin(II) compounds via precipitation reactions. For example, rea
Document 3:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 4:::
A solvated electron is a free electron in (solvated in) a solution, and is the smallest possible anion. Solvated electrons occur widely. Often, discussions of solvated electrons focus on their solutions in ammonia, which are stable for days, but solvated electrons also occur in water and other solvents; in fact, in any solvent that mediates outer-sphere electron transfer. The solvated electron is responsible for a great deal of radiation chemistry.
Ammonia solutions
Liquid ammonia will dissolve all of the alkali metals and other electropositive metals such as Ca, Sr, Ba, Eu, and Yb (also Mg using an electrolytic process), giving characteristic blue solutions. For alkali metals in liquid ammonia, the solution is blue when dilute and copper-colored when more concentrated (> 3 molar). These solutions conduct electricity. The blue colour of the solution is due to ammoniated electrons, which absorb energy in the visible region of light. The diffusivity of the solvated electron in liquid ammonia can be determined using potential-step chronoamperometry.
Solvated electrons in ammonia are the anions of salts called electrides.
Na + 6 NH3 → [Na(NH3)6]+ + e−
The reaction is reversible: evaporation of the ammonia solution produces a film of metallic sodium.
Case study: Li in NH3
A lithium–ammonia solution at −60 °C is saturated at about 15 mol% metal (MPM). When the concentration is increased in this range, electrical conductivity increases from 10^−2 to 10^4 Ω^−1 cm^−1 (larger than liquid mercury). At around 8 MPM, a "transition to the metallic state" (TMS) takes place (also called a "metal-to-nonmetal transition" (MNMT)). At 4 MPM a liquid-liquid phase separation takes place: the less dense gold-colored phase becomes immiscible from a denser blue phase. Above 8 MPM the solution is bronze/gold-colored. In the same concentration range the overall density decreases by 30%.
Other solvents
Alkali metals also dissolve in some small primary amines, such as methylamine and ethylami
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Tin is oxidized at the anode, while silver ion is reduced at?
A. cathode
B. anodyne
C. iodine
D. gamma
Answer:
|
|
sciq-4111
|
multiple_choice
|
All prokaryotic and some eukaryotic organisms reproduce through what method, where a parent passes all of its genetic material to the next generation?
|
[
"microscopic reproduction",
"sexual reproduction",
"organic reproduction",
"asexual reproduction"
] |
D
|
Relevant Documents:
Document 0:::
Sexual reproduction is a type of reproduction that involves a complex life cycle in which a gamete (haploid reproductive cells, such as a sperm or egg cell) with a single set of chromosomes combines with another gamete to produce a zygote that develops into an organism composed of cells with two sets of chromosomes (diploid). This is typical in animals, though the number of chromosome sets and how that number changes in sexual reproduction varies, especially among plants, fungi, and other eukaryotes.
Sexual reproduction is the most common life cycle in multicellular eukaryotes, such as animals, fungi and plants. Sexual reproduction also occurs in some unicellular eukaryotes. Sexual reproduction does not occur in prokaryotes, unicellular organisms without cell nuclei, such as bacteria and archaea. However, some processes in bacteria, including bacterial conjugation, transformation and transduction, may be considered analogous to sexual reproduction in that they incorporate new genetic information. Some proteins and other features that are key for sexual reproduction may have arisen in bacteria, but sexual reproduction is believed to have developed in an ancient eukaryotic ancestor.
In eukaryotes, diploid precursor cells divide to produce haploid cells in a process called meiosis. In meiosis, DNA is replicated to produce a total of four copies of each chromosome. This is followed by two cell divisions to generate haploid gametes. After the DNA is replicated in meiosis, the homologous chromosomes pair up so that their DNA sequences are aligned with each other. During this period before cell divisions, genetic information is exchanged between homologous chromosomes in genetic recombination. Homologous chromosomes contain highly similar but not identical information, and by exchanging similar but not identical regions, genetic recombination increases genetic diversity among future generations.
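To make the recombination step concrete, here is a minimal, purely illustrative Python sketch; the single-crossover model and all names are simplifications invented for this example, not part of the source.

import random

def crossover(chrom_a, chrom_b):
    # Toy model: swap everything downstream of a single random
    # crossover point between two equal-length homologous chromosomes.
    assert len(chrom_a) == len(chrom_b)
    point = random.randrange(1, len(chrom_a))
    return (chrom_a[:point] + chrom_b[point:],
            chrom_b[:point] + chrom_a[point:])

# Homologous chromosomes: highly similar but not identical sequences.
maternal = "AAAAAAAAAA"
paternal = "aaaaaTaaaa"
print(crossover(maternal, paternal))  # e.g. ('AAAAaTaaaa', 'aaaaAAAAAA') for point = 4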
During sexual reproduction, two haploid gametes combine into one diploid ce
Document 1:::
In biology and genetics, the germline is the population of a multicellular organism's cells that pass on their genetic material to the progeny (offspring). In other words, they are the cells that form the egg, sperm and the fertilised egg. They are usually differentiated to perform this function and segregated in a specific place away from other bodily cells.
As a rule, this passing-on happens via a process of sexual reproduction; typically it is a process that includes systematic changes to the genetic material, changes that arise during recombination, meiosis and fertilization for example. However, there are many exceptions across multicellular organisms, including processes and concepts such as various forms of apomixis, autogamy, automixis, cloning or parthenogenesis. The cells of the germline are called germ cells. For example, gametes such as a sperm and an egg are germ cells. So are the cells that divide to produce gametes, called gametocytes, the cells that produce those, called gametogonia, and all the way back to the zygote, the cell from which an individual develops.
In sexually reproducing organisms, cells that are not in the germline are called somatic cells. According to this view, mutations, recombinations and other genetic changes in the germline may be passed to offspring, but a change in a somatic cell will not be. This need not apply to somatically reproducing organisms, such as some Porifera and many plants. For example, many varieties of citrus, plants in the Rosaceae and some in the Asteraceae, such as Taraxacum, produce seeds apomictically when somatic diploid cells displace the ovule or early embryo.
In an earlier stage of genetic thinking, there was a clear distinction between germline and somatic cells. For example, August Weismann proposed and pointed out, a germline cell is immortal in the sense that it is part of a lineage that has reproduced indefinitely since the beginning of life and, barring accident, could continue doing so indef
Document 2:::
Gametogenesis is a biological process by which diploid or haploid precursor cells undergo cell division and differentiation to form mature haploid gametes. Depending on the biological life cycle of the organism, gametogenesis occurs by meiotic division of diploid gametocytes into various gametes, or by mitosis. For example, plants produce gametes through mitosis in gametophytes. The gametophytes grow from haploid spores after sporic meiosis. The existence of a multicellular, haploid phase in the life cycle between meiosis and gametogenesis is also referred to as alternation of generations.
Gametogenesis, then, is the process by which haploid or diploid precursor cells divide and differentiate into mature haploid gametes. Depending on the organism's biological life cycle, this occurs either through mitosis or through meiotic division of diploid gametocytes; for instance, gametophytes in plants undergo mitosis to produce gametes. Male and female gametogenesis take different forms.
In animals
Animals produce gametes directly through meiosis from diploid mother cells in organs called gonads (testis in males and ovaries in females). In mammalian germ cell development, sexually dimorphic gametes differentiates into primordial germ cells from pluripotent cells during initial mammalian development. Males and females of a species that reproduce sexually have different forms of gametogenesis:
spermatogenesis (male): Immature germ cells are produced in a man's testes. To mature into sperm, males' immature germ cells, or spermatogonia, go through spermatogenesis during adolescence. Spermatogonia are diploid cells that become larger as they divide through mitosis to form primary spermatocytes. These diploid cells undergo meiotic division to create secondary spermatocytes, which undergo a second meiotic division to produce immature sperm, or spermatids. These spermatids undergo spermiogenesis in order to develop into sperm. LH, FSH, GnRH
Document 3:::
Apicomplexans, a group of intracellular parasites, have life cycle stages that allow them to survive the wide variety of environments they are exposed to during their complex life cycle. Each stage in the life cycle of an apicomplexan organism is typified by a cellular variety with a distinct morphology and biochemistry.
Not all apicomplexa develop all the following cellular varieties and division methods. This presentation is intended as an outline of a hypothetical generalised apicomplexan organism.
Methods of asexual replication
Apicomplexans (sporozoans) replicate via ways of multiple fission (also known as schizogony). These ways include gametogony, sporogony and merogony, although the latter is sometimes referred to as schizogony, despite its general meaning.
Merogony is an asexually reproductive process of apicomplexa. After infecting a host cell, a trophozoite (see glossary below) increases in size while repeatedly replicating its nucleus and other organelles. During this process, the organism is known as a meront or schizont. Cytokinesis next subdivides the multinucleated schizont into numerous identical daughter cells called merozoites (see glossary below), which are released into the blood when the host cell ruptures. Organisms whose life cycles rely on this process include Theileria, Babesia, Plasmodium, and Toxoplasma gondii.
Sporogony is a type of sexual and asexual reproduction. It involves karyogamy, the formation of a zygote, which is followed by meiosis and multiple fission. This results in the production of sporozoites.
Other forms of replication include endodyogeny and endopolygeny.
Endodyogeny is a process of asexual reproduction, favoured by parasites such as Toxoplasma gondii. It involves an unusual process in which two daughter cells are produced inside a mother cell, which is then consumed by the offspring prior to their separation.
Endopolygeny is the division into several organisms at once by internal budding.
Glossary of cell types
Infectious stages
A sporozoite (ancient Greek σπόρος (sporos), seed + ζῷον (zoon), animal) is th
Document 4:::
Germ-Soma Differentiation is the process by which organisms develop distinct germline and somatic cells. The development of cell differentiation has been one of the critical aspects of the evolution of multicellularity and sexual reproduction in organisms. Multicellularity has evolved upwards of 25 times, and because of this it is likely that multiple factors have shaped the differentiation of cells. There are three general types of cells: germ cells, somatic cells, and stem cells. Germ cells lead to the production of gametes, while somatic cells perform all other functions within the body. Within the broad category of somatic cells, there is further specialization as cells become specified to certain tissues and functions. In addition, stem cells are undifferentiated cells which can develop into specialized cells and are the earliest type of cell in a cell lineage. Due to this differentiation in function, somatic cells are found only in multicellular organisms, as in unicellular ones the purposes of somatic and germ cells are consolidated in one cell.
All organisms with germ-soma differentiation are eukaryotic, and they represent an added level of specialization among multicellular organisms. Pure germ-soma differentiation has developed in a select number of eukaryotes (called Weismannists), a category that includes vertebrates and arthropods; however, land plants, green algae, red algae, brown algae, and fungi have partial differentiation. While a significant portion of organisms with germ-soma differentiation are asexual, this distinction has been imperative in the development of sexual reproduction; the specialization of certain cells into germ cells is fundamental for meiosis and recombination.
Weismann barrier
The strict division between somatic and germ cells is called the Weismann barrier, in which genetic information passed onto offspring is found only in germ cells. This occurs only in select organisms, however some without a Weismann barrier do pre
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
All prokaryotic and some eukaryotic organisms reproduce through what method, where a parent passes all of its genetic material to the next generation?
A. microscopic reproduction
B. sexual reproduction
C. organic reproduction
D. asexual reproduction
Answer:
|
|
sciq-10587
|
multiple_choice
|
The mouth, stomach, esophagus, small intestine, and large intestine are all part of what organ system?
|
[
"respiratory",
"lymphatic",
"muscular",
"digestive"
] |
D
|
Relevant Documents:
Document 0:::
A biological system is a complex network which connects several biologically relevant entities. Biological organization spans several scales and is determined by different structures depending on what the system is. Examples of biological systems at the macro scale are populations of organisms. On the organ and tissue scale in mammals and other animals, examples include the circulatory system, the respiratory system, and the nervous system. On the micro to the nanoscopic scale, examples of biological systems are cells, organelles, macromolecular complexes and regulatory pathways. A biological system is not to be confused with a living system, such as a living organism.
Organ and tissue systems
These specific systems are widely studied in human anatomy and are also present in many other animals.
Respiratory system: the organs used for breathing, the pharynx, larynx, bronchi, lungs and diaphragm.
Digestive system: digestion and processing food with salivary glands, oesophagus, stomach, liver, gallbladder, pancreas, intestines, rectum and anus.
Cardiovascular system (heart and circulatory system): pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels.
Urinary system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine.
Integumentary system: skin, hair, fat, and nails.
Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons.
Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroid and adrenals, i.e., adrenal glands.
Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream; includes the lymph and the nodes and vessels. The lymphatic system includes functions including immune responses and development of antibodies.
Immune system: protects the organism from
Document 1:::
The esophagus (American English) or oesophagus (British English, see spelling differences; plural (o)esophagi or (o)esophaguses), colloquially known also as the food pipe or gullet, is an organ in vertebrates through which food passes, aided by peristaltic contractions, from the pharynx to the stomach. The esophagus is a fibromuscular tube, about 25 cm long in adults, that travels behind the trachea and heart, passes through the diaphragm, and empties into the uppermost region of the stomach. During swallowing, the epiglottis tilts backwards to prevent food from going down the larynx and lungs. The word oesophagus is from Ancient Greek οἰσοφάγος (oisophágos), from οἴσω (oísō), future form of φέρω (phérō, "I carry") + ἔφαγον (éphagon, "I ate").
The wall of the esophagus from the lumen outwards consists of mucosa, submucosa (connective tissue), layers of muscle fibers between layers of fibrous tissue, and an outer layer of connective tissue. The mucosa is a stratified squamous epithelium of around three layers of squamous cells, which contrasts to the single layer of columnar cells of the stomach. The transition between these two types of epithelium is visible as a zig-zag line. Most of the muscle is smooth muscle although striated muscle predominates in its upper third. It has two muscular rings or sphincters in its wall, one at the top and one at the bottom. The lower sphincter helps to prevent reflux of acidic stomach content. The esophagus has a rich blood supply and venous drainage. Its smooth muscle is innervated by involuntary nerves (sympathetic nerves via the sympathetic trunk and parasympathetic nerves via the vagus nerve) and in addition voluntary nerves (lower motor neurons) which are carried in the vagus nerve to innervate its striated muscle.
The esophagus passes through the thoracic cavity and the diaphragm into the stomach.
Document 2:::
Splanchnology is the study of the visceral organs, i.e. digestive, urinary, reproductive and respiratory systems.
The term derives from the Neo-Latin splanchno-, from the Greek σπλάγχνα, meaning "viscera". More broadly, splanchnology includes all the components of the Neuro-Endo-Immune (NEI) Supersystem. An organ (or viscus) is a collection of tissues joined in a structural unit to serve a common function. In anatomy, a viscus is an internal organ, and viscera is the plural form. Organs consist of different tissues, one or more of which prevail and determine its specific structure and function. Functionally related organs often cooperate to form whole organ systems.
Viscera are the soft organs of the body. There are organs and systems of organs that differ in structure and development but they are united for the performance of a common function. Such functional collection of mixed organs, form an organ system. These organs are always made up of special cells that support its specific function. The normal position and function of each visceral organ must be known before the abnormal can be ascertained.
Healthy organs all work together cohesively, and gaining a better understanding of how they do so helps to maintain a healthy lifestyle. Some functions cannot be accomplished by only one organ; that is why organs form complex systems. A system of organs is a collection of homogeneous organs which have a common plan of structure, function, and development, and which are connected to each other anatomically and communicate through the NEI supersystem.
Document 3:::
The gastrointestinal wall of the gastrointestinal tract is made up of four layers of specialised tissue. From the inner cavity of the gut (the lumen) outwards, these are:
Mucosa
Submucosa
Muscular layer
Serosa or adventitia
The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the lumen of the tract and comes into direct contact with digested food (chyme). The mucosa itself is made up of three layers: the epithelium, where most digestive, absorptive and secretory processes occur; the lamina propria, a layer of connective tissue, and the muscularis mucosae, a thin layer of smooth muscle.
The submucosa contains nerves including the submucous plexus (also called Meissner's plexus), blood vessels and elastic fibres with collagen, that stretches with increased capacity but maintains the shape of the intestine.
The muscular layer surrounds the submucosa. It comprises layers of smooth muscle in longitudinal and circular orientation that also helps with continued bowel movements (peristalsis) and the movement of digested material out of and along the gut. In between the two layers of muscle lies the myenteric plexus (also called Auerbach's plexus).
The serosa/adventitia are the final layers. These are made up of loose connective tissue and coated in mucus so as to prevent any friction damage from the intestine rubbing against other tissue. The serosa is present if the tissue is within the peritoneum, and the adventitia if the tissue is retroperitoneal.
Structure
When viewed under the microscope, the gastrointestinal wall has a consistent general form, but with certain parts differing along its course.
Mucosa
The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the cavity (lumen) of the tract and comes into direct contact with digested food (chyme). The mucosa is made up of three layers:
The epithelium is the innermost layer. It is where most digestive, absorptive and secretory processes occur.
The lamina propr
Document 4:::
The organs of Bojanus or Bojanus organs are excretory glands that serve the function of kidneys in some of the molluscs. In other words, these are metanephridia that are found in some molluscs, for example in the bivalves. Some other molluscs have another type of organ for excretion called Keber's organ.
The Bojanus organ is named after Ludwig Heinrich Bojanus, who first described it. The excretory system of a bivalve consists of a pair of kidneys called the organs of Bojanus. These are situated one on each side of the body below the pericardium. Each kidney consists of two parts: (1) a glandular part, and (2) a thin-walled ciliated urinary bladder.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The mouth, stomach, esophagus, small intestine, and large intestine are all part of what organ system?
A. respiratory
B. lymphatic
C. muscular
D. digestive
Answer:
|
|
sciq-6769
|
multiple_choice
|
What unit of pressure is named for the scientist whose discoveries about pressure in fluids led to a law of the same name?
|
[
"joule",
"pascal",
"ohm",
"newton"
] |
B
|
Relevant Documents:
Document 0:::
This is a list of scientific equations named after people (eponymous equations).
See also
Eponym
List of eponymous laws
List of laws in science
List of equations
Scientific constants named after people
Scientific phenomena named after people
Scientific laws named after people
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
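For reference, the physics behind the correct choice ("decreases") can be written out; this is a standard first-law argument added as background, not content from the source. For a reversible adiabatic process Q = 0, so

n C_V \, dT = dU = -p \, dV

and since p > 0, an expansion (dV > 0) forces dT < 0: the gas cools because it does work on its surroundings at the expense of its internal energy.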
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
This is a list of topics that are included in high school physics curricula or textbooks.
Mathematical Background
SI Units
Scalar (physics)
Euclidean vector
Motion graphs and derivatives
Pythagorean theorem
Trigonometry
Motion and forces
Motion
Force
Linear motion
Displacement
Speed
Velocity
Acceleration
Center of mass
Mass
Momentum
Newton's laws of motion
Work (physics)
Free body diagram
Rotational motion
Angular momentum (Introduction)
Angular velocity
Centrifugal force
Centripetal force
Circular motion
Tangential velocity
Torque
Conservation of energy and momentum
Energy
Conservation of energy
Elastic collision
Inelastic collision
Inertia
Moment of inertia
Momentum
Kinetic energy
Potential energy
Rotational energy
Electricity and magnetism
Ampère's circuital law
Capacitor
Coulomb's law
Diode
Direct current
Electric charge
Electric current
Alternating current
Electric field
Electric potential energy
Electron
Faraday's law of induction
Ion
Inductor
Joule heating
Lenz's law
Magnetic field
Ohm's law
Resistor
Transistor
Transformer
Voltage
Heat
Entropy
First law of thermodynamics
Heat
Heat transfer
Second law of thermodynamics
Temperature
Thermal energy
Thermodynamic cycle
Volume (thermodynamics)
Work (thermodynamics)
Waves
Wave
Longitudinal wave
Transverse waves
Transverse wave
Standing Waves
Wavelength
Frequency
Light
Light ray
Speed of light
Sound
Speed of sound
Radio waves
Harmonic oscillator
Hooke's law
Reflection
Refraction
Snell's law
Refractive index
Total internal reflection
Diffraction
Interference (wave propagation)
Polarization (waves)
Vibrating string
Doppler effect
Gravity
Gravitational potential
Newton's law of universal gravitation
Newtonian constant of gravitation
See also
Outline of physics
Physics education
Document 3:::
The CRC Handbook of Chemistry and Physics is a comprehensive one-volume reference resource for science research. First published in 1914, it is currently () in its 103rd edition, published in 2022. It is sometimes nicknamed the "Rubber Bible" or the "Rubber Book", as CRC originally stood for "Chemical Rubber Company".
As late as the 1962–1963 edition (3604 pages) the Handbook contained myriad information for every branch of science and engineering. Sections in that edition include: Mathematics, Properties and Physical Constants, Chemical Tables, Properties of Matter, Heat, Hygrometric and Barometric Tables, Sound, Quantities and Units, and Miscellaneous. Earlier editions included sections such as "Antidotes of Poisons", "Rules for Naming Organic Compounds", "Surface Tension of Fused Salts", "Percent Composition of Anti-Freeze Solutions", "Spark-gap Voltages", "Greek Alphabet", "Musical Scales", "Pigments and Dyes", "Comparison of Tons and Pounds", "Twist Drill and Steel Wire Gauges" and "Properties of the Earth's Atmosphere at Elevations up to 160 Kilometers". Later editions focus almost exclusively on chemistry and physics topics and eliminated much of the more "common" information.
Contents by edition
22nd–44th Editions
Section A: Mathematical Tables
Section B: Properties and Physical Constants
Section C: General Chemical Tables/Specific Gravity and Properties of Matter
Section D: Heat and Hygrometry/Sound/Electricity and Magnetism/Light
Section E: Quantities and Units/Miscellaneous
Index
45th–70th Editions
Section A: Mathematical Tables
Section B: Elements and Inorganic Compounds
Section C: Organic Compounds
Section D: General Chemical
Section E: General Physical Constants
Section F: Miscellaneous
Index
71st–102nd Editions
Section 1: Basic Constants, Units, and Conversion Factors
Section 2: Symbols, Terminology, and Nomenclature
Section 3: Physical Constants of Organic Compounds
Section 4: Properties of the Elements and Inorganic Com
Document 4:::
Hydrodynamica (Latin for Hydrodynamics) is a book published by Daniel Bernoulli in 1738. The title of this book eventually christened the field of fluid mechanics as hydrodynamics.
The book deals with fluid mechanics and is organized around the idea of conservation of energy, as received from Christiaan Huygens's formulation of this principle. The book describes the theory of water flowing through a tube and of water flowing from a hole in a container. In doing so, Bernoulli explained the nature of hydrodynamic pressure and discovered the role of loss of vis viva in fluid flow, which would later be known as the Bernoulli principle. The book also discusses hydraulic machines and introduces the notion of work and efficiency of a machine. In the tenth chapter, Bernoulli discussed the first model of the kinetic theory of gases. Assuming that heat increases the velocity of the gas particles, he demonstrated that the pressure of air is proportional to the kinetic energy of gas particles, making the temperature of the gas proportional to this kinetic energy as well.
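Bernoulli's proportionality argument can be stated compactly in modern notation; the following is the standard kinetic-theory result, included as a hedged modern gloss rather than a quotation from Hydrodynamica. For N particles of mass m in a volume V,

p = \frac{1}{3} \frac{N m \langle v^2 \rangle}{V} = \frac{2}{3} \frac{N}{V} \left\langle \frac{1}{2} m v^2 \right\rangle

so the pressure is directly proportional to the mean kinetic energy per particle, and a temperature identified with that kinetic energy is proportional to it as well.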
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What unit of pressure is named for the scientist whose discoveries about pressure in fluids led to a law of the same name?
A. joule
B. pascal
C. ohm
D. newton
Answer:
|
|
sciq-6234
|
multiple_choice
|
What results when a warm air mass runs into a cold air mass?
|
[
"warm front",
"cool front",
"dry front",
"rough front"
] |
A
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earning a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 2:::
In fluid dynamics, convective mixing is the vertical transport of a fluid and its properties. In many important ocean and atmospheric phenomena, convection is driven by density differences in the fluid, e.g. the sinking of cold, dense water in polar regions of the world's oceans; and the rising of warm, less-dense air during the formation of cumulonimbus clouds and hurricanes.
See also
Atmospheric convection
Bénard cells
Churchill–Bernstein equation
Double diffusive convection
Heat transfer
Heat conduction
Thermal radiation
Heat pipe
Laser-heated pedestal growth
Nusselt number
Thermomagnetic convection
Document 3:::
Flux describes any effect that appears to pass or travel (whether it actually moves or not) through a surface or substance. Flux is a concept in applied mathematics and vector calculus which has many applications to physics. For transport phenomena, flux is a vector quantity, describing the magnitude and direction of the flow of a substance or property. In vector calculus flux is a scalar quantity, defined as the surface integral of the perpendicular component of a vector field over a surface.
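In symbols (standard vector-calculus notation added for concreteness, not drawn verbatim from this source), the surface-integral definition reads

\Phi = \iint_S \mathbf{F} \cdot \hat{\mathbf{n}} \, dA

where \mathbf{F} is the vector field, S is the surface, \hat{\mathbf{n}} is the unit normal, and dA is the area element. The transport definition instead treats \mathbf{F} itself, an amount per unit area per unit time, as the flux.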
Terminology
The word flux comes from Latin: fluxus means "flow", and fluere is "to flow". As fluxion, this term was introduced into differential calculus by Isaac Newton.
The concept of heat flux was a key contribution of Joseph Fourier, in the analysis of heat transfer phenomena. His seminal treatise Théorie analytique de la chaleur (The Analytical Theory of Heat), defines fluxion as a central quantity and proceeds to derive the now well-known expressions of flux in terms of temperature differences across a slab, and then more generally in terms of temperature gradients or differentials of temperature, across other geometries. One could argue, based on the work of James Clerk Maxwell, that the transport definition precedes the definition of flux used in electromagnetism. The specific quote from Maxwell is:
According to the transport definition, flux may be a single vector, or it may be a vector field / function of position. In the latter case flux can readily be integrated over a surface. By contrast, according to the electromagnetism definition, flux is the integral over a surface; it makes no sense to integrate a second-definition flux for one would be integrating over a surface twice. Thus, Maxwell's quote only makes sense if "flux" is being used according to the transport definition (and furthermore is a vector field rather than single vector). This is ironic because Maxwell was one of the major developers of what we now call "electric flux" and "magnetic flux" accor
Document 4:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What results when a warm air mass runs into a cold air mass?
A. warm front
B. cool front
C. dry front
D. rough front
Answer:
|
|
sciq-974
|
multiple_choice
|
Heterotrophic organisms use organic compounds, usually from other organisms, as a source of what basic element of life?
|
[
"hydrogen",
"carbon",
"oxygen",
"monoxide"
] |
B
|
Relevant Documents:
Document 0:::
Heterotrophic nutrition is a mode of nutrition in which organisms depend upon other organisms for food to survive. Unlike green plants, they cannot make their own food. Heterotrophic organisms have to take in all the organic substances they need to survive.
All animals, certain types of fungi, and non-photosynthesizing plants are heterotrophic. In contrast, green plants, red algae, brown algae, and cyanobacteria are all autotrophs, which use photosynthesis to produce their own food from sunlight. Some fungi may be saprotrophic, meaning they will extracellularly secrete enzymes onto their food to be broken down into smaller, soluble molecules which can diffuse back into the fungus.
Description
All eukaryotes except for green plants and algae are unable to manufacture their own food: they obtain food from other organisms. This mode of nutrition is also known as heterotrophic nutrition.
All heterotrophs (except blood and gut parasites) have to convert solid food into soluble compounds capable of being absorbed (digestion). The soluble products of digestion are then broken down to release energy (respiration). All heterotrophs depend on autotrophs for their nutrition. Heterotrophic organisms exhibit only four types of nutrition.
Document 1:::
The molecules that an organism uses as its carbon source for generating biomass are referred to as "carbon sources" in biology. Carbon sources can be organic or inorganic. Heterotrophs must use organic molecules as both their carbon source and their energy source, in contrast to autotrophs, which can use inorganic materials as a carbon source together with an abiotic source of energy, such as light (photoautotrophs) or inorganic chemical energy (chemolithotrophs).
The carbon cycle, which begins with an inorganic carbon source such as carbon dioxide and progresses through the carbon fixation process, includes the biological use of carbon as one of its components.
Types of organism by carbon source
Heterotrophs
Autotrophs
Document 2:::
In ecology, primary production is the synthesis of organic compounds from atmospheric or aqueous carbon dioxide. It principally occurs through the process of photosynthesis, which uses light as its source of energy, but it also occurs through chemosynthesis, which uses the oxidation or reduction of inorganic chemical compounds as its source of energy. Almost all life on Earth relies directly or indirectly on primary production. The organisms responsible for primary production are known as primary producers or autotrophs, and form the base of the food chain. In terrestrial ecoregions, these are mainly plants, while in aquatic ecoregions algae predominate in this role. Ecologists distinguish primary production as either net or gross, the former accounting for losses to processes such as cellular respiration, the latter not.
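The net/gross distinction in the last sentence is usually written as a simple balance; this is standard ecology notation added for clarity, not a formula from this document:

NPP = GPP - R_a

where GPP is gross primary production, R_a is the autotrophs' own respiration, and NPP is the net primary production left over as new biomass.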
Overview
Primary production is the production of chemical energy in organic compounds by living organisms. The main source of this energy is sunlight, but a minute fraction of primary production is driven by lithotrophic organisms using the chemical energy of inorganic molecules. Regardless of its source, this energy is used to synthesize complex organic molecules from simpler inorganic compounds such as carbon dioxide (CO2) and water (H2O). The following two equations are simplified representations of photosynthesis (top) and (one form of) chemosynthesis (bottom):
CO2 + H2O + light → CH2O + O2
CO2 + O2 + 4 H2S → CH2O + 4 S + 3 H2O
In both cases, the end point is a polymer of reduced carbohydrate, (CH2O)n, typically molecules such as glucose or other sugars. These relatively simple molecules may be then used to further synthesise more complicated molecules, including proteins, complex carbohydrates, lipids, and nucleic acids, or be respired to perform work. Consumption of primary producers by heterotrophic organisms, such as animals, then transfers these organic molecules (and the energy stored within them) up the food web, fueling all of the Earth'
Document 3:::
A nutrient is a substance used by an organism to survive, grow, and reproduce. The requirement for dietary nutrient intake applies to animals, plants, fungi, and protists. Nutrients can be incorporated into cells for metabolic purposes or excreted by cells to create non-cellular structures, such as hair, scales, feathers, or exoskeletons. Some nutrients can be metabolically converted to smaller molecules in the process of releasing energy, such as for carbohydrates, lipids, proteins, and fermentation products (ethanol or vinegar), leading to end-products of water and carbon dioxide. All organisms require water. Essential nutrients for animals are the energy sources, some of the amino acids that are combined to create proteins, a subset of fatty acids, vitamins and certain minerals. Plants require more diverse minerals absorbed through roots, plus carbon dioxide and oxygen absorbed through leaves. Fungi live on dead or living organic matter and meet nutrient needs from their host.
Different types of organisms have different essential nutrients. Ascorbic acid (vitamin C) is essential, meaning it must be consumed in sufficient amounts, to humans and some other animal species, but some animals and plants are able to synthesize it. Nutrients may be organic or inorganic: organic compounds include most compounds containing carbon, while all other chemicals are inorganic. Inorganic nutrients include nutrients such as iron, selenium, and zinc, while organic nutrients include, among many others, energy-providing compounds and vitamins.
A classification used primarily to describe nutrient needs of animals divides nutrients into macronutrients and micronutrients. Consumed in relatively large amounts (grams or ounces), macronutrients (carbohydrates, fats, proteins, water) are primarily used to generate energy or to incorporate into tissues for growth and repair. Micronutrients are needed in smaller amounts (milligrams or micrograms); they have subtle biochemical and physiologi
Document 4:::
Biotic material or biological derived material is any material that originates from living organisms. Most such materials contain carbon and are capable of decay.
The earliest life on Earth arose at least 3.5 billion years ago. Earlier physical evidence of life includes graphite, a biogenic substance, in 3.7 billion-year-old metasedimentary rocks discovered in southwestern Greenland, as well as "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia. Earth's biodiversity has expanded continually except when interrupted by mass extinctions. Although scholars estimate that over 99 percent of all species of life (over five billion) that ever lived on Earth are extinct, there are still an estimated 10–14 million extant species, of which about 1.2 million have been documented and over 86% have not yet been described.
Examples of biotic materials are wood, straw, humus, manure, bark, crude oil, cotton, spider silk, chitin, fibrin, and bone.
The use of biotic materials, and processed biotic materials (bio-based material) as alternative natural materials, over synthetics is popular with those who are environmentally conscious because such materials are usually biodegradable, renewable, and the processing is commonly understood and has minimal environmental impact. However, not all biotic materials are used in an environmentally friendly way, such as those that require high levels of processing, are harvested unsustainably, or are used to produce carbon emissions.
When the source of the recently living material has little importance to the product produced, such as in the production of biofuels, biotic material is simply called biomass. Many fuel sources may have biological sources, and may be divided roughly into fossil fuels, and biofuel.
In soil science, biotic material is often referred to as organic matter. Biotic materials in soil include glomalin, Dopplerite and humic acid. Some biotic material may not be considered to be organic matte
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Heterotrophic organisms use organic compounds, usually from other organisms, as a source of what basic element of life?
A. hydrogen
B. carbon
C. oxygen
D. monoxide
Answer:
|
|
sciq-4662
|
multiple_choice
|
In type 2 diabetes, body cells do not respond to normal amounts of what hormone?
|
[
"glucose",
"hemoglobin",
"insulin",
"estrogen"
] |
C
|
Relevant Documents:
Document 0:::
The insulin transduction pathway is a biochemical pathway by which insulin increases the uptake of glucose into fat and muscle cells and reduces the synthesis of glucose in the liver and hence is involved in maintaining glucose homeostasis. This pathway is also influenced by fed versus fasting states, stress levels, and a variety of other hormones.
When carbohydrates are consumed, digested, and absorbed the pancreas senses the subsequent rise in blood glucose concentration and releases insulin to promote uptake of glucose from the bloodstream. When insulin binds to the insulin receptor, it leads to a cascade of cellular processes that promote the usage or, in some cases, the storage of glucose in the cell. The effects of insulin vary depending on the tissue involved, e.g., insulin is most important in the uptake of glucose by muscle and adipose tissue.
This insulin signal transduction pathway is composed of trigger mechanisms (e.g., autophosphorylation mechanisms) that serve as signals throughout the cell. The body also has counter-regulatory mechanisms that stop the secretion of insulin beyond a certain limit, namely the hormones glucagon and epinephrine. The process of the regulation of blood glucose (also known as glucose homeostasis) also exhibits oscillatory behavior.
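A minimal Python sketch of this negative-feedback loop follows; the two-variable model, its linear forms, and every parameter value are invented for illustration and are not a physiological model from this source.

# Toy glucose-insulin feedback: glucose G stimulates insulin I,
# and insulin drives glucose uptake. Euler integration, arbitrary units.
def simulate(G=120.0, I=10.0, steps=600, dt=0.1):
    history = []
    for _ in range(steps):
        dG = 2.0 - 0.0005 * I * G                 # constant input minus insulin-dependent uptake
        dI = 0.1 * max(G - 90.0, 0.0) - 0.2 * I   # secretion above a glucose threshold, first-order clearance
        G += dG * dt
        I += dI * dt
        history.append((round(G, 1), round(I, 2)))
    return history

trace = simulate()
print(trace[0], trace[-1])  # the (glucose, insulin) pair relaxes toward a steady state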
On a pathological basis, this topic is crucial to understanding certain disorders in the body such as diabetes, hyperglycemia and hypoglycemia.
Transduction pathway
The functioning of a signal transduction pathway is based on extracellular signaling that in turn creates a response which causes other subsequent responses, hence creating a chain reaction, or cascade. During the course of signaling, the cell uses each response for accomplishing some kind of purpose along the way. The insulin secretion mechanism is a common example of a signal transduction pathway.
Insulin is produced by the pancreas in a region called Islets of Langerhans. In the islets of Langerha
Document 1:::
Diabetes mellitus (DM) is a type of metabolic disease characterized by hyperglycemia. It is caused by defective insulin secretion, impaired insulin action, or both. Prolonged high blood glucose leads to dysfunction of a variety of tissues.
Type 2 diabetes is a progressive condition in which the body becomes resistant to the normal effects of insulin and/or gradually loses the capacity to produce enough insulin in the pancreas.
Pre-diabetes means that the blood sugar level is higher than normal but not yet high enough to be type 2 diabetes.
Gestational diabetes is a condition in which a woman without diabetes develops high blood sugar levels during pregnancy.
Type 2 diabetes mellitus and prediabetes are associated with changes in levels of metabolic markers; these markers could serve as potential prognostic or therapeutic targets for patients with prediabetes or type 2 diabetes mellitus.
Metabolic markers
Oxytocin (OXT)
Omentin
Endothelin-1
Nesfatin-1
Irisin
Betatrophin
Hepatocyte growth factor (HGF)
Fibroblast growth factor
-Biomarkers with insulin-sensitizing properties (irisin, omentin, oxytocin)
-Biomarkers of metabolic dysfunction (HGF, Nesfatin and Betatrophin)
Biomarkers with insulin-sensitizing properties
Oxytocin
Oxytocin (OXT), a hormone most commonly associated with labor and lactation, may have a wide variety of physiological and pathological functions, which makes Oxytocin and its receptor potential targets for drug therapy.
OXT may have positive metabolic effects; this is based on the change in glucose metabolism, lipid profile, and insulin sensitivity. It may modify glucose uptake and insulin sensitivity both through direct and indirect effects. It may also cause regenerative changes in diabetic pancreatic islet cells. So, the activation of the OXT receptor pathway by infusion of OXT, OXT analogues, or OXT agonists may represent a promising approach for the management of obesity and related metabolic d
Document 2:::
The insulin concentration in blood increases after meals and gradually returns to basal levels during the next 1–2 hours. However, the basal insulin level is not stable. It oscillates with a regular period of 3-6 min. After a meal the amplitude of these oscillations increases but the periodicity remains constant. The oscillations are believed to be important for insulin sensitivity by preventing downregulation of insulin receptors in target cells. Such downregulation underlies insulin resistance, which is common in type 2 diabetes. It would therefore be advantageous to administer insulin to diabetic patients in a manner mimicking the natural oscillations. The insulin oscillations are generated by pulsatile release of the hormone from the pancreas. Insulin originates from beta cells located in the islets of Langerhans. Since each islet contains up to 2000 beta cells and there are one million islets in the pancreas it is apparent that pulsatile secretion requires sophisticated synchronization both within and among the islets of Langerhans.
Mechanism
Pulsatile insulin secretion from individual beta cells is driven by oscillation of the calcium concentration in the cells. In beta cells lacking contact, the periodicity of these oscillations is rather variable (2-10 min). However, within an islet of Langerhans the oscillations become synchronized by electrical coupling between closely located beta cells that are connected by gap junctions, and the periodicity is more uniform (3-6 min).
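The coupling described here can be caricatured with a Kuramoto-style phase model; the following Python sketch is a toy under stated assumptions: the sinusoidal coupling, the constant K, and the 2-10 min period spread are illustrative choices, not measured beta-cell parameters.

import math, random

# Kuramoto-style toy: N "beta cells" with different natural periods
# lock to a common rhythm once the coupling K is strong enough.
random.seed(0)
N, K, dt, steps = 50, 4.0, 0.01, 5000
omega = [2 * math.pi / random.uniform(2.0, 10.0) for _ in range(N)]  # rad/min
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]

for _ in range(steps):
    mean_sin = sum(math.sin(t) for t in theta) / N
    mean_cos = sum(math.cos(t) for t in theta) / N
    psi = math.atan2(mean_sin, mean_cos)   # mean phase of the population
    r = math.hypot(mean_cos, mean_sin)     # order parameter: 1 means full synchrony
    theta = [t + (w + K * r * math.sin(psi - t)) * dt
             for t, w in zip(theta, omega)]

print(f"order parameter r = {r:.2f}")  # a high r indicates a shared rhythm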
Pulsatile insulin release from the entire pancreas requires that secretion is synchronized between 1 million islets within a 25 cm long organ. Much like the cardiac pacemaker, the pancreas is connected to cranial nerve 10, and others, but the oscillations are accomplished by intrapancreatic neurons and do not require neural input from the brain. It is not entirely clear which neural factors account for this synchronization, but ATP as well as the gases NO and CO may be involved. The effe
Document 3:::
The following is a list of hormones found in Homo sapiens. Spelling is not uniform for many hormones. For example, current North American and international usage uses estrogen and gonadotropin, while British usage retains the Greek digraph in oestrogen and favours the earlier spelling gonadotrophin.
Hormone listing
Steroid
Document 4:::
Neutral Protamine Hagedorn (NPH) insulin, also known as isophane insulin, is an intermediate-acting insulin given to help control blood sugar levels in people with diabetes. It is used by injection under the skin once to twice a day. Onset of effects is typically in 90 minutes and they last for 24 hours. Versions are available that come premixed with a short-acting insulin, such as regular insulin.
The common side effect is low blood sugar. Other side effects may include pain or skin changes at the sites of injection, low blood potassium, and allergic reactions. Use during pregnancy is relatively safe for the fetus. NPH insulin is made by mixing regular insulin and protamine in exact proportions with zinc and phenol such that a neutral-pH is maintained and crystals form. There are human and pig insulin based versions.
Protamine insulin was first created in 1936 and NPH insulin in 1946. It is on the World Health Organization's List of Essential Medicines. NPH is an abbreviation for "neutral protamine Hagedorn". In 2020, insulin isophane was the 221st most commonly prescribed medication in the United States, with more than 2million prescriptions. In 2020, the combination of human insulin with insulin isophane was the 246th most commonly prescribed medication in the United States, with more than 1million prescriptions.
Medical uses
NPH insulin is cloudy and has an onset of 1–3 hours. Its peak is 6–8 hours and its duration is up to 24 hours.
It has an intermediate duration of action, meaning longer than that of regular and rapid-acting insulin, and shorter than long-acting insulins (ultralente, glargine or detemir). A recent Cochrane systematic review compared the effects of NPH insulin to other insulin analogues (insulin detemir, insulin glargine, insulin degludec) in both children and adults with Type 1 diabetes. Insulin detemir appeared to provide a lower risk of severe hyperglycemia compared to NPH insulin; however, this finding was inconsistent across included stu
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In type 2 diabetes, body cells do not respond to normal amounts of what hormone?
A. glucose
B. hemoglobin
C. insulin
D. estrogen
Answer:
|
|
sciq-2335
|
multiple_choice
|
What is the name of a reactant in an enzymatic reaction?
|
[
"Scar",
"substrate",
"membrane",
"tissues"
] |
B
|
Relevant Documents:
Document 0:::
In molecular biology, biosynthesis is a multi-step, enzyme-catalyzed process where substrates are converted into more complex products in living organisms. In biosynthesis, simple compounds are modified, converted into other compounds, or joined to form macromolecules. This process often consists of metabolic pathways. Some of these biosynthetic pathways are located within a single cellular organelle, while others involve enzymes that are located within multiple cellular organelles. Examples of these biosynthetic pathways include the production of lipid membrane components and nucleotides. Biosynthesis is usually synonymous with anabolism.
The prerequisite elements for biosynthesis include: precursor compounds, chemical energy (e.g. ATP), and catalytic enzymes which may need coenzymes (e.g. NADH, NADPH). These elements create monomers, the building blocks for macromolecules. Some important biological macromolecules include: proteins, which are composed of amino acid monomers joined via peptide bonds, and DNA molecules, which are composed of nucleotides joined via phosphodiester bonds.
Properties of chemical reactions
Biosynthesis occurs due to a series of chemical reactions. For these reactions to take place, the following elements are necessary:
Precursor compounds: these compounds are the starting molecules or substrates in a reaction. These may also be viewed as the reactants in a given chemical process.
Chemical energy: chemical energy can be found in the form of high energy molecules. These molecules are required for energetically unfavorable reactions. Furthermore, the hydrolysis of these compounds drives a reaction forward. High energy molecules, such as ATP, have three phosphates. Often, the terminal phosphate is split off during hydrolysis and transferred to another molecule.
Catalysts: these may be, for example, metal ions or coenzymes; they catalyze a reaction by increasing the rate of the reaction and lowering the activation energy (an effect quantified in the sketch below).
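A hedged numerical sketch of that rate effect, using the Arrhenius equation with illustrative (not source-derived) activation energies:

import math

R = 8.314   # gas constant, J/(mol*K)
T = 310.0   # K, roughly physiological temperature

def rate_constant(A, Ea_kJ):
    # Arrhenius equation: k = A * exp(-Ea / (R * T))
    return A * math.exp(-Ea_kJ * 1000.0 / (R * T))

uncatalyzed = rate_constant(A=1e13, Ea_kJ=75.0)
catalyzed = rate_constant(A=1e13, Ea_kJ=50.0)  # catalyst lowers the barrier
print(f"rate enhancement: {catalyzed / uncatalyzed:.1e}")  # about 1.6e4 at 310 K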
In the sim
Document 1:::
This is a list of articles that describe particular biomolecules or types of biomolecules.
A
For substances with an A- or α- prefix such as
α-amylase, please see the parent page (in this case Amylase).
A23187 (Calcimycin, Calcium Ionophore)
Abamectine
Abietic acid
Acetic acid
Acetylcholine
Actin
Actinomycin D
Adenine
Adenosine
Adenosine diphosphate (ADP)
Adenosine monophosphate (AMP)
Adenosine triphosphate (ATP)
Adenylate cyclase
Adiponectin
Adonitol
Adrenaline, epinephrine
Adrenocorticotropic hormone (ACTH)
Aequorin
Aflatoxin
Agar
Alamethicin
Alanine
Albumins
Aldosterone
Aleurone
Alpha-amanitin
Alpha-MSH (Melaninocyte stimulating hormone)
Allantoin
Allethrin
α-Amanitin, see Alpha-amanitin
Amino acid
Amylase (also see α-amylase)
Anabolic steroid
Anandamide (ANA)
Androgen
Anethole
Angiotensinogen
Anisomycin
Antidiuretic hormone (ADH)
Anti-Müllerian hormone (AMH)
Arabinose
Arginine
Argonaute
Ascomycin
Ascorbic acid (vitamin C)
Asparagine
Aspartic acid
Asymmetric dimethylarginine
ATP synthase
Atrial-natriuretic peptide (ANP)
Auxin
Avidin
Azadirachtin A – C35H44O16
B
Bacteriocin
Beauvericin
beta-Hydroxy beta-methylbutyric acid
beta-Hydroxybutyric acid
Bicuculline
Bilirubin
Biopolymer
Biotin (Vitamin H)
Brefeldin A
Brassinolide
Brucine
Butyric acid
C
Document 2:::
Reactions
The reactions of the MEP pathway are as follows, taken primarily from Eisenreich and co-workers, excep
Document 3:::
The metabolome refers to the complete set of small-molecule chemicals found within a biological sample. The biological sample can be a cell, a cellular organelle, an organ, a tissue, a tissue extract, a biofluid or an entire organism. The small molecule chemicals found in a given metabolome may include both endogenous metabolites that are naturally produced by an organism (such as amino acids, organic acids, nucleic acids, fatty acids, amines, sugars, vitamins, co-factors, pigments, antibiotics, etc.) as well as exogenous chemicals (such as drugs, environmental contaminants, food additives, toxins and other xenobiotics) that are not naturally produced by an organism.
In other words, there is both an endogenous metabolome and an exogenous metabolome. The endogenous metabolome can be further subdivided to include a "primary" and a "secondary" metabolome (particularly when referring to plant or microbial metabolomes). A primary metabolite is directly involved in normal growth, development, and reproduction. A secondary metabolite is not directly involved in those processes, but usually has an important ecological function. Secondary metabolites may include pigments, antibiotics or waste products derived from partially metabolized xenobiotics. The study of the metabolome is called metabolomics.
Origins
The word metabolome appears to be a blending of the words "metabolite" and "chromosome". It was constructed to imply that metabolites are indirectly encoded by genes or act on genes and gene products. The term "metabolome" was first used in 1998 and was likely coined to match with existing biological terms referring to the complete set of genes (the genome), the complete set of proteins (the proteome) and the complete set of transcripts (the transcriptome). The first book on metabolomics was published in 2003. The first journal dedicated to metabolomics (titled simply "Metabolomics") was launched in 2005 and is currently edited by Prof. Roy Goodacre. Some of the m
Document 4:::
Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs, as well as organism structure and function. Biochemistry is closely related to molecular biology, which is the study of the molecular mechanisms of biological phenomena.
Much of biochemistry deals with the structures, bonding, functions, and interactions of biological macromolecules, such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers, with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the name of a reactant in an enzymatic reaction?
A. Scar
B. substrate
C. membrane
D. tissues
Answer:
|
|
sciq-7467
|
multiple_choice
|
Ecosystem dynamics include more than the flow of energy and recycling of matter. Ecosystems are also dynamic because they?
|
[
"recreate exactly alike",
"never move",
"stay the same",
"change through time"
] |
D
|
Relevant Documents:
Document 0:::
Earth system science (ESS) is the application of systems science to the Earth. In particular, it considers interactions and 'feedbacks', through material and energy fluxes, between the Earth's sub-systems' cycles, processes and "spheres"—atmosphere, hydrosphere, cryosphere, geosphere, pedosphere, lithosphere, biosphere, and even the magnetosphere—as well as the impact of human societies on these components. At its broadest scale, Earth system science brings together researchers across both the natural and social sciences, from fields including ecology, economics, geography, geology, glaciology, meteorology, oceanography, climatology, paleontology, sociology, and space science. Like the broader subject of systems science, Earth system science assumes a holistic view of the dynamic interaction between the Earth's spheres and their many constituent subsystems fluxes and processes, the resulting spatial organization and time evolution of these systems, and their variability, stability and instability. Subsets of Earth System science include systems geology and systems ecology, and many aspects of Earth System science are fundamental to the subjects of physical geography and climate science.
Definition
The Science Education Resource Center, Carleton College, offers the following description: "Earth System science embraces chemistry, physics, biology, mathematics and applied sciences in transcending disciplinary boundaries to treat the Earth as an integrated system. It seeks a deeper understanding of the physical, chemical, biological and human interactions that determine the past, current and future states of the Earth. Earth System science provides a physical basis for understanding the world in which we live and upon which humankind seeks to achieve sustainability".
Earth System science has articulated four overarching, definitive and critically important features of the Earth System, which include:
Variability: Many of the Earth System's natural 'modes' and variab
Document 1:::
The Institute for Biodiversity and Ecosystem Dynamics (IBED) is one of the ten research institutes of the Faculty of Science of the Universiteit van Amsterdam. IBED employs more than 100 researchers, with PhD students and Postdocs forming a majority, and 30 supporting staff. The total annual budget is around 10 m€, of which more than 40 per cent comes from external grants and contracts. The main output consist of publications in peer reviewed journals and books (on average 220 per year). Each year around 15 PhD students defend their thesis and obtain their degree from the Universiteit van Amsterdam. The institute is managed by a general director appointed by the Dean of the Faculty for a period of five years, assisted by a business manager.
Mission statement
The mission of the Institute for Biodiversity and Ecosystem Dynamics is to increase our insights in the functioning and biodiversity of ecosystems in all their complexity. Knowledge of the interactions between living organisms and processes in their physical and chemical environment is essential for a better understanding of the dynamics of ecosystems at different temporal and spatial scales.
Organization of IBED Research
IBED research is organized in the following three themes:
Theme I: Biodiversity and Evolution
The main question of Theme I research is how patterns in biodiversity can be explained from underlying processes: speciation and extinction, dispersal and the (dis)appearance of geographical barriers, reproductive isolation and hybridisation of taxa. Modern reconstructions of the history of life on earth rely heavily on analyses of DNA data that contain the footprints of the past. Research related to human-made effects on biodiversity includes the identification of endangered biodiversity hotspots affected by global change, potential risks of an escape of transgenes from crops to wild species, and the consequences of habitat fragmentation for the viability and genetic diversity of populations and
Document 2:::
Pattern-oriented modeling (POM) is an approach to bottom-up complex systems analysis that was developed to model complex ecological and agent-based systems. A goal of POM is to make ecological modeling more rigorous and comprehensive.
A traditional ecosystem model attempts to approximate the real system as closely as possible. POM proponents posit that an ecosystem is so information-rich that an ecosystem model will inevitably either leave out relevant information or become over-parameterized and lose predictive power. Through a focus on only the relevant patterns in the real system, POM offers a meaningful alternative to the traditional approach.
In an attempt to mimic the scientific method, POM requires the researcher to begin with a pattern found in the real system, posit hypotheses to explain the pattern, and then develop predictions that can be tested. A model used to determine the original pattern may not be used to test the researcher's predictions. Through this focus on the pattern, the model can be constructed to include only information relevant to the question at hand.
POM is also characterized by an effort to identify the appropriate temporal and spatial scale at which to study a pattern, and to avoid the assumption that a single process might explain a pattern at multiple temporal or spatial scales. It does, however, offer the opportunity to look explicitly at how processes at multiple scales might be driving a particular pattern.
The trade-off between model complexity and payoff can be considered in the framework of the Medawar zone. The model is considered too simple if it addresses a single problem (e.g., the explanation behind a single pattern), whereas it will be considered too complex if it incorporates all the available biological data. The Medawar zone, where the payoff in what is learned is greatest, is at an intermediate level of model complexity.
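A minimal sketch, in Python, of the POM screening loop described above. The logistic population model, the target pattern (a long-run mean abundance near a hypothetical field estimate of 50), and all parameter values are invented here for illustration; POM itself prescribes no particular model or pattern.

import random

random.seed(42)  # reproducible toy run

def simulate(growth_rate, carrying_capacity, steps=200):
    # Toy logistic population model with demographic noise (an assumed
    # stand-in for whatever process model the researcher hypothesizes).
    n = 10.0
    trajectory = []
    for _ in range(steps):
        n += growth_rate * n * (1.0 - n / carrying_capacity) + random.gauss(0.0, 1.0)
        n = max(n, 0.0)
        trajectory.append(n)
    return trajectory

def reproduces_pattern(trajectory, target_mean=50.0, tolerance=10.0):
    # Accept a candidate only if it reproduces the observed pattern: here,
    # a long-run mean abundance near the (hypothetical) field estimate.
    tail = trajectory[len(trajectory) // 2:]
    return abs(sum(tail) / len(tail) - target_mean) < tolerance

# Screen candidate parameterizations against the pattern, mimicking POM's
# use of observed patterns to reject implausible hypotheses.
candidates = [(r, k) for r in (0.1, 0.5, 1.0) for k in (25, 50, 100)]
accepted = [(r, k) for (r, k) in candidates if reproduces_pattern(simulate(r, k))]
print("parameterizations consistent with the pattern:", accepted)

In a real POM study, the accepted candidates would then be asked to reproduce further, independent patterns, ideally observed at other temporal or spatial scales.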
Usage
Pattern-oriented modeling has been used to test a priori hypotheses on how he
Document 3:::
Microbial population biology is the application of the principles of population biology to microorganisms.
Distinguishing from other biological disciplines
Microbial population biology, in practice, is the application of population ecology and population genetics toward understanding the ecology and evolution of bacteria, archaebacteria, microscopic fungi (such as yeasts), additional microscopic eukaryotes (e.g., "protozoa" and algae), and viruses.
Microbial population biology also encompasses the evolution and ecology of community interactions (community ecology) between microorganisms, including microbial coevolution and predator-prey interactions. In addition, microbial population biology considers microbial interactions with more macroscopic organisms (e.g., host-parasite interactions), though strictly this should be more from the perspective of the microscopic rather than the macroscopic organism. A good deal of microbial population biology may be described also as microbial evolutionary ecology. On the other hand, typically microbial population biologists (unlike microbial ecologists) are less concerned with questions of the role of microorganisms in ecosystem ecology, which is the study of nutrient cycling and energy movement between biotic as well as abiotic components of ecosystems.
Microbial population biology can include aspects of molecular evolution or phylogenetics. Strictly, however, these emphases should be employed toward understanding issues of microbial evolution and ecology rather than as a means of understanding more universal truths applicable to both microscopic and macroscopic organisms. The microorganisms in such endeavors consequently should be recognized as organisms rather than simply as molecular or evolutionary reductionist model systems. Thus, the study of RNA in vitro evolution is not microbial population biology and nor is the in silico generation of phylogenies of otherwise non-microbial sequences, even if aspects of either may
Document 4:::
Plant ecology is a subdiscipline of ecology that studies the distribution and abundance of plants, the effects of environmental factors upon the abundance of plants, and the interactions among plants and between plants and other organisms. Examples of these are the distribution of temperate deciduous forests in North America, the effects of drought or flooding upon plant survival, and competition among desert plants for water, or effects of herds of grazing animals upon the composition of grasslands.
A global overview of the Earth's major vegetation types is provided by O.W. Archibold. He recognizes 11 major vegetation types: tropical forests, tropical savannas, arid regions (deserts), Mediterranean ecosystems, temperate forest ecosystems, temperate grasslands, coniferous forests, tundra (both polar and high mountain), terrestrial wetlands, freshwater ecosystems and coastal/marine systems. This breadth of topics shows the complexity of plant ecology, since it includes plants from floating single-celled algae up to large canopy-forming trees.
One feature that defines plants is photosynthesis. Photosynthesis is the chemical process by which plants create glucose and oxygen, and it is vital for plant life. One of the most important aspects of plant ecology is the role plants have played in creating the oxygenated atmosphere of Earth, an event that occurred some 2 billion years ago. It can be dated by the deposition of banded iron formations, distinctive sedimentary rocks with large amounts of iron oxide. At the same time, plants began removing carbon dioxide from the atmosphere, thereby initiating the process of controlling Earth's climate. A long-term trend of the Earth has been toward increasing oxygen and decreasing carbon dioxide, and many other events in the Earth's history, like the first movement of life onto land, are likely tied to this sequence of events.
One of the early classic books on plant ecology was written by J.E. Weaver and F.E. Clements. It
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Ecosystem dynamics include more than the flow of energy and recycling of matter. Ecosystems are also dynamic because they?
A. recreate exactly alike
B. never move
C. stay the same
D. change through time
Answer:
|
|
sciq-10641
|
multiple_choice
|
Plants are complex organisms with tissues organized into what?
|
[
"systems",
"carbons",
"organs",
"families"
] |
C
|
Relavent Documents:
Document 0:::
Plant life-form schemes constitute a way of classifying plants alternatively to the ordinary species-genus-family scientific classification. In colloquial speech, plants may be classified as trees, shrubs, herbs (forbs and graminoids), etc. The scientific use of life-form schemes emphasizes plant function in the ecosystem and that the same function or "adaptedness" to the environment may be achieved in a number of ways, i.e. plant species that are closely related phylogenetically may have widely different life-form, for example Adoxa moschatellina and Sambucus nigra are from the same family, but the former is a small herbaceous plant and the latter is a shrub or tree. Conversely, unrelated species may share a life-form through convergent evolution.
While taxonomic classification is concerned with the production of natural classifications ("natural" being understood either philosophically, as in pre-evolutionary thinking, or phylogenetically, as non-polyphyletic), plant life-form classifications use criteria other than naturalness, such as morphology, physiology and ecology.
Life-form and growth-form are essentially synonymous concepts, despite attempts to restrict the meaning of growth-form to types differing in shoot architecture. Most life form schemes are concerned with vascular plants only. Plant construction types may be used in a broader sense to encompass planktophytes, benthophytes (mainly algae) and terrestrial plants.
A popular life-form scheme is the Raunkiær system.
History
One of the earliest attempts to classify the life-forms of plants and animals was made by Aristotle, whose writings are lost. His pupil, Theophrastus, in Historia Plantarum (c. 350 BC), was the first who formally recognized plant habits: trees, shrubs and herbs.
Some earlier authors (e.g., Humboldt, 1806) did classify species according to physiognomy, but were explicit about the entities being merely practical classes without any relation to plant function. A marked exception was
Document 1:::
In biology, tissue is a historically derived biological organizational level between cells and a complete organ. A tissue is therefore often thought of as an assembly of similar cells and their extracellular matrix from the same embryonic origin that together carry out a specific function. Organs are then formed by the functional grouping together of multiple tissues.
Biological organisms follow this hierarchy:
Cells < Tissue < Organ < Organ System < Organism
The English word "tissue" derives from the French word "tissu", the past participle of the verb tisser, "to weave".
The study of tissues is known as histology or, in connection with disease, as histopathology. Xavier Bichat is considered the "Father of Histology". Plant histology is studied in both plant anatomy and physiology. The classical tools for studying tissues are the paraffin block in which tissue is embedded and then sectioned, the histological stain, and the optical microscope. Developments in electron microscopy, immunofluorescence, and the use of frozen tissue-sections have enhanced the detail that can be observed in tissues. With these tools, the classical appearances of tissues can be examined in health and disease, enabling considerable refinement of medical diagnosis and prognosis.
Plant tissue
In plant anatomy, tissues are categorized broadly into three tissue systems: the epidermis, the ground tissue, and the vascular tissue.
Epidermis – Cells forming the outer surface of the leaves and of the young plant body.
Vascular tissue – The primary components of vascular tissue are the xylem and phloem. These transport fluids and nutrients internally.
Ground tissue – Ground tissue is less differentiated than other tissues. Ground tissue manufactures nutrients by photosynthesis and stores reserve nutrients.
Plant tissues can also be divided differently into two types:
Meristematic tissues
Permanent tissues.
Meristematic tissue
Meristematic tissue consists of actively dividing cell
Document 2:::
Plant anatomy or Phytotomy is the general term for the study of the internal structure of plants. Originally it included plant morphology, the description of the physical form and external structure of plants, but since the mid-20th century plant anatomy has been considered a separate field referring only to internal plant structure. Plant anatomy is now frequently investigated at the cellular level, and often involves the sectioning of tissues and microscopy.
Structural divisions
Some studies of plant anatomy use a systems approach, organized on the basis of the plant's activities, such as nutrient transport, flowering, pollination, embryogenesis or seed development. Others are more classically divided into the following structural categories:
Flower anatomy, including study of the Calyx, Corolla, Androecium, and Gynoecium
Leaf anatomy, including study of the Epidermis, stomata and Palisade cells
Stem anatomy, including Stem structure and vascular tissues, buds and shoot apex
Fruit/Seed anatomy, including structure of the Ovule, Seed, Pericarp and Accessory fruit
Wood anatomy, including structure of the Bark, Cork, Xylem, Phloem, Vascular cambium, Heartwood and sapwood and branch collar
Root anatomy, including structure of the Root, root tip, endodermis
History
About 300 BC Theophrastus wrote a number of plant treatises, only two of which survive, Enquiry into Plants (Περὶ φυτῶν ἱστορία), and On the Causes of Plants (Περὶ φυτῶν αἰτιῶν). He developed concepts of plant morphology and classification, which did not withstand the scientific scrutiny of the Renaissance.
A Swiss physician and botanist, Gaspard Bauhin, introduced binomial nomenclature into plant taxonomy. He published Pinax theatri botanici in 1596, which was the first to use this convention for naming of species. His criteria for classification included natural relationships, or 'affinities', which in many cases were structural.
It was in the late 1600s that plant anatomy became refined int
Document 3:::
Plant ontology (PO) is a collection of ontologies developed by the Plant Ontology Consortium. These ontologies describe anatomical structures and growth and developmental stages across Viridiplantae. The PO is intended for multiple applications, including genetics, genomics, phenomics, and development, taxonomy and systematics, semantic applications and education.
Project Members
Oregon State University
New York Botanical Garden
L. H. Bailey Hortorium at Cornell University
Ensembl
SoyBase
SSWAP
SGN
Gramene
The Arabidopsis Information Resource (TAIR)
MaizeGDB
University of Missouri at St. Louis
Missouri Botanical Garden
See also
Generic Model Organism Database
Open Biomedical Ontologies
OBO Foundry
Document 4:::
Plants are the eukaryotes that form the kingdom Plantae; they are predominantly photosynthetic. This means that they obtain their energy from sunlight, using chloroplasts derived from endosymbiosis with cyanobacteria to produce sugars from carbon dioxide and water, using the green pigment chlorophyll. Exceptions are parasitic plants that have lost the genes for chlorophyll and photosynthesis, and obtain their energy from other plants or fungi.
Historically, as in Aristotle's biology, the plant kingdom encompassed all living things that were not animals, and included algae and fungi. Definitions have narrowed since then; current definitions exclude the fungi and some of the algae. By the definition used in this article, plants form the clade Viridiplantae (green plants), which consists of the green algae and the embryophytes or land plants (hornworts, liverworts, mosses, lycophytes, ferns, conifers and other gymnosperms, and flowering plants). A definition based on genomes includes the Viridiplantae, along with the red algae and the glaucophytes, in the clade Archaeplastida.
There are about 380,000 known species of plants, of which the majority, some 260,000, produce seeds. They range in size from single cells to the tallest trees. Green plants provide a substantial proportion of the world's molecular oxygen; the sugars they create supply the energy for most of Earth's ecosystems; other organisms, including animals, either consume plants directly or rely on organisms which do so.
Grain, fruit, and vegetables are basic human foods and have been domesticated for millennia. People use plants for many purposes, such as building materials, ornaments, writing materials, and, in great variety, for medicines. The scientific study of plants is known as botany, a branch of biology.
Definition
Taxonomic history
All living things were traditionally placed into one of two groups, plants and animals. This classification dates from Aristotle (384–322 BC), who distinguished d
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Plants are complex organisms with tissues organized into what?
A. systems
B. carbons
C. organs
D. families
Answer:
|
|
sciq-3598
|
multiple_choice
|
The primary substance that human cells, and ultimately human beings, are made up of is what?
|
[
"gas",
"air",
"water",
"oil"
] |
C
|
Relavent Documents:
Document 0:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, the double-stranded macromolecule that carries the hereditary information of the cell, is found in all living cells; each cell carries one or more chromosomes having a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 1:::
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'.
Cells can acquire specialized functions and carry out various tasks such as replication, DNA repair, protein synthesis, and motility. Cells are capable of specialization and of movement.
Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms.
The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology.
Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago.
Discovery
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i
Document 2:::
This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines.
Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mind: neuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, they have provided information on certain diseases, which has aided the overall understanding of human health.
Basic life science branches
Biology – scientific study of life
Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans
Astrobiology – the study of the formation and presence of life in the universe
Bacteriology – study of bacteria
Biotechnology – study of combination of both the living organism and technology
Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level
Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge
Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum
Document 3:::
Adult stem cells are undifferentiated cells, found throughout the body after development, that multiply by cell division to replenish dying cells and regenerate damaged tissues. Also known as somatic stem cells (from Greek σωματικóς, meaning of the body), they can be found in juvenile, adult animals, and humans, unlike embryonic stem cells.
Scientific interest in adult stem cells is centered around two main characteristics. The first is their ability to divide or self-renew indefinitely, and the second is their ability to generate all the cell types of the organ from which they originate, potentially regenerating the entire organ from a few cells. Unlike embryonic stem cells, the use of human adult stem cells in research and therapy is not considered to be controversial, as they are derived from adult tissue samples rather than human embryos designated for scientific research. The main functions of adult stem cells are to replace cells that are at risk of dying as a result of disease or injury and to maintain a state of homeostasis within the tissue. There are three main methods to determine whether an adult stem cell is capable of becoming a specialized cell: it can be labeled in vivo and tracked, it can be isolated and then transplanted back into the organism, or it can be isolated and manipulated with growth hormones. Adult stem cells have mainly been studied in humans and in model organisms such as mice and rats.
Structure
Defining properties
A stem cell possesses two properties:
Self-renewal is the ability to go through numerous cycles of cell division while still maintaining its undifferentiated state. Stem cells can replicate several times and can result in the formation of two stem cells, one stem cell more differentiated than the other, or two differentiated cells.
Multipotency or multidifferentiative potential is the ability to generate progeny of several distinct cell types, (for example glial cells and neurons) as opposed to u
Document 4:::
Like the nucleus, whether to include the vacuole in the protoplasm concept is controversial.
Terminology
Besides "protoplasm", many other related terms and distinctions were used for the cell contents over time. These were as follows:
Urschleim (Oken, 1802, 1809),
Protoplasma (Purkinje, 1840, von Mohl, 1846),
Primordialschlauch (primordial utricle, von Mohl, 1846),
sarcode (Dujardin, 1835, 1841),
Cytoplasma (Kölliker, 1863),
Hautschicht/Körnerschicht (ectoplasm/endoplasm, Pringsheim, 1854; Hofmeister, 1867),
Grundsubstanz (ground substance, Cienkowski, 1863),
metaplasm/protoplasm (Hanstein, 1868),
deutoplasm/protoplasm (van Beneden, 1870),
bioplasm (Beale, 1872),
paraplasm/protoplasm (Kupffer, 1875),
inter-filar substance theory (Velten, 1876)
Hyaloplasma (Pfeffer, 1877),
Protoplast (Hanstein, 1880),
Enchylema/Hyaloplasma (Hanstein, 1880),
Kleinkörperchen or Mikrosomen (small bodies or microsomes, Hanstein, 1882),
paramitome (Flemming, 1882),
Idioplasma (Nageli, 1884),
Zwischensu
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The primary substance that human cells, and ultimately human beings, are made up of is what?
A. gas
B. air
C. water
D. oil
Answer:
|
|
sciq-4432
|
multiple_choice
|
What type of cholesterol is commonly referred to as bad?
|
[
"hdl",
"ldl",
"unsaturated",
"insulin"
] |
B
|
Relevant Documents:
Document 0:::
This list consists of common foods with their cholesterol content recorded in milligrams per 100 grams (3.5 ounces) of food.
Functions
Cholesterol is a sterol, a steroid-like lipid made by animals, including humans. The human body makes one-eighth to one-fourth of a teaspoon of pure cholesterol daily. A cholesterol level of 5.5 millimoles per litre or below is recommended for an adult. Elevated cholesterol in the body can lead to atherosclerosis, a condition in which excessive cholesterol is deposited in artery walls. This condition blocks blood flow to vital organs and can result in high blood pressure or stroke.
Cholesterol is not always bad. It is a vital component of the cell membrane and a precursor to substances such as brain matter and some sex hormones. Some types of cholesterol carriers are beneficial to the heart and blood vessels. High-density lipoprotein is commonly called "good" cholesterol. These lipoproteins help remove cholesterol from cells, transporting it back to the liver, where it is broken down and excreted as waste or recycled into its component parts.
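As a worked conversion of the 5.5 mmol/L guideline quoted above into the mg/dL units used in some countries (the molar mass of cholesterol, roughly 386.65 g/mol, is general knowledge rather than a figure from this excerpt):
$5.5\ \mathrm{mmol/L} \times 386.65\ \mathrm{mg/mmol} \approx 2127\ \mathrm{mg/L} \approx 213\ \mathrm{mg/dL},$
comparable to the roughly 200 mg/dL total-cholesterol thresholds commonly cited where mg/dL reporting is used.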
Cholesterol content of various foods
See also
Nutrition
Plant stanol ester
Fatty acid
Document 1:::
Atherosclerosis is a pattern of the disease arteriosclerosis, characterized by the development of abnormalities called lesions in the walls of arteries. These lesions may lead to narrowing of the arteries due to the buildup of atheromatous plaques. At onset there are usually no symptoms, but if they develop, symptoms generally begin around middle age. In severe cases, it can result in coronary artery disease, stroke, peripheral artery disease, or kidney disorders, depending on where in the body the affected arteries are located.
The exact cause of atherosclerosis is unknown and is proposed to be multifactorial. Risk factors include abnormal cholesterol levels, elevated levels of inflammatory biomarkers, high blood pressure, diabetes, smoking (both active and passive smoking), obesity, genetic factors, family history, lifestyle habits, and an unhealthy diet. Plaque is made up of fat, cholesterol, calcium, and other substances found in the blood. The narrowing of arteries limits the flow of oxygen-rich blood to parts of the body. Diagnosis is based upon a physical exam, electrocardiogram, and exercise stress test, among others.
Prevention is generally by eating a healthy diet, exercising, not smoking, and maintaining a normal weight. Treatment of established disease may include medications to lower cholesterol such as statins, blood pressure medication, or medications that decrease clotting, such as aspirin. A number of procedures may also be carried out such as percutaneous coronary intervention, coronary artery bypass graft, or carotid endarterectomy.
Atherosclerosis generally starts when a person is young and worsens with age. Almost all people are affected to some degree by the age of 65. It is the number one cause of death and disability in developed countries. Though it was first described in 1575, there is evidence that the condition occurred in people more than 5,000 years ago.
Signs and symptoms
Atherosclerosis is asymptomatic for decades because
Document 2:::
Lipidology is the scientific study of lipids. Lipids are a group of biological macromolecules that have a multitude of functions in the body. Clinical studies on lipid metabolism in the body have led to developments in therapeutic lipidology for disorders such as cardiovascular disease.
History
Compared to other biomedical fields, lipidology was long-neglected as the handling of oils, smears, and greases was unappealing to scientists and lipid separation was difficult. It was not until 2002 that lipidomics, the study of lipid networks and their interaction with other molecules, appeared in the scientific literature. Attention to the field was bolstered by the introduction of chromatography, spectrometry, and various forms of spectroscopy to the field, allowing lipids to be isolated and analyzed. The field was further popularized following the cytologic application of the electron microscope, which led scientists to find that many metabolic pathways take place within, along, and through the cell membrane - the properties of which are strongly influenced by lipid composition.
Clinical lipidology
The Framingham Heart Study and other epidemiological studies have found a correlation between lipoproteins and cardiovascular disease (CVD). Lipoproteins are generally a major target of study in lipidology since lipids are transported throughout the body in the form of lipoproteins.
Phospholipids, a class of lipids, help make up lipoproteins, and one type of lipoprotein is high-density lipoprotein (HDL). A high concentration of high-density lipoprotein cholesterol (HDL-C) has what is known as a vasoprotective effect on the body, a finding that correlates with enhanced cardiovascular outcomes. There is also a correlation between diseases such as chronic kidney disease, coronary artery disease, and diabetes mellitus and the possibility of a low vasoprotective effect from HDL.
Another factor of CVD that is often overlooked involves the
Document 3:::
An unsaturated fat is a fat or fatty acid in which there is at least one double bond within the fatty acid chain. A fatty acid chain is monounsaturated if it contains one double bond, and polyunsaturated if it contains more than one double bond.
A saturated fat has no carbon-to-carbon double bonds, so it has the maximum possible number of hydrogens bonded to the carbons and is "saturated" with hydrogen atoms. To form carbon-to-carbon double bonds, hydrogen atoms are removed from the carbon chain. In cellular metabolism, unsaturated fat molecules contain less energy (i.e., fewer calories) than an equivalent amount of saturated fat. The greater the degree of unsaturation in a fatty acid (i.e., the more double bonds in the fatty acid) the more vulnerable it is to lipid peroxidation (rancidity). Antioxidants can protect unsaturated fat from lipid peroxidation.
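The double-bond count behind these definitions can be read directly off a molecular formula with the standard degree-of-unsaturation (ring-plus-double-bond) formula; the worked example uses oleic acid's well-known formula $C_{18}H_{34}O_2$, which does not itself appear in this excerpt:
$\mathrm{DBE} = \frac{2C + 2 + N - H}{2} = \frac{2(18) + 2 - 34}{2} = 2$
for oleic acid: one degree from the carboxyl C=O and one from a single C=C double bond, consistent with its classification below as monounsaturated.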
Composition of common fats
In chemical analysis, fats are broken down to their constituent fatty acids, which can be analyzed in various ways. In one approach, fats undergo transesterification to give fatty acid methyl esters (FAMEs), which are amenable to separation and quantitation using gas chromatography. Classically, unsaturated isomers were separated and identified by argentation thin-layer chromatography.
The saturated fatty acid components are almost exclusively stearic (C18) and palmitic acids (C16). Monounsaturated fats are almost exclusively oleic acid. Linolenic acid comprises most of the triunsaturated fatty acid component.
Chemistry and nutrition
Although polyunsaturated fats are protective against cardiac arrhythmias, a study of post-menopausal women with a relatively low fat intake showed that polyunsaturated fat is positively associated with progression of coronary atherosclerosis, whereas monounsaturated fat is not. This probably is an indication of the greater vulnerability of polyunsaturated fats to lipid peroxidation, against which vitamin E has been shown to be protective.
Examples
Document 4:::
The low-density lipoprotein receptor gene family codes for a class of structurally related cell surface receptors that fulfill diverse biological functions in different organs, tissues, and cell types. The role that is most commonly associated with this evolutionarily ancient family is cholesterol homeostasis (maintenance of appropriate concentration of cholesterol). In humans, excess cholesterol in the blood is captured by low-density lipoprotein (LDL) and removed by the liver via endocytosis of the LDL receptor. Recent evidence indicates that the members of the LDL receptor gene family are active in the cell signalling pathways between specialized cells in many, if not all, multicellular organisms.
There are seven members of the LDLR family in mammals, namely:
LDLR
VLDL receptor (VLDLR)
ApoER2, or LRP8
Low density lipoprotein receptor-related protein 4
also known as multiple epidermal growth factor (EGF) repeat-containing protein (MEGF7)
LDLR-related protein 1
LDLR-related protein 1b
Megalin.
Human proteins containing this domain
Listed below are human proteins containing low-density lipoprotein receptor domains:
Class A
C6; C7; 8A; 8B; C9; CD320; CFI;
CORIN; DGCR2; HSPG2; LDLR; LDLRAD2; LDLRAD3; LRP1; LRP10;
LRP11; LRP12; LRP1B; LRP2; LRP3; LRP4; LRP5; LRP6;
LRP8; MAMDC4; MFRP; PRSS7; RXFP1; RXFP2; SORL1; SPINT1;
SSPO; ST14; TMPRSS4; TMPRSS6; TMPRSS7; TMPRSS9 (serase-1B); VLDLR;
Class B
EGF; LDLR; LRP1; LRP10; LRP1B; LRP2; LRP4; LRP5;
LRP5L; LRP6; LRP8; NID1; NID2; SORL1; VLDLR;
See also
Soluble low-density lipoprotein receptor-related protein (sLRP) - impaired function is related to Alzheimer's disease.
Structure
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of cholesterol is commonly referred to as bad?
A. hdl
B. ldl
C. unsaturated
D. insulin
Answer:
|
|
sciq-2938
|
multiple_choice
|
We divide up the earth's seas into five what, which are really all interconnected?
|
[
"lakes",
"continents",
"ecosystems",
"oceans"
] |
D
|
Relevant Documents:
Document 0:::
Aquatic science is the study of the various bodies of water that make up our planet, including oceanic and freshwater environments. Aquatic scientists study the movement of water, the chemistry of water, aquatic organisms, aquatic ecosystems, the movement of materials in and out of aquatic ecosystems, and the use of water by humans, among other things. Aquatic scientists examine current processes as well as historic processes, and the water bodies that they study can range from tiny areas measured in millimeters to full oceans. Moreover, aquatic scientists work in interdisciplinary groups. For example, a physical oceanographer might work with a biological oceanographer to understand how physical processes, such as tropical cyclones or rip currents, affect organisms in the Atlantic Ocean. Chemists and biologists, on the other hand, might work together to see how the chemical makeup of a certain body of water affects the plants and animals that reside there. Aquatic scientists can work to tackle global problems, such as global oceanic change, and local problems, such as trying to understand why a drinking water supply in a certain area is polluted.
There are two main fields of study that fall within the field of aquatic science. These fields of study include oceanography and limnology.
Oceanography
Oceanography refers to the study of the physical, chemical, and biological characteristics of oceanic environments. Oceanographers study the history, current condition, and future of the planet's oceans. They also study marine life and ecosystems, ocean circulation, plate tectonics, the geology of the seafloor, and the chemical and physical properties of the ocean.
Oceanography is interdisciplinary. For example, there are biological oceanographers and marine biologists. These scientists specialize in marine organisms. They study how these organisms develop, their relationship with one another, and how they interact and adapt to their environment. Biological oceanographers
Document 1:::
The borders of the oceans are the limits of Earth's oceanic waters. The definition and number of oceans can vary depending on the adopted criteria. The principal divisions (in descending order of area) of the five oceans are the Pacific Ocean, Atlantic Ocean, Indian Ocean, Southern (Antarctic) Ocean, and Arctic Ocean. Smaller regions of the oceans are called seas, gulfs, bays, straits, and other terms. Geologically, an ocean is an area of oceanic crust covered by water.
See also the list of seas article for the seas included in each ocean area.
Overview
Though generally described as several separate oceans, the world's oceanic waters constitute one global, interconnected body of salt water sometimes referred to as the World Ocean or Global Ocean. This concept of a continuous body of water with relatively free interchange among its parts is of fundamental importance to oceanography.
The major oceanic divisions are defined in part by the continents, various archipelagos, and other criteria. The principal divisions (in descending order of area) are the: Pacific Ocean, Atlantic Ocean, Indian Ocean, Southern (Antarctic) Ocean, and Arctic Ocean. Smaller regions of the oceans are called seas, gulfs, bays, straits, and other terms.
Geologically, an ocean is an area of oceanic crust covered by water. Oceanic crust is the thin layer of solidified volcanic basalt that covers the Earth's mantle. Continental crust is thicker but less dense. From this perspective, the Earth has three oceans: the World Ocean, the Caspian Sea, and the Black Sea. The latter two were formed by the collision of Cimmeria with Laurasia. The Mediterranean Sea is at times a discrete ocean because tectonic plate movement has repeatedly broken its connection to the World Ocean through the Strait of Gibraltar. The Black Sea is connected to the Mediterranean through the Bosporus, but the Bosporus is a natural canal cut through continental rock some 7,000 years ago, rather than a piece of oceanic sea floo
Document 2:::
Maritime sociology is a sub-discipline of sociology studying the relationship of human societies and cultures to the oceans and the marine environment as well as related social processes. Subjects studied by maritime sociology are human activities at and with the sea such as seafaring, fisheries, maritime and coastal tourism, off-shore extraction, deep-sea mining, or marine environmental conservation. Institutions and discourses related to those activities are also studied by the sub-discipline. Another area of study is the societal-natural relations in the marine realm such as, for instance, the problem of over-fishing or the social consequences of climate change. In sum, maritime sociology conceptualizes the oceans as a social rather than a merely natural space.
Relation to other sociological disciplines
Maritime sociological research is often closely related, uses theories and methods of and collaborates with other sub-disciplines such as the sociology of work or environmental sociology.
Schools and institutions
Although it is almost as old as the sociological discipline itself, maritime sociology has not been institutionalized to any great extent to date and is practiced by various more or less independent schools around the world. Currently, there are efforts within the research community to establish maritime sociology as an independent sub-discipline.
The Polish universities of Szczecin, Gdansk, and Poznan are national centers of research mainly in the sociology of maritime professions. After the second world war, when the Polish coastline had increased significantly due to the outcome of the war, maritime matters became important subject of political and scientific discourse in Poland leading to the establishment of maritime sociology.
From 1985 to 1992, there was a working group at the Institute of Sociology at Christian-Albrechts-University in Kiel, Germany that aimed to establish maritime sociology in Germany.
Several Chinese universities (Ocean U
Document 3:::
The Malaspina circumnavigation expedition was an interdisciplinary research project to assess the impact of global change on the oceans and explore their biodiversity. The 250 scientists on board the Hespérides and Sarmiento de Gamboa embarked on an eight-month expedition (starting in December 2010) that combined scientific research with training for young researchers, advancing marine science and fostering the public understanding of science.
The project was under the umbrella of the Spanish Ministry of Science and Innovation's Consolider – Ingenio 2010 programme and was led by the Spanish National Research Council (CSIC) with the support of the Spanish Navy. It is named after the original scientific Malaspina Expedition between 1789 and 1794, that was commanded by Alejandro Malaspina. Due to Malaspina's involvement in a conspiracy to overthrow the Spanish government, he was jailed upon his return and a large part of the expedition's reports and collections were put away unpublished, not to see the light again until late in the 20th century.
Objectives
Assessing the impact of global change on the oceans
Global change relates to the impact of human activities on the functioning of the biosphere. These include activities which, although performed locally, have effects on the functioning of the earth's system as a whole.
The ocean plays a central role in regulating the planet's climate and is its biggest sink of CO2 and other substances produced by human activity.
The project will put together Colección Malaspina 2010, a collection of environmental and biological data and samples which will be available to the scientific community for it to evaluate the impacts of future global changes. This will be particularly valuable, for example, when new technologies allow levels of pollutants below current thresholds of detection to be evaluated.
Exploring the biodiversity of the deep ocean
Half the Earth's surface is covered by oceans over 3,000 metres deep, making them the biggest
Document 4:::
Marine technology is defined by WEGEMT (a European association of 40 universities in 17 countries) as "technologies for the safe use, exploitation, protection of, and intervention in, the marine environment." In this regard, according to WEGEMT, the technologies involved in marine technology are the following: naval architecture, marine engineering, ship design, ship building and ship operations; oil and gas exploration, exploitation, and production; hydrodynamics, navigation, sea surface and sub-surface support, underwater technology and engineering; marine resources (including both renewable and non-renewable marine resources); transport logistics and economics; inland, coastal, short sea and deep sea shipping; protection of the marine environment; leisure and safety.
Education and training
According to the Cape Fear Community College of Wilmington, North Carolina, the curriculum for a marine technology program provides practical skills and academic background that are essential in succeeding in the area of marine scientific support. Through a marine technology program, students aspiring to become marine technologists will become proficient in the knowledge and skills required of scientific support technicians.
The educational preparation includes classroom instructions and practical training aboard ships, such as how to use and maintain electronic navigation devices, physical and chemical measuring instruments, sampling devices, and data acquisition and reduction systems aboard ocean-going and smaller vessels, among other advanced equipment.
As far as marine technician programs are concerned, students learn hands-on to troubleshoot, service, and repair four- and two-stroke outboards, stern drives, rigging, fuel and lube systems, and electrical systems, as well as diesel engines.
Relationship to commerce
Marine technology is related to the marine science and technology industry, also known as maritime commerce. The Executive Office of Housing and Economic Development (EOHED
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
We divide up the earth's seas into five what, which are really all interconnected?
A. lakes
B. continents
C. ecosystems
D. oceans
Answer:
|
|
sciq-3344
|
multiple_choice
|
Just as millions of different words are spelled with our 26-letter English alphabet, millions of different proteins are made with the 20 common what?
|
[
"peptides",
"amino acids",
"mutation acids",
"enzymes"
] |
B
|
Relevant Documents:
Document 0:::
Proteins are a class of biomolecules composed of amino acid chains.
Biochemistry
Antifreeze protein, class of polypeptides produced by certain fish, vertebrates, plants, fungi and bacteria
Conjugated protein, protein that functions in interaction with other chemical groups attached by covalent bonds
Denatured protein, protein which has lost its functional conformation
Matrix protein, structural protein linking the viral envelope with the virus core
Protein A, bacterial surface protein that binds antibodies
Protein A/G, recombinant protein that binds antibodies
Protein C, anticoagulant
Protein G, bacterial surface protein that binds antibodies
Protein L, bacterial surface protein that binds antibodies
Protein S, plasma glycoprotein
Protein Z, glycoprotein
Protein catabolism, the breakdown of proteins into amino acids and simple derivative compounds
Protein complex, group of two or more associated proteins
Protein electrophoresis, method of analysing a mixture of proteins by means of gel electrophoresis
Protein folding, process by which a protein assumes its characteristic functional shape or tertiary structure
Protein isoform, version of a protein with some small differences
Protein kinase, enzyme that modifies other proteins by chemically adding phosphate groups to them
Protein ligands, atoms, molecules, and ions which can bind to specific sites on proteins
Protein microarray, piece of glass on which different molecules of protein have been affixed at separate locations in an ordered manner
Protein phosphatase, enzyme that removes phosphate groups that have been attached to amino acid residues of proteins
Protein purification, series of processes intended to isolate a single type of protein from a complex mixture
Protein sequencing, determination of the amino acid sequence of a protein
Protein splicing, intramolecular reaction of a particular protein in which an internal protein segment is removed from a precursor protein
Protein structure, unique three-dimensional shape of amino
Document 1:::
Proteins are a class of macromolecular organic compounds that are essential to life. They consist of a long polypeptide chain that usually adopts a single stable three-dimensional structure. They fulfill a wide variety of functions, including providing structural stability to cells, catalyzing chemical reactions that produce or store energy or that synthesize other biomolecules including nucleic acids and proteins, transporting essential nutrients, and serving other roles such as signal transduction. They are selectively transported to various compartments of the cell or, in some cases, secreted from the cell.
This list aims to organize information on how proteins are most often classified: by structure, by function, or by location.
Structure
Proteins may be classified as to their three-dimensional structure (also known as a protein fold). The two most widely used classification schemes are:
CATH database
Structural Classification of Proteins database (SCOP)
Both classification schemes are based on a hierarchy of fold types. At the top level are all alpha proteins (domains consisting of alpha helices), all beta proteins (domains consisting of beta sheets), and mixed alpha helix/beta sheet proteins.
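A minimal sketch, in Python, of how such a two-level fold hierarchy can be represented and queried; the fold names are well-known SCOP-style examples chosen for illustration, not entries copied from either database.

# Top-level structural classes mapped to example folds (illustrative only).
fold_hierarchy = {
    "all alpha": ["globin-like", "four-helix bundle"],
    "all beta": ["immunoglobulin-like beta-sandwich", "beta-barrel"],
    "alpha/beta": ["TIM beta/alpha-barrel", "Rossmann fold"],
}

def top_level_class(fold_name):
    # Walk the hierarchy and report the class containing the given fold.
    for fold_class, folds in fold_hierarchy.items():
        if fold_name in folds:
            return fold_class
    return None

print(top_level_class("Rossmann fold"))  # -> alpha/beta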
While most proteins adopt a single stable fold, a few proteins can rapidly interconvert between two or more folds. These are referred to as metamorphic proteins. Finally, other proteins appear not to adopt any stable conformation and are referred to as intrinsically disordered.
Proteins frequently contain two or more domains, each with a different fold, separated by intrinsically disordered regions. These are referred to as multi-domain proteins.
Function
Proteins may also be classified based on their cellular function. A widely used classification is the PANTHER (protein analysis through evolutionary relationships) classification system.
Structural
Protein#Structural proteins
Catalytic
Enzymes are classified according to their Enzyme Commission number (EC). Note that strictly speaki
Document 2:::
Amino acids are organic compounds that contain both amino and carboxylic acid functional groups. Although over 500 amino acids exist in nature, by far the most important are the 22 α-amino acids incorporated into proteins. Only these 22 appear in the genetic code of all life.
Amino acids can be classified according to the locations of the core structural functional groups, as alpha- (α-), beta- (β-), gamma- (γ-) or delta- (δ-) amino acids; other categories relate to polarity, ionization, and side chain group type (aliphatic, acyclic, aromatic, containing hydroxyl or sulfur, etc.). In the form of proteins, amino acid residues form the second-largest component (water being the largest) of human muscles and other tissues. Beyond their role as residues in proteins, amino acids participate in a number of processes such as neurotransmitter transport and biosynthesis. It is thought that they played a key role in the emergence of life on Earth.
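A worked count shows why so few building blocks suffice for enormous protein diversity (the chain length of 100 residues is an arbitrary illustrative choice): with 20 common amino acids, the number of distinct sequences of length $n$ is $20^{n}$, so even a short protein of 100 residues admits
$20^{100} = 10^{100\,\log_{10} 20} \approx 1.3 \times 10^{130}$
possible sequences, vastly more than the number of protein molecules that have ever existed.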
Amino acids are formally named by the IUPAC-IUBMB Joint Commission on Biochemical Nomenclature in terms of a fictitious "neutral" structure. For example, the systematic name of alanine is 2-aminopropanoic acid, based on the formula CH3CH(NH2)COOH. The Commission justified this approach as follows:
The systematic names and formulas given refer to hypothetical forms in which amino groups are unprotonated and carboxyl groups are undissociated. This convention is useful to avoid various nomenclatural problems but should not be taken to imply that these structures represent an appreciable fraction of the amino-acid molecules.
History
The first few amino acids were discovered in the early 1800s. In 1806, French chemists Louis-Nicolas Vauquelin and Pierre Jean Robiquet isolated a compound from asparagus that was subsequently named asparagine, the first amino acid to be discovered. Cystine was discovered in 1810, although its monomer, cysteine, remained undiscovered until 1884. Glycine and leucine were discovere
Document 3:::
This is a list of topics in molecular biology. See also index of biochemistry articles.
Document 4:::
Protein subfamily is a level of protein classification, based on their close evolutionary relationship. It is below the larger levels of protein superfamily and protein family.
Proteins typically share greater sequence and function similarities with other subfamily members than they do with members of their wider family. For example, in the Structural Classification of Proteins database classification system, members of a subfamily share the same interaction interfaces and interaction partners. These are stricter criteria than for a family, where members have similar structures, but may be more distantly related and so have different interfaces. Subfamilies are assigned by a variety of methods, including sequence similarity, motifs linked to function, or phylogenetic clade. There is no exact and consistent distinction between a subfamily and a family. The same group of proteins may sometimes be described as a family or a subfamily, depending on the context.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Just as millions of different words are spelled with our 26-letter English alphabet, millions of different proteins are made with the 20 common what?
A. peptides
B. amino acids
C. mutation acids
D. enzymes
Answer:
|
|
sciq-8707
|
multiple_choice
|
An aquifer is an underground layer of rock that is saturated with what?
|
[
"ocean water",
"oil",
"wastewater",
"groundwater"
] |
D
|
Relevant Documents:
Document 0:::
Groundwater is the water present beneath Earth's surface in rock and soil pore spaces and in the fractures of rock formations. About 30 percent of all readily available freshwater in the world is groundwater. A unit of rock or an unconsolidated deposit is called an aquifer when it can yield a usable quantity of water. The depth at which soil pore spaces or fractures and voids in rock become completely saturated with water is called the water table. Groundwater is recharged from the surface; it may discharge from the surface naturally at springs and seeps, and can form oases or wetlands. Groundwater is also often withdrawn for agricultural, municipal, and industrial use by constructing and operating extraction wells. The study of the distribution and movement of groundwater is hydrogeology, also called groundwater hydrology.
Typically, groundwater is thought of as water flowing through shallow aquifers, but, in the technical sense, it can also contain soil moisture, permafrost (frozen soil), immobile water in very low permeability bedrock, and deep geothermal or oil formation water. Groundwater is hypothesized to provide lubrication that can possibly influence the movement of faults. It is likely that much of Earth's subsurface contains some water, which may be mixed with other fluids in some instances.
Groundwater is often cheaper, more convenient and less vulnerable to pollution than surface water. Therefore, it is commonly used for public water supplies. For example, groundwater provides the largest source of usable water storage in the United States, and California annually withdraws the largest amount of groundwater of all the states. Underground reservoirs contain far more water than the capacity of all surface reservoirs and lakes in the US, including the Great Lakes. Many municipal water supplies are derived solely from groundwater. Over 2 billion people rely on it as their primary water source worldwide.
Human use of groundwater causes environmental prob
Document 1:::
In the field of hydrogeology, storage properties are physical properties that characterize the capacity of an aquifer to release groundwater. These properties are storativity (S), specific storage (Ss) and specific yield (Sy). According to Groundwater, by Freeze and Cherry (1979), specific storage, Ss [m⁻¹], of a saturated aquifer is defined as the volume of water that a unit volume of the aquifer releases from storage under a unit decline in hydraulic head.
They are often determined using some combination of field tests (e.g., aquifer tests) and laboratory tests on aquifer material samples. Recently, these properties have been also determined using remote sensing data derived from Interferometric synthetic-aperture radar.
Storativity
Storativity or the storage coefficient is the volume of water released from storage per unit decline in hydraulic head in the aquifer, per unit area of the aquifer. Storativity is a dimensionless quantity, and is always greater than 0. It can be written as

$S = \frac{dV_w}{A\,dh} = S_s b + S_y$

where
$V_w$ is the volume of water released from storage [L³];
$h$ is the hydraulic head [L];
$S_s$ is the specific storage;
$S_y$ is the specific yield;
$b$ is the thickness of the aquifer;
$A$ is the area [L²].
Confined
For a confined aquifer or aquitard, storativity is the vertically integrated specific storage value. Specific storage is the volume of water released from one unit volume of the aquifer under one unit decline in head. This is related to both the compressibility of the aquifer and the compressibility of the water itself. Assuming the aquifer or aquitard is homogeneous: $S = S_s b$.
Unconfined
For an unconfined aquifer, storativity is approximately equal to the specific yield ($S_y$), since the release from specific storage ($S_s b$) is typically orders of magnitude less ($S_s b \ll S_y$).
The specific storage is the amount of water that a portion of an aquifer releases from storage, per unit mass or volume of the aquifer, per unit change in hydraulic head, while remaining fully saturated.
Mass specific storage is the mass of water that an aquife
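To make these relations concrete, here is a minimal Python sketch applying S = Ss·b + Sy and the confined/unconfined approximations; the numerical values are hypothetical examples, not measured data.

```python
# Minimal sketch of the storativity relations above; all numbers are
# hypothetical examples.

def storativity(Ss: float, b: float, Sy: float = 0.0) -> float:
    """S = Ss*b + Sy, with Ss in 1/m, b in m, and Sy dimensionless."""
    return Ss * b + Sy

# Confined aquifer: release comes from elastic (specific) storage only.
S_confined = storativity(Ss=1e-5, b=50.0)             # S = Ss*b = 5e-4

# Unconfined aquifer: Sy dominates because Ss*b << Sy, so S is about Sy.
S_unconfined = storativity(Ss=1e-5, b=50.0, Sy=0.2)   # ~0.2

print(f"confined S = {S_confined:.1e}, unconfined S = {S_unconfined:.4f}")
```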
Document 2:::
In geology and sedimentology, connate fluids are liquids that were trapped in the pores of sedimentary rocks as they were deposited. These liquids are largely composed of water, but also contain many mineral components as ions in solution.
As rocks are buried, they undergo lithification and the connate fluids are usually expelled. If the escape route for these fluids is blocked, the pore fluid pressure can build up, leading to overpressure.
Significance
An understanding of the geochemistry of connate fluids is important if the diagenesis of the rock is to be quantified. The solutes in the connate fluids often precipitate and reduce the porosity and permeability of the host rock, which can have important implications for its hydrocarbon prospectivity. The chemical components of the connate fluid can also yield information on the provenance of aquifers and of the thermal history of the host rock. Minute bubbles of fluid are often trapped within the crystals of the cementing material. These fluid inclusions provide direct information about the composition of the fluid and the pressure-temperature conditions that existed during diagenesis of the sediments.
Some analyses of connate water samples from Louisiana (USA) compared to seawater
Similar, but different in origin, is the concept of fossil water, which is used to describe very old groundwater found in deep aquifers or bedrock. Typically it was recharged during a different climatic period (e.g., the last ice age) so is also very old, but possibly not of the same genesis as the rock.
See also
Petroleum geology
Document 3:::
FEFLOW (Finite Element subsurface FLOW system) is a computer program for simulating groundwater flow, mass transfer and heat transfer in porous media and fractured media. The program uses finite element analysis to solve the groundwater flow equation of both saturated and unsaturated conditions as well as mass and heat transport, including fluid density effects and chemical kinetics for multi-component reaction systems.
History
The software was first introduced by Hans-Jörg G. Diersch in 1979. He developed the software in the Institute of Mechanics of the German Academy of Sciences Berlin up to 1990. In 1990 he was one of the founders of WASY GmbH of Berlin, Germany (the acronym WASY translates from German to Institute for Water Resources Planning and Systems Research), where FEFLOW has been developed further, continuously improved and extended as a commercial simulation package. In 2007 the shares of WASY GmbH were purchased by DHI. The WASY company was merged into the group and FEFLOW became part of the DHI Group software portfolio. FEFLOW is being further developed at DHI by an international team. Software distribution and services are worldwide.
Technology
The program is offered in both 32-bit and 64-bit versions for Microsoft Windows and Linux operating systems.
FEFLOW's theoretical basis is fully described in the comprehensive FEFLOW book. It covers a wide range of physical and computational issues in the field of porous/fractured-media modeling. The book starts with a more general theory for all relevant flow and transport phenomena on the basis of the continuum mechanics, systematically develops the basic framework for important classes of problems (e.g., multiphase/multispecies non-isothermal flow and transport phenomena, variably saturated porous media, free-surface groundwater flow, aquifer-averaged equations, discrete feature elements), introduces finite element methods for solving the basic multidimensional balance equations, in detail discusses a
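FEFLOW itself is a commercial finite-element package, but the kind of equation it solves can be illustrated with a toy model. The sketch below is a one-dimensional, steady-state, fixed-head finite-difference relaxation: an invented illustration of the governing flow equation, not FEFLOW's method or API, with invented grid size and boundary heads.

```python
import numpy as np

# Toy 1-D steady-state groundwater-head solver (finite differences, not
# finite elements). Homogeneous conductivity, fixed-head boundaries.
n = 11                     # number of grid nodes
h = np.zeros(n)            # hydraulic head at each node, in metres
h[0], h[-1] = 10.0, 4.0    # fixed heads at the two boundaries

# With homogeneous K the flow equation reduces to Laplace's equation;
# Jacobi iteration relaxes the interior nodes toward the solution, which
# here is simply a linear head profile between the boundary values.
for _ in range(2000):
    h[1:-1] = 0.5 * (h[:-2] + h[2:])

print(np.round(h, 2))      # heads fall linearly from 10.0 m to 4.0 m
```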
Document 4:::
In hydrology, bound water, is an extremely thin layer of water surrounding mineral surfaces.
Water molecules have a strong electrical polarity, meaning that there is a very strong positive charge on one side of the molecule and a strong negative charge on the other. This causes the water molecules to bond to each other and to other charged surfaces, such as soil minerals. Clay in particular has a high ability to bond with water molecules.
The strong attraction between these surfaces causes an extremely thin water film (a few molecules thick) to form on the mineral surface. These water molecules are much less mobile than the rest of the water in the soil, and have significant effects on soil dielectric permittivity and freezing-thawing.
In molecular biology and food science, bound water refers to the amount of water in body tissues which are bound to macromolecules or organelles. In food science this form of water is practically unavailable for microbiological activities so it would not cause quality decreases or pathogen increases.
See also
Adsorption
Capillary action
Effective porosity
Surface tension
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
An aquifer is an underground layer of rock that is saturated with what?
A. ocean water
B. oil
C. wastewater
D. groundwater
Answer:
|
|
sciq-11589
|
multiple_choice
|
When amino acids bind together, they form a long chain called what, which is an essential component of protein?
|
[
"polypeptide",
"lipids",
"peptide",
"enzyme"
] |
A
|
Relevant Documents:
Document 0:::
Amino acids are organic compounds that contain both amino and carboxylic acid functional groups. Although over 500 amino acids exist in nature, by far the most important are the 22 α-amino acids incorporated into proteins. Only these 22 appear in the genetic code of all life.
Amino acids can be classified according to the locations of the core structural functional groups, as alpha- (α-), beta- (β-), gamma- (γ-) or delta- (δ-) amino acids; other categories relate to polarity, ionization, and side chain group type (aliphatic, acyclic, aromatic, containing hydroxyl or sulfur, etc.). In the form of proteins, amino acid residues form the second-largest component (water being the largest) of human muscles and other tissues. Beyond their role as residues in proteins, amino acids participate in a number of processes such as neurotransmitter transport and biosynthesis. It is thought that they played a key role in enabling the emergence of life on Earth.
Amino acids are formally named by the IUPAC-IUBMB Joint Commission on Biochemical Nomenclature in terms of a fictitious "neutral" structure. For example, the systematic name of alanine is 2-aminopropanoic acid, based on the formula CH3CH(NH2)COOH. The Commission justified this approach as follows:
The systematic names and formulas given refer to hypothetical forms in which amino groups are unprotonated and carboxyl groups are undissociated. This convention is useful to avoid various nomenclatural problems but should not be taken to imply that these structures represent an appreciable fraction of the amino-acid molecules.
History
The first few amino acids were discovered in the early 1800s. In 1806, French chemists Louis-Nicolas Vauquelin and Pierre Jean Robiquet isolated a compound from asparagus that was subsequently named asparagine, the first amino acid to be discovered. Cystine was discovered in 1810, although its monomer, cysteine, remained undiscovered until 1884. Glycine and leucine were discovere
Document 1:::
Proteins are a class of biomolecules composed of amino acid chains.
Biochemistry
Antifreeze protein, class of polypeptides produced by certain fish, vertebrates, plants, fungi and bacteria
Conjugated protein, protein that functions in interaction with other chemical groups attached by covalent bonds
Denatured protein, protein which has lost its functional conformation
Matrix protein, structural protein linking the viral envelope with the virus core
Protein A, bacterial surface protein that binds antibodies
Protein A/G, recombinant protein that binds antibodies
Protein C, anticoagulant
Protein G, bacterial surface protein that binds antibodies
Protein L, bacterial surface protein that binds antibodies
Protein S, plasma glycoprotein
Protein Z, glycoprotein
Protein catabolism, the breakdown of proteins into amino acids and simple derivative compounds
Protein complex, group of two or more associated proteins
Protein electrophoresis, method of analysing a mixture of proteins by means of gel electrophoresis
Protein folding, process by which a protein assumes its characteristic functional shape or tertiary structure
Protein isoform, version of a protein with some small differences
Protein kinase, enzyme that modifies other proteins by chemically adding phosphate groups to them
Protein ligands, atoms, molecules, and ions which can bind to specific sites on proteins
Protein microarray, piece of glass on which different molecules of protein have been affixed at separate locations in an ordered manner
Protein phosphatase, enzyme that removes phosphate groups that have been attached to amino acid residues of proteins
Protein purification, series of processes intended to isolate a single type of protein from a complex mixture
Protein sequencing, the process of determining the amino acid sequence of a protein
Protein splicing, intramolecular reaction of a particular protein in which an internal protein segment is removed from a precursor protein
Protein structure, unique three-dimensional shape of amino
Document 2:::
Proteinogenic amino acids are amino acids that are incorporated biosynthetically into proteins during translation. The word "proteinogenic" means "protein creating". Throughout known life, there are 22 genetically encoded (proteinogenic) amino acids, 20 in the standard genetic code and an additional 2 (selenocysteine and pyrrolysine) that can be incorporated by special translation mechanisms.
In contrast, non-proteinogenic amino acids are amino acids that are either not incorporated into proteins (like GABA, L-DOPA, or triiodothyronine), misincorporated in place of a genetically encoded amino acid, or not produced directly and in isolation by standard cellular machinery (like hydroxyproline). The latter often results from post-translational modification of proteins. Some non-proteinogenic amino acids are incorporated into nonribosomal peptides which are synthesized by non-ribosomal peptide synthetases.
Both eukaryotes and prokaryotes can incorporate selenocysteine into their proteins via a nucleotide sequence known as a SECIS element, which directs the cell to translate a nearby UGA codon as selenocysteine (UGA is normally a stop codon). In some methanogenic prokaryotes, the UAG codon (normally a stop codon) can also be translated to pyrrolysine.
In eukaryotes, there are only 21 proteinogenic amino acids, the 20 of the standard genetic code, plus selenocysteine. Humans can synthesize 12 of these from each other or from other molecules of intermediary metabolism. The other nine must be consumed (usually as their protein derivatives), and so they are called essential amino acids. The essential amino acids are histidine, isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan, and valine (i.e. H, I, L, K, M, F, T, W, V).
The proteinogenic amino acids have been found to be related to the set of amino acids that can be recognized by ribozyme autoaminoacylation systems. Thus, non-proteinogenic amino acids would have been excluded by the conting
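Since the passage lists the nine essential amino acids by one-letter code (H, I, L, K, M, F, T, W, V), a short sketch can report what fraction of a sequence's residues are essential; the example peptide is hypothetical, not a real protein.

```python
# Essential amino acids by one-letter code, as listed in the passage.
ESSENTIAL = set("HILKMFTWV")

def essential_fraction(sequence: str) -> float:
    """Fraction of residues in a protein sequence that are essential."""
    sequence = sequence.upper()
    return sum(aa in ESSENTIAL for aa in sequence) / len(sequence)

# Hypothetical example peptide: 8 of its 10 residues are essential.
print(essential_fraction("MKTWLAGHIV"))  # 0.8
```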
Document 3:::
Proteins are a class of macromolecular organic compounds that are essential to life. They consist of a long polypeptide chain that usually adopts a single stable three-dimensional structure. They fulfill a wide variety of functions, including providing structural stability to cells, catalyzing chemical reactions that produce or store energy or synthesize other biomolecules (including nucleic acids and proteins), transporting essential nutrients, and serving other roles such as signal transduction. They are selectively transported to various compartments of the cell or, in some cases, secreted from the cell.
This list aims to organize information on how proteins are most often classified: by structure, by function, or by location.
Structure
Proteins may be classified as to their three-dimensional structure (also known as a protein fold). The two most widely used classification schemes are:
CATH database
Structural Classification of Proteins database (SCOP)
Both classification schemes are based on a hierarchy of fold types. At the top level are all alpha proteins (domains consisting of alpha helices), all beta proteins (domains consisting of beta sheets), and mixed alpha helix/beta sheet proteins.
While most proteins adopt a single stable fold, a few proteins can rapidly interconvert between two or more folds; these are referred to as metamorphic proteins. Finally, other proteins appear not to adopt any stable conformation and are referred to as intrinsically disordered.
Proteins frequently contain two or more domains, each with a different fold, separated by intrinsically disordered regions. These are referred to as multi-domain proteins.
Function
Proteins may also be classified based on their cellular function. A widely used scheme is the PANTHER (protein analysis through evolutionary relationships) classification system.
Structural
Protein#Structural proteins
Catalytic
Enzymes classified according to their Enzyme Commission number (EC). Note that strictly speaki
Document 4:::
Amide rings are small motifs in proteins and polypeptides. They consist of 9-atom or 11-atom rings formed by two CO...HN hydrogen bonds between a side-chain amide group and the main-chain atoms of a short polypeptide. They are observed with glutamine or asparagine side chains within proteins and polypeptides. Structurally similar rings occur in the binding of purine, pyrimidine and nicotinamide bases to the main-chain atoms of proteins. About 4% of asparagines and glutamines form amide rings; in databases of protein domain structures, one is present, on average, every other protein.
In such rings the polypeptide has the conformation of beta sheet or of type II polyproline helix (PPII). A number of glutamines and asparagines help bind short peptides (with the PPII conformation) in the groove of class II MHC (Major Histocompatibility Complex) proteins by forming these motifs. An 11-atom amide ring, involving a glutamine residue, occurs at the interior of the light chain variable domains of some Immunoglobulin G antibodies and assists in linking the two beta-sheets.
An amide ring is employed in the specificity of the adaptor protein GRB2 for a particular asparagine within proteins it binds. GRB2 binds strongly to the pentapeptide EYINQ (when the tyrosine is phosphorylated); in such structures a 9-atom amide ring occurs between the amide side chain of the pentapeptide's asparagine and the main chain atoms of residue 109 of GRB2.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
When amino acids bind together, they form a long chain called what, which is an essential component of protein?
A. polypeptide
B. lipids
C. peptide
D. enzyme
Answer:
|
|
sciq-5089
|
multiple_choice
|
Another name for table salt is?
|
[
"sodium chloride",
"dioxide chloride",
"hydrogen chloride",
"carbon chloride"
] |
A
|
Relevant Documents:
Document 0:::
A salt substitute, also known as low-sodium salt, is a low-sodium alternative to edible salt (table salt) marketed to reduce the risk of high blood pressure and cardiovascular disease associated with a high intake of sodium chloride while maintaining a similar taste.
The leading salt substitutes are non-sodium table salts, which have their tastes as a result of compounds other than sodium chloride. Non-sodium salts reduce daily sodium intake and reduce the health effects of this element.
Low sodium diet
According to current WHO guidelines, adults should consume less than 2,000 mg of sodium per day (i.e. about 5 grams of traditional table salt), and at least 3,510 mg of potassium per day. In Europe, adults and children consume about twice as much sodium as recommended by experts.
Research
In 2021, a large randomised controlled trial of 20,995 older people in China found that use of a potassium salt substitute in home cooking over a five-year period reduced the risk of stroke by 14%, major cardiovascular events by 13% and all-cause mortality by 12% compared to use of regular table salt.
The study found no significant difference in hyperkalaemia between the two groups, though people with serious kidney disease were excluded from the trial. The salt substitute used was 25% potassium chloride and 75% sodium chloride.
A 2022 Cochrane review of 26 trials involving salt substitutes found their use probably slightly reduces blood pressure, non-fatal stroke, non-fatal acute coronary syndrome and heart disease death in adults compared to use of regular table salt. A separate systematic review and meta-analysis published in the same year of 21 trials involving salt substitutes found protective effects of salt substitute on total mortality, cardiovascular mortality and cardiovascular events.
Examples
Potassium
Potassium closely resembles the saltiness of sodium. In practice, potassium chloride (also known as potassium salt) is the most commonly used salt substitute.
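As a back-of-envelope illustration of why the blend lowers sodium intake: sodium makes up roughly 39% of NaCl by mass, so estimated sodium intake scales with the NaCl fraction of the blend. The sketch below uses that mass fraction; the figures are rough estimates, not results from the cited trials.

```python
# Rough sketch: estimated sodium intake from plain table salt versus a
# 25% KCl / 75% NaCl blend. Back-of-envelope estimates, not trial data.
NA_FRACTION_OF_NACL = 22.99 / (22.99 + 35.45)   # ~0.39 by molar mass

def sodium_mg(salt_g: float, kcl_fraction: float = 0.0) -> float:
    """Milligrams of sodium in salt_g grams of a NaCl/KCl blend."""
    nacl_g = salt_g * (1.0 - kcl_fraction)
    return nacl_g * NA_FRACTION_OF_NACL * 1000.0

print(round(sodium_mg(5.0)))                      # ~1967 mg, near the WHO limit
print(round(sodium_mg(5.0, kcl_fraction=0.25)))   # ~1475 mg with the blend
```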
Document 1:::
Minor salts (micronutrients) per litre
Boric acid (H3BO3) 6.2 mg/l
Cobalt chloride (CoCl2 · 6H2O) 0.025 mg/l
Ferrous sulfate (FeSO4 · 7H2O) 27.8 mg/l
Manganese(II) sulfate (MnSO4 · 4H2O) 22.3 mg/l
Potassium iodide (KI) 0.83 mg/l
Sodium molybdate (Na2MoO4 · 2H2O) 0.25 mg/l
Zinc sulfate (ZnSO4·7H2O) 8.6 mg/l
Ethylenediaminetetraacetic acid ferric sodium (FeNaEDTA) 36.70 mg/l
Copper sulfate (CuSO4 · 5H2O) 0.025 mg/l
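The per-litre figures above scale linearly with batch volume; the helper below is a hypothetical convenience that copies the listed concentrations and computes the amount of each salt for an arbitrary volume.

```python
# Sketch: scale the per-litre micronutrient concentrations listed above
# to an arbitrary batch volume. Concentrations are copied from the list;
# the helper itself is a hypothetical convenience, not a protocol.
MICRONUTRIENTS_MG_PER_L = {
    "H3BO3": 6.2, "CoCl2·6H2O": 0.025, "FeSO4·7H2O": 27.8,
    "MnSO4·4H2O": 22.3, "KI": 0.83, "Na2MoO4·2H2O": 0.25,
    "ZnSO4·7H2O": 8.6, "FeNaEDTA": 36.70, "CuSO4·5H2O": 0.025,
}

def batch_amounts(volume_l: float) -> dict:
    """Milligrams of each salt needed for volume_l litres of medium."""
    return {salt: round(conc * volume_l, 4)
            for salt, conc in MICRONUTRIENTS_MG_PER_L.items()}

print(batch_amounts(0.5))   # amounts for half a litre of medium
```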
Vitamins and organic compounds per litre
Myo-Inositol 100 mg/l
Nicotini
Document 2:::
In common usage, salt is a mineral composed primarily of sodium chloride (NaCl). When used in food, especially at table in ground form in dispensers, it is more formally called table salt. In the form of a natural crystalline mineral, salt is also known as rock salt or halite. Salt is essential for life in general, and saltiness is one of the basic human tastes. Salt is one of the oldest and most ubiquitous food seasonings, and is known to uniformly improve the taste perception of food, including otherwise unpalatable food. Salting, brining, and pickling are also ancient and important methods of food preservation.
Some of the earliest evidence of salt processing dates to around 6000 BC, when people living in the area of present-day Romania boiled spring water to extract salts; a salt works in China dates to approximately the same period. Salt was also prized by the ancient Hebrews, Greeks, Romans, Byzantines, Hittites, Egyptians, and Indians. Salt became an important article of trade and was transported by boat across the Mediterranean Sea, along specially built salt roads, and across the Sahara on camel caravans. The scarcity and universal need for salt have led nations to go to war over it and use it to raise tax revenues. Salt is used in religious ceremonies and has other cultural and traditional significance.
Salt is processed from salt mines, and by the evaporation of seawater (sea salt) and mineral-rich spring water in shallow pools. The greatest single use for salt (sodium chloride) is as a feedstock for the production of chemicals. It is used to produce caustic soda and chlorine; it is also used in the manufacturing processes of polyvinyl chloride, plastics, paper pulp and many other products. Of the annual global production of around three hundred million tonnes of salt, only a small percentage is used for human consumption. Other uses include water conditioning processes, de-icing highways, and agricultural use. Edible salt is sold in forms such as sea s
Document 3:::
Potassium chloride (KCl, or potassium salt) is a metal halide salt composed of potassium and chlorine. It is odorless and has a white or colorless vitreous crystal appearance. The solid dissolves readily in water, and its solutions have a salt-like taste. Potassium chloride can be obtained from ancient dried lake deposits. KCl is used as a fertilizer, in medicine, in scientific applications, domestic water softeners (as a substitute for sodium chloride salt), and in food processing, where it may be known as E number additive E508.
It occurs naturally as the mineral sylvite, and in combination with sodium chloride as sylvinite.
Uses
Fertilizer
The majority of the potassium chloride produced is used for making fertilizer, called potash, since the growth of many plants is limited by potassium availability. Potassium chloride sold as fertilizer is known as muriate of potash. The vast majority of potash fertilizer worldwide is sold as muriate of potash.
Medical use
Potassium is vital in the human body, and potassium chloride by mouth is the common means to treat low blood potassium, although it can also be given intravenously. It is on the World Health Organization's List of Essential Medicines. Overdose causes hyperkalemia which can disrupt cell signaling to the extent that the heart will stop, reversibly in the case of some open heart surgeries.
Culinary use
It can be used as a salt substitute for food, but due to its weak, bitter, unsalty flavor, it is often mixed with ordinary table salt (sodium chloride) to improve the taste to form low sodium salt. The addition of 1 ppm of thaumatin considerably reduces this bitterness. Complaints of bitterness or a chemical or metallic taste are also reported with potassium chloride used in food.
Industrial
As a chemical feedstock, it is used for the manufacture of potassium hydroxide and potassium metal. It is also used in medicine, lethal injections, scientific applications, food processing, soaps, and as a sodium-fre
Document 4:::
Salammoniac, also sal ammoniac or salmiac, is a rare naturally occurring mineral composed of ammonium chloride, NH4Cl. It forms colorless, white, or yellow-brown crystals in the isometric-hexoctahedral class. It has very poor cleavage and is brittle to conchoidal fracture. It is quite soft, with a Mohs hardness of 1.5 to 2, and it has a low specific gravity of 1.5. It is water-soluble. Sal ammoniac is also the archaic name for the chemical compound ammonium chloride.
History
Pliny, in Book XXXI of his Natural History, refers to a salt produced in the Roman province of Cyrenaica named hammoniacum, so called because of its proximity to the nearby Temple of Jupiter Amun (Greek Ἄμμων Ammon). However, the description Pliny gives of the salt does not conform to the properties of ammonium chloride. According to Herbert Hoover's commentary in his English translation of Georgius Agricola's De re metallica, it is likely to have been common sea salt. In any case, that salt ultimately gave ammonia and ammonium compounds their name.
The first attested reference to sal ammoniac as ammonium chloride is in the Pseudo-Geber work De inventione veritatis, where a preparation of sal ammoniac is given in the chapter De Salis armoniaci præparatione, salis armoniaci being a common name in the Middle Ages for sal ammoniac.
It typically forms as encrustations formed by sublimation around volcanic vents and is found around volcanic fumaroles, guano deposits and burning coal seams. Associated minerals include sodium alum, native sulfur and other fumarole minerals. Notable occurrences include Tajikistan; Mount Vesuvius, Italy; and Parícutin, Michoacan, Mexico.
Uses
It is commonly used to clean the soldering iron in the soldering of stained-glass windows.
Metal refining
In jewellery-making and the refining of precious metals, potassium carbonate is added to gold and silver in a borax-coated crucible to purify iron or steel filings that may have contaminated the scrap. It is then air-coo
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Another name for table salt is?
A. sodium chloride
B. dioxide chloride
C. hydrogen chloride
D. carbon chloride
Answer:
|
|
sciq-8042
|
multiple_choice
|
How many amino acids are arranged like "beads on a string" to form proteins?
|
[
"35",
"25",
"15",
"20"
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
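The correct choice ("decreases") follows from the standard adiabatic relation for an ideal gas; a short derivation, assuming a reversible adiabatic process with heat-capacity ratio γ = Cp/Cv > 1:

```latex
T V^{\gamma - 1} = \text{const}
\quad\Longrightarrow\quad
\frac{T_2}{T_1} = \left(\frac{V_1}{V_2}\right)^{\gamma - 1} < 1
\quad \text{for an expansion } V_2 > V_1 .
```

Since the ratio is below one, the gas cools, which is why "decreases" is the intended answer.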
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam with no computer-based version. ETS administered the exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required the score as part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions within the biochemistry subject test.
Scores were scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores were 760 (corresponding to the 99th percentile) and 320 (1st percentile) respectively. The mean score for all test takers from July 2009 to July 2012 was 526, with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 2:::
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices".
This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. The course is nevertheless considered very challenging and one of the most difficult AP classes, as shown by its exam grade distributions.
Topic outline
The exam covers eight units; the multiple-choice section of the exam devotes a stated percentage to each content area.
The course is based on and tests six skills, called scientific practices.
In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.
Exam
Students are allowed to use a four-function, scientific, or graphing calculator.
The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score.
Score distribution
Commonly used textbooks
Biology, AP Edition by Sylvia Mader (2012, hardcover)
Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis)
Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Jackson)
See also
Glossary of biology
A.P. Bio (TV show)
Document 3:::
The SAT Subject Test in Biology was the name of a one-hour multiple-choice test on biology given by the College Board. A student chose whether to take the test depending upon the college entrance requirements of the schools to which the student planned to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; from 1995 until January 2005, they were known as SAT IIs. Of all the SAT Subject Tests, Biology E/M was the only one that offered the test taker a choice between an ecological and a molecular variant. A set of 60 questions was taken by all Biology test takers, with a further choice of 20 questions from either the E or the M section. The test was graded on a scale between 200 and 800. The average score for Molecular was 630, while for Ecological it was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done in response to changes in college admissions caused by the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
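The scoring rule just described is a simple linear formula; a minimal sketch (the example counts are invented):

```python
def raw_score(correct: int, incorrect: int) -> float:
    """Raw score under the rule above: +1 per correct answer,
    -0.25 per incorrect answer; blank questions contribute nothing."""
    return correct - 0.25 * incorrect

# Invented example: 60 correct, 12 incorrect, 8 blank out of 80.
print(raw_score(60, 12))  # 57.0 (scaling to the 200-800 scale not shown)
```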
The questions covered a broad range of topics in general biology. There were more specific questions relating, respectively, to ecological concepts (such as population studies and general ecology) on the E test and to molecular concepts (such as DNA structure, translation, and biochemistry) on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 4:::
There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework.
AP Physics 1 and 2
AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge.
AP Physics 1
AP Physics 1 covers Newtonian mechanics, including:
Unit 1: Kinematics
Unit 2: Dynamics
Unit 3: Circular Motion and Gravitation
Unit 4: Energy
Unit 5: Momentum
Unit 6: Simple Harmonic Motion
Unit 7: Torque and Rotational Motion
Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2.
AP Physics 2
AP Physics 2 covers the following topics:
Unit 1: Fluids
Unit 2: Thermodynamics
Unit 3: Electric Force, Field, and Potential
Unit 4: Electric Circuits
Unit 5: Magnetism and Electromagnetic Induction
Unit 6: Geometric and Physical Optics
Unit 7: Quantum, Atomic, and Nuclear Physics
AP Physics C
From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How many amino acids are arranged like "beads on a string" to form proteins?
A. 35
B. 25
C. 15
D. 20
Answer:
|
|
sciq-6579
|
multiple_choice
|
What biome is located between the temperate and tropical biomes?
|
[
"Tropical",
"subtropical",
"Desert",
"mountainous"
] |
B
|
Relevant Documents:
Document 0:::
Mediterranean forests, woodlands, and scrub is a biome defined by the World Wide Fund for Nature. The biome is generally characterized by dry summers and rainy winters, although in some areas rainfall may be uniform. Summers are typically hot in low-lying inland locations but can be cool near colder seas. Winters are typically mild to cool in low-lying locations but can be cold in inland and higher locations. All these ecoregions are highly distinctive, collectively harboring 10% of the Earth's plant species.
Distribution
The Mediterranean forests, woodlands, and scrub biome mostly occurs in, but not limited to, the Mediterranean climate zones, in the mid-latitudes:
the Mediterranean Basin
the Chilean Matorral
the California chaparral and woodlands ecoregion of California and the Baja California Peninsula
the Western Cape of South Africa
the southwest and southern Australia.
The biome is not limited to the Mediterranean climate zone. It can also be present in other climate zones (which typically border the Mediterranean climate zone), such as the drier regions of the oceanic and humid subtropical climates, and as well as the lusher areas of the semi-arid climate zone. Non-Mediterranean climate regions that would feature Mediterranean vegetation include the Nile River Valley in Egypt (extending upstream along the riverbanks), parts of the Eastern Cape in South Africa, southeastern Australia, southeastern Azerbaijan, southeastern Turkey, far northern Iraq, the Mazandaran Province in Iran, Central Italy, parts of the Balkans (including Northern Greece), as well as Northern and Western Jordan.
Vegetation
Vegetation types range from forests to woodlands, savannas, shrublands, and grasslands; "mosaic habitat" landscapes are common, where differing vegetation types are interleaved with one another in complex patterns created by variations in soil, topography, exposure to wind and sun, and fire history. Much of the woody vegetation in Mediterranean-climate regions
Document 1:::
The Mediterranean Biogeographic Region is the biogeographic region around and including the Mediterranean Sea.
The term is defined by the European Environment Agency as applying to the land areas of Europe that border on the Mediterranean Sea, and the corresponding territorial waters.
The region is rich in biodiversity and has many endemic species.
The term may also be used in the broader sense of all the lands of the Mediterranean Basin, or in the narrow sense of just the Mediterranean Sea.
Extent
The European Commission defines the Mediterranean Biogeographic Region as consisting of the Mediterranean Sea, Greece, Malta, Cyprus, large parts of Portugal, Spain and Italy, and a smaller part of France.
The region includes 20.6% of European Union territory.
Climate
The region has cool humid winters and hot dry summers.
Wladimir Köppen divided his "Cs" Mediterranean climate classification into "Csa", with a highest mean monthly temperature over 22 °C, and "Csb", where the mean monthly temperature was always lower than 22 °C.
The region may also be subdivided into dry zones such as Alicante in Spain, and humid zones such as Cinque Terre in Italy.
Terrain
The region has generally hilly terrain and includes islands, high mountains, semi-arid steppes and thick Mediterranean forests, woodlands, and scrub with many aromatic plants.
There are rocky shorelines and sandy beaches.
The region has been greatly affected by human activity such as livestock grazing, cultivation, forest clearance and forest fires.
In recent years tourism has put greater pressure on the shoreline environment.
Biodiversity
The Mediterranean Biogeographic Region is rich in biodiversity and has many endemic species.
The region has more plant species than all the other biogeographical regions of Europe combined.
The wildlife and vegetation are adapted to the unpredictable weather, with sudden downpours or strong winds.
Coastal wetlands are home to endemic species of insects, amphibians and fish, which provide
Document 2:::
The climate and ecology of land immediately surrounding the Mediterranean Sea is influenced by several factors. Overall, the land has a Mediterranean climate, with mild, rainy winters and hot, dry summers. The climate induces characteristic Mediterranean forests, woodlands, and scrub vegetation. Plant life immediately near the Mediterranean is in the Mediterranean Floristic region, while mountainous areas further from the sea supports the Sub-Mediterranean Floristic province.
An important factor in the local climate and ecology of the lands in the Mediterranean basin is elevation: an increase in elevation of 1,000 m causes the average air temperature to drop by about 5 °C (9 °F) and decreases the amount of water that can be held by the atmosphere by 30%. This decrease in temperature and increase in rainfall result in altitudinal zonation, where the land can be divided into life zones of similar climate and ecology, depending on elevation.
Mediterranean vegetation shows a variety of ecological adaptations to hot and dry summer conditions. As Mediterranean vegetation differ both in species and composition from temperate vegetation, ecologists use special terminology for the Mediterranean altitudinal zonation:
Eu-mediterranean belt: 20–16 °C (average annual temperature)
Sub-mediterranean belt: 15–12 °C
Hilly region: 11–8 °C
Mountainous belt: 7–4 °C
Alpine belt: 3–0 °C
Subnival belt: 0 to −4 °C
Even within the Mediterranean Basin, differences in aridity alter the life zones as a function of elevation. For example, the wetter Maritime and Dinaric Alps have a North-Mediterranean zonation pattern, while the southern Apennine Mountains and the Spanish Sierra Nevada have a moderate Eu-Mediterranean zonation pattern. Finally, the drier Atlas Mountains of Africa have a Xero-Mediterranean pattern.
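Combining the belt boundaries listed above with the roughly 5 °C of cooling per 1,000 m noted earlier gives a toy belt classifier. This is a rough illustration under those stated figures, not an ecological model, and the example site is invented.

```python
# Toy altitudinal-zonation sketch using the belt boundaries above and an
# assumed lapse rate of ~5 degC per 1000 m. Rough illustration only.
BELTS = [  # (lower bound of mean annual temperature in degC, belt name)
    (16, "Eu-mediterranean belt"), (12, "Sub-mediterranean belt"),
    (8, "Hilly region"), (4, "Mountainous belt"),
    (0, "Alpine belt"), (-4, "Subnival belt"),
]

def belt(sea_level_temp_c: float, elevation_m: float) -> str:
    """Classify a site by its lapse-rate-adjusted mean annual temperature."""
    t = sea_level_temp_c - 5.0 * elevation_m / 1000.0
    for lower_bound, name in BELTS:
        if t >= lower_bound:
            return name
    return "below the subnival range"

# Invented example: an 18 degC lowland climate at 1500 m elevation.
print(belt(18.0, 1500.0))  # 10.5 degC -> "Hilly region"
```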
See also
Köppen climate classification
Altitudinal zonation
Biome
List of terrestrial ecoregions (WWF)
Literature
Environment of the Mediterranean
Ecoregions of Europe
Mon
Document 3:::
According to IBGE (2004), Brazil has its territory occupied by six terrestrial biomes and one marine biome.
Terminology
The term "biome" has several meanings. In a narrow sense (e.g., Whittaker, 1975; Coutinho, 2006), used in literature, it names physio-functionally defined small-scale areas, habitat types or ecosystem types. Although it includes both the plants and the animals and microorganisms of a community, in practice, it is defined by the climate and physiognomy or general appearance of the plants of the community.
In the broad sense, adopted by Joly et al. (1978) and the IBGE (2016), biome can be understood as a synonym of "biogeographic province" (e.g., Rizzini, 1963, Eiten 1977, Cabrera and Willink 1980, the term "floristic province" or "phytogeographic" is used when considering plant species only), or as an approximate synonym of "morphoclimatic and phytogeographical domain" (Ab'Sáber, 1967, 2003).
In this broad sense, the "Projeto Radam" (Veloso et al., 1973) applies the term "phytoecological region", and IBGE (2012) adopts the term "floristic region". However, the term "region" must be understood, in this case, in the generalist sense of "area". The terms "region" and "province" have specific traditional meanings in phytogeography: regions are areas characterized by endemic families, and provinces are areas characterized by endemic genera and species.
In the case of the 'domains' of Ab'Sáber (1967, 2003), the defined area is characterized by the predominance of certain geomorphological and climatic characteristics, and also by a certain predominant floristic province (vegetative type). However, there is no uniformity: enclaves from other provinces, characteristics of other domains, may occur within this area.
Terrestrial biomes
Amazônia
The Amazon Forest is the largest forest formation on the planet, conditioned by the humid equatorial climate. It is equivalent to 35% of the forest areas of the planet. It has a wide variety of plant formations. M
Document 4:::
This page features a list of biogeographic provinces that were developed by Miklos Udvardy in 1975, later modified by other authors. Biogeographic Province is a biotic subdivision of biogeographic realms subdivided into ecoregions, which are classified based on their biomes or habitat types and, on this page, correspond to the floristic kingdoms of botany.
The provinces represent the large areas of Earth's surface within which organisms have been evolving in relative isolation over long periods of time, separated from one another by geographic features, such as oceans, broad deserts, or high mountain ranges, that constitute barriers to migration.
Biomes are characterized by similar climax vegetation, though each realm may include a number of different biomes. A tropical moist broadleaf forest in Brazil, for example, may be similar to one in New Guinea in its vegetation type and structure, climate, soils, etc., but these forests are inhabited by plants with very different evolutionary histories.
Afrotropical Realm
Tropical humid forests
Guinean Rainforest
Congo Rainforest
Malagasy Rainforest
Tropical dry or deciduous forests (incl. Monsoon forests) or woodlands
West African Woodland/Savanna
East African Woodland/Savanna
Congo Woodland/Savanna
Miombo Woodland/Savanna
South African Woodland/Savanna
Malagasy Woodland/Savanna
Malagasy Thorn Forest
Evergreen sclerophyllous forests, scrubs or woodlands
Cape Sclerophyll
Warm deserts and semideserts
Western Sahel
Eastern Sahel
Somalian
Namib
Kalahari
Karroo
Mixed mountain and highland systems with complex zonation
Ethiopian Highlands
Guinean Highlands
Central African Highlands
East African Highlands
South African Highlands
Mixed island systems
Ascension and St. Helena Islands
Comores Islands and Aldabra
Mascarene Islands
Lake systems
Lake Rudolph
Lake Ukerewe (Victoria)
Lake Tanganyika
Lake Malawi (Nyassa)
Antarctic Realm
Tundra communities and barren Antarctic desert
Subtropical a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What biome is located between the temperate and tropical biomes?
A. Tropical
B. subtropical
C. Desert
D. mountainous
Answer:
|
|
scienceQA-6576
|
multiple_choice
|
In this food web, which organism contains matter that eventually moves to the parasol fungus?
|
[
"gray fox",
"swallowtail caterpillar",
"black racer",
"bobcat"
] |
B
|
Use the arrows to follow how matter moves through this food web. For each answer choice, try to find a path of arrows to the parasol fungus. There are two paths matter can take from the swallowtail caterpillar to the parasol fungus: swallowtail caterpillar->pine vole->parasol fungus and swallowtail caterpillar->black bear->parasol fungus. Gray fox: there are two arrows pointing from the gray fox to other organisms. One arrow points to the bobcat, and the only arrow pointing from the bobcat leads to the bolete fungus; the other arrow pointing from the gray fox leads directly to the bolete fungus. No arrows point from the bolete fungus to any other organisms, so matter does not move from the gray fox to the parasol fungus. Black racer: the only arrow pointing from the black racer leads to the bolete fungus, and no arrows point from the bolete fungus to any other organisms, so matter does not move from the black racer to the parasol fungus. There is one path matter can take from the silver maple to the parasol fungus: silver maple->beaver->black bear->parasol fungus. Bobcat: the only arrow pointing from the bobcat leads to the bolete fungus, and no arrows point from the bolete fungus to any other organisms, so matter does not move from the bobcat to the parasol fungus.
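The path-checking procedure in the explanation above is ordinary graph reachability. The sketch below hand-encodes only the arrows mentioned in the explanation (so the graph is partial) and runs a depth-first search toward the parasol fungus:

```python
# Reachability over the food-web arrows named in the explanation above.
# Only the mentioned edges are encoded, so this graph is partial.
FOOD_WEB = {
    "swallowtail caterpillar": ["pine vole", "black bear"],
    "pine vole": ["parasol fungus"],
    "black bear": ["parasol fungus"],
    "gray fox": ["bobcat", "bolete fungus"],
    "black racer": ["bolete fungus"],
    "bobcat": ["bolete fungus"],
    "silver maple": ["beaver"],
    "beaver": ["black bear"],
    "bolete fungus": [],
    "parasol fungus": [],
}

def reaches(source: str, target: str, seen=None) -> bool:
    """Depth-first search: does matter from source ever reach target?"""
    if seen is None:
        seen = set()
    if source == target:
        return True
    seen.add(source)
    return any(reaches(nxt, target, seen)
               for nxt in FOOD_WEB.get(source, []) if nxt not in seen)

for organism in ("gray fox", "swallowtail caterpillar",
                 "black racer", "bobcat"):
    print(organism, reaches(organism, "parasol fungus"))
# Only the swallowtail caterpillar reaches the parasol fungus (choice B).
```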
|
Relevant Documents:
Document 0:::
Consumer–resource interactions are the core motif of ecological food chains or food webs, and are an umbrella term for a variety of more specialized types of biological species interactions including prey-predator (see predation), host-parasite (see parasitism), plant-herbivore and victim-exploiter systems. These kinds of interactions have been studied and modeled by population ecologists for nearly a century. Species at the bottom of the food chain, such as algae and other autotrophs, consume non-biological resources, such as minerals and nutrients of various kinds, and they derive their energy from light (photons) or chemical sources. Species higher up in the food chain survive by consuming other species and can be classified by what they eat and how they obtain or find their food.
Classification of consumer types
The standard categorization
Various terms have arisen to define consumers by what they eat, such as meat-eating carnivores, fish-eating piscivores, insect-eating insectivores, plant-eating herbivores, seed-eating granivores, and fruit-eating frugivores; omnivores eat both meat and plants. An extensive classification of consumer categories based on a list of feeding behaviors exists.
The Getz categorization
Another way of categorizing consumers, proposed by South African American ecologist Wayne Getz, is based on a biomass transformation web (BTW) formulation that organizes resources into five components: live and dead animal, live and dead plant, and particulate (i.e. broken down plant and animal) matter. It also distinguishes between consumers that gather their resources by moving across landscapes from those that mine their resources by becoming sessile once they have located a stock of resources large enough for them to feed on during completion of a full life history stage.
In Getz's scheme, words for miners are of Greek etymology and words for gatherers are of Latin etymology. Thus a bestivore, such as a cat, preys on live animal
Document 1:::
The soil food web is the community of organisms living all or part of their lives in the soil. It describes a complex living system in the soil and how it interacts with the environment, plants, and animals.
Food webs describe the transfer of energy between species in an ecosystem. While a food chain examines one, linear, energy pathway through an ecosystem, a food web is more complex and illustrates all of the potential pathways. Much of this transferred energy comes from the sun. Plants use the sun’s energy to convert inorganic compounds into energy-rich, organic compounds, turning carbon dioxide and minerals into plant material by photosynthesis. Plant flowers exude energy-rich nectar above ground and plant roots exude acids, sugars, and ectoenzymes into the rhizosphere, adjusting the pH and feeding the food web underground.
Plants are called autotrophs because they make their own energy; they are also called producers because they produce energy available for other organisms to eat. Heterotrophs are consumers that cannot make their own food. In order to obtain energy they eat plants or other heterotrophs.
Above ground food webs
In above ground food webs, energy moves from producers (plants) to primary consumers (herbivores) and then to secondary consumers (predators). The phrase, trophic level, refers to the different levels or steps in the energy pathway. In other words, the producers, consumers, and decomposers are the main trophic levels. This chain of energy transferring from one species to another can continue several more times, but eventually ends. At the end of the food chain, decomposers such as bacteria and fungi break down dead plant and animal material into simple nutrients.
Methodology
The nature of soil makes direct observation of food webs difficult. Since soil organisms range in size from less than 0.1 mm (nematodes) to greater than 2 mm (earthworms) there are many different ways to extract them. Soil samples are often taken using a metal
Document 2:::
The trophic level of an organism is the position it occupies in a food web. A food chain is a succession of organisms that eat other organisms and may, in turn, be eaten themselves. The trophic level of an organism is the number of steps it is from the start of the chain. A food web starts at trophic level 1 with primary producers such as plants, can move to herbivores at level 2, carnivores at level 3 or higher, and typically finish with apex predators at level 4 or 5. The path along the chain can form either a one-way flow or a food "web". Ecological communities with higher biodiversity form more complex trophic paths.
The word trophic derives from the Greek τροφή (trophē) referring to food or nourishment.
History
The concept of trophic level was developed by Raymond Lindeman (1942), based on the terminology of August Thienemann (1926): "producers", "consumers", and "reducers" (modified to "decomposers" by Lindeman).
Overview
The three basic ways in which organisms get food are as producers, consumers, and decomposers.
Producers (autotrophs) are typically plants or algae. Plants and algae do not usually eat other organisms, but pull nutrients from the soil or the ocean and manufacture their own food using photosynthesis. For this reason, they are called primary producers. In this way, it is energy from the sun that usually powers the base of the food chain. An exception occurs in deep-sea hydrothermal ecosystems, where there is no sunlight. Here primary producers manufacture food through a process called chemosynthesis.
Consumers (heterotrophs) are species that cannot manufacture their own food and need to consume other organisms. Animals that eat primary producers (like plants) are called herbivores. Animals that eat other animals are called carnivores, and animals that eat both plants and other animals are called omnivores.
Decomposers (detritivores) break down dead plant and animal material and wastes and release it again as energy and nutrients into
Document 3:::
Microfauna (Ancient Greek mikros "small" + Neo-Latin fauna "animal") refers to microscopic animals and organisms that exhibit animal-like qualities. Microfauna are represented in the animal kingdom (e.g. nematodes, small arthropods) and the protist kingdom (i.e. protozoans).
Habitat
Microfauna are present in every habitat on Earth. They fill essential roles as decomposers and food sources for lower trophic levels, and are necessary to drive processes within larger organisms.
Role
One particular example of the role of microfauna can be seen in soil, where they are important in the cycling of nutrients in ecosystems. Soil microfauna are capable of digesting just about any organic substance, and some inorganic substances. These organisms are often essential links in the food chain between primary producers and larger species. For example, zooplankton, such as foraminifera, are widespread microscopic animals and protists which feed on algae and detritus in the ocean.
Microfauna also aid in digestion and other processes in larger organisms.
Cryptozoa
The microfauna are the least understood of soil life, due to their small size and great diversity. Many microfauna are members of the so-called cryptozoa, animals that remain undescribed by science. Out of the estimated 10-20 million animal species in the world, only 1.8 million have been given scientific names, and many of the remaining millions are likely microfauna, much of it from the tropics.
Phyla
Notable phyla include:
Microscopic arthropods, including dust mites, spider mites, and some crustaceans such as copepods and certain cladocera.
Tardigrades ("water bears")
Rotifers, which are filter feeders that are usually found in fresh water.
Some nematode species
Many loricifera, including the recently discovered anaerobic species, which spend their entire lives in an anoxic environment.
See also
Fauna
Megafauna
Mesofauna
Document 4:::
A psammophile is a plant or animal that prefers or thrives in sandy areas. Plant psammophiles are also known as psammophytes. They thrive in places such as the Arabian Peninsula and the Sahara and also the dunes of coastal regions.
Because of the unique ecological selective pressures of sand, animals on opposite sides of the planet often convergently evolve similar features, sometimes referred to as ecomorphological convergence. Crotalus cerastes, native to American deserts, and Bitis peringueyi, native to Namibian deserts, have independently evolved sidewinding locomotion to traverse sand. Likewise, the African jerboa and the American kangaroo rat have separately evolved a bipedal form with large hind legs that allow them to hop.
Etymology
Psammo is from Ancient Greek ψάμμος (psámmos, “sand”); -philo is from Ancient Greek φίλος (phílos, “dear, beloved”) via Latin -phila.
Popular culture
With the correct spelling of the word psammophile, Florida eighth-grader Dev Shah, one of 231 contestants, won the 95th Scripps National Spelling Bee in June 2023 and was awarded $50,000 in prize money.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In this food web, which organism contains matter that eventually moves to the parasol fungus?
A. gray fox
B. swallowtail caterpillar
C. black racer
D. bobcat
Answer:
|
sciq-10175
|
multiple_choice
|
Most vascular plants are seed plants, also known as?
|
[
"spermatophytes",
"fungus",
"bacteria",
"sporozoans"
] |
A
|
Relevant Documents:
Document 0:::
The following is a list of vascular plants, bryophytes and lichens which are constant species in one or more community of the British National Vegetation Classification system.
Vascular plants
Grasses
Sedges and rushes
Trees
Other dicotyledons
Other monocotyledons
Ferns
Clubmosses
Bryophytes
Mosses
Liverworts
Lichens
British National Vegetation Classification
Lists of biota of the United Kingdom
British National Vegetation Classification, constant
Document 1:::
Microspores are land plant spores that develop into male gametophytes, whereas megaspores develop into female gametophytes. The male gametophyte gives rise to sperm cells, which are used for fertilization of an egg cell to form a zygote. Megaspores are structures that are part of the alternation of generations in many seedless vascular cryptogams, all gymnosperms and all angiosperms. Plants with heterosporous life cycles using microspores and megaspores arose independently in several plant groups during the Devonian period. Microspores are haploid, and are produced from diploid microsporocytes by meiosis.
Morphology
The microspore has three different types of wall layers. The outer layer is called the perispore, the next is the exospore, and the inner layer is the endospore. The perispore is the thickest of the three layers while the exospore and endospore are relatively equal in width.
Seedless vascular plants
In heterosporous seedless vascular plants, modified leaves called microsporophylls bear microsporangia containing many microsporocytes that undergo meiosis, each producing four microspores. Each microspore may develop into a male gametophyte consisting of a somewhat spherical antheridium within the microspore wall. Either 128 or 256 sperm cells with flagella are produced in each antheridium. The only heterosporous ferns are aquatic or semi-aquatic, including the genera Marsilea, Regnellidium, Pilularia, Salvinia, and Azolla. Heterospory also occurs in the lycopods in the spikemoss genus Selaginella and in the quillwort genus Isoëtes.
Types of seedless vascular plants:
Water ferns
Spikemosses
Quillworts
Gymnosperms
In seed plants the microspores develop into pollen grains each containing a reduced, multicellular male gametophyte. The megaspores, in turn, develop into reduced female gametophytes that produce egg cells that, once fertilized, develop into seeds. Pollen cones or microstrobili usually develop toward the tips of the lower branches in cluste
Document 2:::
Macroflora is a term used for all the plants occurring in a particular area that are large enough to be seen with the naked eye. It is usually synonymous with the Flora and can be contrasted with the microflora, a term used for all the bacteria and other microorganisms in an ecosystem.
Macroflora is also an informal term used by many palaeobotanists to refer to an assemblage of plant fossils as preserved in the rock. This is in contrast to the flora, which in this context refers to the assemblage of living plants that were growing in a particular area, whose fragmentary remains became entrapped within the sediment from which the rock was formed and thus became the macroflora.
Document 3:::
Vascular plants, also called tracheophytes or collectively Tracheophyta, form a large group of land plants that have lignified tissues (the xylem) for conducting water and minerals throughout the plant. They also have a specialized non-lignified tissue (the phloem) to conduct products of photosynthesis. Vascular plants include the clubmosses, horsetails, ferns, gymnosperms (including conifers), and angiosperms (flowering plants). Scientific names for the group include Tracheophyta, Tracheobionta and Equisetopsida sensu lato. Some early land plants (the rhyniophytes) had less developed vascular tissue; the term eutracheophyte has been used for all other vascular plants, including all living ones.
Historically, vascular plants were known as "higher plants", as it was believed that they were further evolved than other plants due to being more complex organisms. However, this is an antiquated remnant of the obsolete scala naturae, and the term is generally considered to be unscientific.
Characteristics
Botanists define vascular plants by three primary characteristics:
Vascular plants have vascular tissues which distribute resources through the plant. Two kinds of vascular tissue occur in plants: xylem and phloem. Phloem and xylem are closely associated with one another and are typically located immediately adjacent to each other in the plant. The combination of one xylem and one phloem strand adjacent to each other is known as a vascular bundle. The evolution of vascular tissue in plants allowed them to evolve to larger sizes than non-vascular plants, which lack these specialized conducting tissues and are thereby restricted to relatively small sizes.
In vascular plants, the principal generation or phase is the sporophyte, which produces spores and is diploid (having two sets of chromosomes per cell). (By contrast, the principal generation phase in non-vascular plants is the gametophyte, which produces gametes and is haploid - with
Document 4:::
Non-vascular plants are plants without a vascular system consisting of xylem and phloem. Instead, they may possess simpler tissues that have specialized functions for the internal transport of water.
Non-vascular plants include two distantly related groups:
Bryophytes, an informal group that taxonomists treat as three separate land-plant divisions, namely: Bryophyta (mosses), Marchantiophyta (liverworts), and Anthocerotophyta (hornworts). In all bryophytes, the primary plants are the haploid gametophytes, with the only diploid portion being the attached sporophyte, consisting of a stalk and sporangium. Because these plants lack lignified water-conducting tissues, they cannot become as tall as most vascular plants.
Algae, especially green algae. The algae consist of several unrelated groups. Only the groups included in the Viridiplantae are still considered relatives of land plants.
These groups are sometimes called "lower plants", referring to their status as the earliest plant groups to evolve, but the usage is imprecise since both groups are polyphyletic and may be used to include vascular cryptogams, such as the ferns and fern allies that reproduce using spores. Non-vascular plants are often among the first species to move into new and inhospitable territories, along with prokaryotes and protists, and thus function as pioneer species.
Non-vascular plants do not have a wide variety of specialized tissue types. Mosses and leafy liverworts have structures called phyllids that resemble leaves, but only consist of single sheets of cells with no internal air spaces, no cuticle or stomata, and no xylem or phloem. Consequently, phyllids are unable to control the rate of water loss from their tissues and are said to be poikilohydric. Some liverworts, such as Marchantia, have a cuticle, and the sporophytes of mosses have both cuticles and stomata, which were important in the evolution of land plants.
All land plants have a life cycle with an alternation of generatio
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Most vascular plants are seed plants, also known as?
A. spermatophytes
B. fungus
C. bacteria
D. sporozoans
Answer:
|
|
scienceQA-3413
|
multiple_choice
|
What do these two changes have in common?
slicing cheese
carving a piece of wood
|
[
"Both are caused by heating.",
"Both are chemical changes.",
"Both are caused by cooling.",
"Both are only physical changes."
] |
D
|
Step 1: Think about each change.
Slicing cheese is a physical change. The cheese changes shape. But it is still made of the same type of matter.
Carving a piece of wood is a physical change. The wood changes shape, but it is still made of the same type of matter.
Step 2: Look at each answer choice.
Both are only physical changes.
Both changes are physical changes. No new matter is created.
Both are chemical changes.
Both changes are physical changes. They are not chemical changes.
Both are caused by heating.
Neither change is caused by heating.
Both are caused by cooling.
Neither change is caused by cooling.
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 2:::
Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria.
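As a rough illustration of how pairwise judgements can be turned into a scaled distribution, the sketch below fits a simple Bradley-Terry model with fixed-point updates to hypothetical judgement data. True adaptive comparative judgement additionally chooses which pairs to present to judges adaptively; that part is omitted here.

```python
# A minimal sketch: fit a Bradley-Terry model to hypothetical pairwise
# judgements with fixed-point (MM) updates, yielding a scaled quality
# score per script. The judgement data are hypothetical.
from collections import defaultdict

# Hypothetical judge decisions: (winner, loser) pairs.
judgements = [
    ("script_a", "script_b"),
    ("script_a", "script_c"),
    ("script_b", "script_c"),
    ("script_a", "script_b"),
    ("script_c", "script_b"),
]

scripts = sorted({s for pair in judgements for s in pair})
wins = defaultdict(int)          # total wins per script
n_compared = defaultdict(int)    # times each (i, j) pair was compared
for winner, loser in judgements:
    wins[winner] += 1
    n_compared[winner, loser] += 1
    n_compared[loser, winner] += 1

# Bradley-Terry: P(i beats j) = s_i / (s_i + s_j).
strength = {s: 1.0 for s in scripts}
for _ in range(200):  # fixed-point iterations (converges quickly here)
    updated = {}
    for i in scripts:
        denom = sum(
            n_compared[i, j] / (strength[i] + strength[j])
            for j in scripts
            if j != i and n_compared[i, j] > 0
        )
        updated[i] = wins[i] / denom if denom > 0 else strength[i]
    norm = sum(updated.values())
    strength = {s: v * len(scripts) / norm for s, v in updated.items()}

for s in sorted(scripts, key=strength.get, reverse=True):
    print(f"{s}: {strength[s]:.3f}")
```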
Introduction
Traditional exam script marking began in Cambridge 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.)
Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental.
The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel
Document 3:::
Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work.
Description
Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments.
The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics.
Specialized branches include engineering optimization and engineering statistics.
Engineering mathematics in tertiary educ
Document 4:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
slicing cheese
carving a piece of wood
A. Both are caused by heating.
B. Both are chemical changes.
C. Both are caused by cooling.
D. Both are only physical changes.
Answer:
|
sciq-5924
|
multiple_choice
|
What happens when alkanes are mixed with oxygen at room temperature?
|
[
"combustion",
"no reaction",
"redox",
"single replacement"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 2:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
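Assuming the reported scores are roughly normal with the stated mean of 526 and standard deviation of 95 (an approximation; ETS's published percentile tables are the authoritative source), a score can be placed within the distribution as follows:

```python
# A rough illustration only: approximate the percentile of a scaled
# score assuming a normal distribution with the reported mean (526)
# and standard deviation (95).
from statistics import NormalDist

score_dist = NormalDist(mu=526, sigma=95)

for score in (320, 526, 620, 760):
    percentile = score_dist.cdf(score) * 100
    print(f"score {score}: percentile = {percentile:.1f}")
```

Under this assumption, 760 lands at roughly the 99th percentile and 320 near the bottom percentile, consistent with the maximum and minimum reported scores above.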
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 3:::
Test equating traditionally refers to the statistical process of determining comparable scores on different forms of an exam. It can be accomplished using either classical test theory or item response theory.
In item response theory, equating is the process of placing scores from two or more parallel test forms onto a common score scale. The result is that scores from two different test forms can be compared directly, or treated as though they came from the same test form. When the tests are not parallel, the general process is called linking. It is the process of equating the units and origins of two scales on which the abilities of students have been estimated from results on different tests. The process is analogous to equating degrees Fahrenheit with degrees Celsius by converting measurements from one scale to the other. The determination of comparable scores is a by-product of equating that results from equating the scales obtained from test results.
Purpose
Suppose that Dick and Jane both take a test to become licensed in a certain profession. Because the high stakes (you get to practice the profession if you pass the test) may create a temptation to cheat, the organization that oversees the test creates two forms. If we know that Dick scored 60% on form A and Jane scored 70% on form B, do we know for sure which one has a better grasp of the material? What if form A is composed of very difficult items, while form B is relatively easy? Equating analyses are performed to address this very issue, so that scores are as fair as possible.
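A minimal sketch of one common approach, linear (mean-sigma) equating, applied to Dick and Jane's situation with hypothetical score samples for the two forms: form-B scores are rescaled so that the transformed scores share form A's mean and standard deviation, much like converting between temperature scales.

```python
# A minimal sketch of linear (mean-sigma) equating with hypothetical
# score samples for two test forms.
from statistics import mean, stdev

form_a_scores = [55, 60, 62, 70, 74, 80, 85]  # harder form (hypothetical)
form_b_scores = [65, 70, 72, 80, 84, 90, 95]  # easier form (hypothetical)

# Linear transform: equated_score = slope * score_b + intercept
slope = stdev(form_a_scores) / stdev(form_b_scores)
intercept = mean(form_a_scores) - slope * mean(form_b_scores)

def equate_b_to_a(score_b: float) -> float:
    """Map a form-B score onto form A's scale."""
    return slope * score_b + intercept

print(equate_b_to_a(70))  # Jane's 70% on form B -> 60.0 on form A's scale
```

With these illustrative numbers, Jane's 70% on the easier form B equates to 60% on form A's scale, i.e. the same standing as Dick's 60%.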
Equating in item response theory
In item response theory, person "locations" (measures of some quality being assessed by a test) are estimated on an interval scale; i.e., locations are estimated in relation to a unit and origin. It is common in educational assessment to employ tests in order to assess different groups of students with the intention of establishing a common scale by equating the origins, and when appropri
Document 4:::
The School of Textile and Clothing industries (ESITH) is a Moroccan engineering school, established in 1996, that focuses on textiles and clothing. It was created in collaboration with ENSAIT and ENSISA, as a result of a public private partnership designed to grow a key sector in the Moroccan economy. The partnership was successful and has been used as a model for other schools.
ESITH is the only engineering school in Morocco that provides a comprehensive program in textile engineering, with internships for students at the Canadian Group CTT. ESITH offers three programs in industrial engineering: product management; supply chain and logistics; and textile and clothing
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What happens when alkanes are mixed with oxygen at room temperature?
A. combustion
B. no reaction
C. redox
D. single replacement
Answer:
|
|
sciq-1756
|
multiple_choice
|
Different elements differ in the size, mass, and other properties of what fundamental structures?
|
[
"particles",
"atoms",
"ions",
"compounds"
] |
B
|
Relevant Documents:
Document 0:::
Material is a substance or mixture of substances that constitutes an object. Materials can be pure or impure, living or non-living matter. Materials can be classified on the basis of their physical and chemical properties, or on their geological origin or biological function. Materials science is the study of materials, their properties and their applications.
Raw materials can be processed in different ways to influence their properties, by purification, shaping or the introduction of other materials. New materials can be produced from raw materials by synthesis.
In industry, materials are inputs to manufacturing processes to produce products or more complex materials.
Historical elements
Materials chart the history of humanity. The system of the three prehistoric ages (Stone Age, Bronze Age, Iron Age) was succeeded by historical ages: the steel age in the 19th century, the polymer age in the middle of the following century (plastic age) and the silicon age in the second half of the 20th century.
Classification by use
Materials can be broadly categorized in terms of their use, for example:
Building materials are used for construction
Building insulation materials are used to retain heat within buildings
Refractory materials are used for high-temperature applications
Nuclear materials are used for nuclear power and weapons
Aerospace materials are used in aircraft and other aerospace applications
Biomaterials are used for applications interacting with living systems
Material selection is a process to determine which material should be used for a given application.
Classification by structure
The relevant structure of materials has a different length scale depending on the material. The structure and composition of a material can be determined by microscopy or spectroscopy.
Microstructure
In engineering, materials can be categorised according to their microscopic structure:
Plastics: a wide range of synthetic or semi-synthetic materials that use polymers as a main ingred
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
The periodic table is an arrangement of the chemical elements, structured by their atomic number, electron configuration and recurring chemical properties. In the basic form, elements are presented in order of increasing atomic number, in the reading sequence. Then, rows and columns are created by starting new rows and inserting blank cells, so that rows (periods) and columns (groups) show elements with recurring properties (called periodicity). For example, all elements in group (column) 18 are noble gases that are largely—though not completely—unreactive.
The history of the periodic table reflects over two centuries of growth in the understanding of the chemical and physical properties of the elements, with major contributions made by Antoine-Laurent de Lavoisier, Johann Wolfgang Döbereiner, John Newlands, Julius Lothar Meyer, Dmitri Mendeleev, Glenn T. Seaborg, and others.
Early history
Nine chemical elements – carbon, sulfur, iron, copper, silver, tin, gold, mercury, and lead – have been known since before antiquity, as they are found in their native form and are relatively simple to mine with primitive tools. Around 330 BCE, the Greek philosopher Aristotle proposed that everything is made up of a mixture of one or more roots, an idea originally suggested by the Sicilian philosopher Empedocles. The four roots, which the Athenian philosopher Plato called elements, were earth, water, air and fire. Similar ideas about these four elements existed in other ancient traditions, such as Indian philosophy.
A few extra elements were known in the age of alchemy: zinc, arsenic, antimony, and bismuth. Platinum was also known to pre-Columbian South Americans, but knowledge of it did not reach Europe until the 16th century.
First categorizations
The history of the periodic table is also a history of the discovery of the chemical elements. The first person in recorded history to discover a new element was Hennig Brand, a bankrupt German merchant. Brand tried to discover
Document 3:::
A nonmetal is a chemical element that mostly lacks metallic properties. Seventeen elements are generally considered nonmetals, though some authors recognize more or fewer depending on the properties considered most representative of metallic or nonmetallic character. Some borderline elements further complicate the situation.
Nonmetals tend to have low density and high electronegativity (the ability of an atom in a molecule to attract electrons to itself). They range from colorless gases like hydrogen to shiny solids like the graphite form of carbon. Nonmetals are often poor conductors of heat and electricity, and when solid tend to be brittle or crumbly. In contrast, metals are good conductors and most are pliable. While compounds of metals tend to be basic, those of nonmetals tend to be acidic.
The two lightest nonmetals, hydrogen and helium, together make up about 98% of the observable ordinary matter in the universe by mass. Five nonmetallic elements—hydrogen, carbon, nitrogen, oxygen, and silicon—make up the overwhelming majority of the Earth's crust, atmosphere, oceans and biosphere.
The distinct properties of nonmetallic elements allow for specific uses that metals often cannot achieve. Elements like hydrogen, oxygen, carbon, and nitrogen are essential building blocks for life itself. Moreover, nonmetallic elements are integral to industries such as electronics, energy storage, agriculture, and chemical production.
Most nonmetallic elements were not identified until the 18th and 19th centuries. While a distinction between metals and other minerals had existed since antiquity, a basic classification of chemical elements as metallic or nonmetallic emerged only in the late 18th century. Since then nigh on two dozen properties have been suggested as single criteria for distinguishing nonmetals from metals.
Definition and applicable elements
Properties mentioned hereafter refer to the elements in their most stable forms in ambient conditions unless otherwise
Document 4:::
Characterization, when used in materials science, refers to the broad and general process by which a material's structure and properties are probed and measured. It is a fundamental process in the field of materials science, without which no scientific understanding of engineering materials could be ascertained. The scope of the term often differs; some definitions limit the term's use to techniques which study the microscopic structure and properties of materials, while others use the term to refer to any materials analysis process including macroscopic techniques such as mechanical testing, thermal analysis and density calculation. The scale of the structures observed in materials characterization ranges from angstroms, such as in the imaging of individual atoms and chemical bonds, up to centimeters, such as in the imaging of coarse grain structures in metals.
While many characterization techniques have been practiced for centuries, such as basic optical microscopy, new techniques and methodologies are constantly emerging. In particular the advent of the electron microscope and secondary ion mass spectrometry in the 20th century has revolutionized the field, allowing the imaging and analysis of structures and compositions on much smaller scales than was previously possible, leading to a huge increase in the level of understanding as to why different materials show different properties and behaviors. More recently, atomic force microscopy has further increased the maximum possible resolution for analysis of certain samples in the last 30 years.
Microscopy
Microscopy is a category of characterization techniques which probe and map the surface and sub-surface structure of a material. These techniques can use photons, electrons, ions or physical cantilever probes to gather data about a sample's structure on a range of length scales. Some common examples of microscopy techniques include:
Optical microscopy
Scanning electron microscopy (SEM)
Transmission electron mi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Different elements differ in the size, mass, and other properties of what fundamental structures?
A. particles
B. atoms
C. ions
D. compounds
Answer:
|
|
sciq-9469
|
multiple_choice
|
What is a dip slip fault where the dip of the fault plane is vertical?
|
[
"reverse slip",
"strike-slip",
"strike - theory",
"incline slip"
] |
B
|
Relevant Documents:
Document 0:::
Shale Gouge Ratio (typically abbreviated to SGR) is a mathematical algorithm that aims to predict the fault rock types for simple fault zones developed in sedimentary sequences dominated by sandstone and shale.
The parameter is widely used in the oil and gas exploration and production industries to enable quantitative predictions to be made regarding the hydrodynamic behavior of faults.
Definition
At any point on a fault surface, the shale gouge ratio is equal to the net shale/clay content of the rocks that have slipped past that point.
The SGR algorithm assumes complete mixing of the wall-rock components in any particular 'throw interval'. The parameter is a measure of the 'upscaled' composition of the fault zone.
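Under these assumptions, the calculation at a single fault-surface point reduces to a thickness-weighted average of clay content over the slipped interval. A minimal sketch with hypothetical bed data:

```python
# A minimal sketch of the SGR calculation at one point on a fault,
# assuming complete mixing over the throw interval as the algorithm
# does. Bed thicknesses and clay fractions are hypothetical.

# (bed thickness in metres, clay/shale volume fraction) for each bed
# in the interval of wall rock that has slipped past the point:
beds = [
    (10.0, 0.05),  # clean sandstone
    (5.0, 0.80),   # shale
    (15.0, 0.20),  # silty sandstone
    (10.0, 0.65),  # shaly sandstone
]

throw = sum(thickness for thickness, _ in beds)  # 40 m slipped interval

# SGR (%) = sum(thickness * clay fraction) / throw * 100
sgr = 100.0 * sum(t * vcl for t, vcl in beds) / throw
print(f"SGR = {sgr:.1f}%")  # 35.0% -> a fairly clay-rich fault zone
```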
Application to hydrocarbon exploration
Hydrocarbon exploration involves identifying and defining accumulations of hydrocarbons that are trapped in subsurface structures. These structures are often segmented by faults. For a thorough trap evaluation, it is necessary to predict whether the fault is sealing or leaking to hydrocarbons and also to provide an estimate of how 'strong' the fault seal might be. The 'strength' of a fault seal can be quantified in terms of subsurface pressure, arising from the buoyancy forces within the hydrocarbon column, that the fault can support before it starts to leak. When acting on a fault zone this subsurface pressure is termed capillary threshold pressure.
For faults developed in sandstone and shale sequences, the first order control on capillary threshold pressure is likely to be the composition, in particular the shale or clay content, of the fault-zone material. SGR is used to estimate the shale content of the fault zone.
In general, fault zones with higher clay content, equivalent to higher SGR values, can support higher capillary threshold pressures. On a broader scale, other factors also exert a control on the threshold pressure, such as depth of the rock sequence at the time of faulting, and the maxim
Document 1:::
Episodic tremor and slip (ETS) is a seismological phenomenon observed in some subduction zones that is characterized by non-earthquake seismic rumbling, or tremor, and slow slip along the plate interface. Slow slip events are distinguished from earthquakes by their propagation speed and focus. In slow slip events, there is an apparent reversal of crustal motion, although the fault motion remains consistent with the direction of subduction. ETS events themselves are imperceptible to human beings and do not cause damage.
Discovery
Nonvolcanic, episodic tremor was first identified in southwest Japan in 2002. Shortly afterwards, the Geological Survey of Canada coined the term "episodic tremor and slip" to characterize observations of GPS measurements in the Vancouver Island area. Vancouver Island lies in the eastern, North American region of the Cascadia subduction zone. ETS events in Cascadia were observed to reoccur cyclically with a period of approximately 14 months. Analysis of measurements led to the successful prediction of ETS events in following years (e.g., 2003, 2004, 2005, and 2007). In Cascadia, these events are marked by about two weeks of 1 to 10 Hz seismic trembling and non-earthquake ("aseismic") slip on the plate boundary equivalent to a magnitude 7 earthquake. (Tremor is a weak seismological signal only detectable by very sensitive seismometers.) Recent episodes of tremor and slip in the Cascadia region have occurred down-dip of the region ruptured in the 1700 Cascadia earthquake.
Since the initial discovery of this seismic mode in the Cascadia region, slow slip and tremor have been detected in other subduction zones around the world, including Japan and Mexico.
Slow slip is not accompanied by tremor in the Hikurangi Subduction Zone.
Every five years a year-long quake of this type occurs beneath the New Zealand capital, Wellington. It was first measured in 2003, and has reappeared in 2008 and 2013.
Characteristics
Slip behaviour
In the Casca
Document 2:::
Newmark's sliding block analysis is an engineering method that calculates permanent displacements of soil slopes (also embankments and dams) during seismic loading. Newmark analysis does not calculate actual displacement; rather, it yields an index value that can be used to provide an indication of the structure's likelihood of failure during a seismic event. It is also simply called Newmark's analysis or the sliding block method of slope stability analysis.
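A minimal sketch of the core calculation, under the usual rigid-block simplifications and with a hypothetical yield acceleration and ground-motion record: the block slips whenever ground acceleration exceeds the yield acceleration, and the relative velocity is integrated to give the permanent-displacement index.

```python
# A minimal sketch of a Newmark sliding-block calculation. The yield
# acceleration and the acceleration record below are hypothetical.
import math

G = 9.81             # gravitational acceleration, m/s^2
DT = 0.01            # time step, s
A_YIELD = 0.10 * G   # yield acceleration of the slope (hypothetical)

# Hypothetical ground acceleration: 3 s of a 1 Hz, 0.25 g sinusoid.
ground_accel = [0.25 * G * math.sin(2 * math.pi * 1.0 * k * DT)
                for k in range(300)]

velocity = 0.0       # sliding velocity of the block relative to ground
displacement = 0.0   # accumulated permanent (Newmark) displacement

for a in ground_accel:
    if a > A_YIELD or velocity > 0.0:
        # While sliding, the net driving acceleration is a - A_YIELD
        # (negative values decelerate the block until it sticks again).
        velocity = max(velocity + (a - A_YIELD) * DT, 0.0)
    displacement += velocity * DT

print(f"Newmark displacement index: {displacement:.3f} m")
```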
History
The method is an extension of the Newmark's direct integration method originally proposed by Nathan M. Newmark in 1943. It was applied to the sliding block problem in a lecture delivered by him in 1965 in the British Geotechnical Association's 5th Rankine Lecture in London and published later in the Association's scientific journal Geotechnique. The extension owes a great deal to Nicholas Ambraseys whose doctoral thesis on the seismic stability of earth dams at Imperial College London in 1958 formed the basis of the method. At his Rankine Lecture, Newmark himself acknowledged Ambraseys' contribution to this method through various discussions between the two researchers while the latter was a visiting professor at the University of Illinois.
Method
According to Kramer, the Newmark method is an improvement over the traditional pseudo-static method which considered the seismic slope failure only at limiting conditions (i.e. when the Factor of Safety, FOS, became equal to 1) and providing information about the collapse state but no information about the induced deformations. The new method points out that when the FOS becomes less than 1 "failure" does not necessarily occur as the time for which this happens is very short. However, each time the FOS falls below unity, some permanent deformations occur which accumulate whenever FOS < 1. The method further suggests that a failing mass from the slope may be considered as a block of mass sliding (and therefore sliding block) on an inclined surface only when the i
Document 3:::
Maui Nui is a modern geologists' name given to a prehistoric Hawaiian island and the corresponding modern biogeographic region. Maui Nui is composed of four modern islands: Maui, Molokaʻi, Lānaʻi, and Kahoʻolawe. Administratively, the four modern islands comprise Maui County (and a tiny part of Molokaʻi called Kalawao County). Long after the breakup of Maui Nui, the four modern islands retained plant and animal life similar to each other. Thus, Maui Nui is not only a prehistoric island but also a modern biogeographic region.
Geology
Maui Nui formed and broke up during the Pleistocene Epoch, which lasted from about 2.58 million to 11,700 years ago.
Maui Nui is built from seven shield volcanoes. The three oldest are Penguin Bank, West Molokaʻi, and East Molokaʻi, which probably range from slightly over to slightly less than 2 million years old. The four younger volcanoes are Lāna‘i, West Maui, Kaho‘olawe, and Haleakalā, which probably formed between 1.5 and 2 million years ago.
At its prime 1.2 million years ago, Maui Nui was 50% larger than today's Hawaiʻi Island. The island of Maui Nui included four modern islands (Maui, Molokaʻi, Lānaʻi, and Kahoʻolawe) and a landmass west of Molokaʻi called Penguin Bank, which is now completely submerged.
Maui Nui broke up as rising sea levels flooded the connections between the volcanoes. The breakup was complex because global sea levels rose and fell intermittently during the Quaternary glaciation. About 600,000 years ago, the connection between Molokaʻi and the island of Lāna‘i/Maui/Kahoʻolawe became intermittent. About 400,000 years ago, the connection between Lāna‘i and Maui/Kahoʻolawe also became intermittent. The connection between Maui and Kahoʻolawe was permanently broken between 200,000 and 150,000 years ago. Maui, Lāna‘i, and Molokaʻi were connected intermittently thereafter, most recently about 18,000 years ago during the Last Glacial Maximum.
Today, the sea floor between these four islands is relatively shallow
Document 4:::
The Géotechnique lecture is a biennial lecture on the topic of soil mechanics, organised by the British Geotechnical Association and named after its major scientific journal Géotechnique.
This should not be confused with the annual BGA Rankine Lecture.
List of Géotechnique Lecturers
See also
Named lectures
Rankine Lecture
Terzaghi Lecture
External links
ICE Géotechnique journal
British Geotechnical Association
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is a dip slip fault where the dip of the fault plane is vertical?
A. reverse slip
B. strike-slip
C. strike - theory
D. incline slip
Answer:
|
|
sciq-5979
|
multiple_choice
|
Sponges possess an internal skeleton called what?
|
[
"endoskeleton",
"hydrostatic skeleton",
"fluid skeleton",
"exoskelton"
] |
A
|
Relevant Documents:
Document 0:::
Pinacocytes are flat cells found on the outside of sponges, as well as lining the internal canals of a sponge. Pinacocytes are not specific to sponges, however: it has been discovered that pinacocytes do not have many sponge-specific genes, which suggests that pinacocytes evolved before the metazoan period, that is, before Porifera evolved.
Function
Pinacocytes are part of the epithelium in sponges. They play a role in movement (contracting and stretching), cell adhesion, signaling, phagocytosis, and polarity. Pinacocytes are filled with mesohyl, a gel-like substance that helps maintain the shape and structure of the sponge.
Types
Basipinacocytes
These are the cells in contact with the sponge's substrate (the surface to which it is attached).
Exopinacocytes
These are found on the exterior of the sponge. Exopinococytes produce spicules which is a needle like process that serves as structure for the organism.
Endopinacocytes
These line the sponge's interior canals.
Document 1:::
Porocytes are tubular cells which make up the pores of a sponge, known as ostia.
Description
Covering the sponge is a layer of cells known as the pinacoderm, which is composed of pinacocytes. In a sponge, pinacocytes are a thin, elastic layer which keeps water out. Between the pinacocytes are the porocytes, which allow water into the sponge. Myocytes are small muscular cells that open and close the porocytes. They also form a circular ring around the osculum and help in opening and closing it. Once through the pores, water travels down canals. The opening to a porocyte is a pore known as an ostium.
In sponges like Scypha, there are some cells that have an intracellular pore. These cells are known as porocytes. They are present in Leucosolenia (an asconoid sponge) in the body wall, through which water enters the body, or in Scypha (a syconoid sponge) as a connection between the incurrent canal and the radial canal. The pore is called an ostium in asconoid sponges, as it serves as the connection between the outside of the body and the spongocoel, but is called a prosopyle in syconoid sponges. Porocytes are modified pinacocytes.
Notes
Animal cells
Sponge anatomy
Document 2:::
In zoology, the epidermis is an epithelium (sheet of cells) that covers the body of a eumetazoan (animal more complex than a sponge). Eumetazoa have a cavity lined with a similar epithelium, the gastrodermis, which forms a boundary with the epidermis at the mouth.
Sponges have no epithelium, and therefore no epidermis or gastrodermis. The epidermis of a more complex invertebrate is just one layer deep, and may be protected by a non-cellular cuticle. The epidermis of a higher vertebrate has many layers, and the outer layers are reinforced with keratin and then die.
Document 3:::
An exoskeleton (from Greek éxō "outer" and skeletós "skeleton") is an external skeleton that both supports the body shape and protects the internal organs of an animal, in contrast to an internal endoskeleton (e.g. that of a human) which is enclosed under other soft tissues. Some large, hard protective exoskeletons are known as "shells".
Examples of exoskeletons in animals include the arthropod exoskeleton shared by arthropods (insects, chelicerates, myriapods and crustaceans) and tardigrades, as well as the outer shell of certain sponges and the mollusc shell shared by snails, clams, tusk shells, chitons and nautilus. Some vertebrate animals, such as the turtle, have both an endoskeleton and a protective exoskeleton.
Role
Exoskeletons contain rigid and resistant components that fulfil a set of functional roles in many animals including protection, excretion, sensing, support, feeding, and acting as a barrier against desiccation in terrestrial organisms. Exoskeletons have roles in defence from pests and predators and in providing an attachment framework for musculature.
Arthropod exoskeletons contain chitin; the addition of calcium carbonate makes them harder and stronger, at the price of increased weight. Ingrowths of the arthropod exoskeleton known as apodemes serve as attachment sites for muscles. These structures are composed of chitin and are approximately six times stronger and twice the stiffness of vertebrate tendons. Similar to tendons, apodemes can stretch to store elastic energy for jumping, notably in locusts. Calcium carbonates constitute the shells of molluscs, brachiopods, and some tube-building polychaete worms. Silica forms the exoskeleton in the microscopic diatoms and radiolaria. One mollusc species, the scaly-foot gastropod, even uses the iron sulfides greigite and pyrite.
Some organisms, such as some foraminifera, agglutinate exoskeletons by sticking grains of sand and shell to their exterior. Contrary to a common misconception, echinoder
Document 4:::
Archaeocytes (from Greek archaios "beginning" and kytos "hollow vessel") or amoebocytes are amoeboid cells found in sponges. They are totipotent and have varied functions depending on the species.
The structure of these cells matches that of stem cells, with a high cytoplasmic content that helps the cells morph according to their function.
Location
Archaeocytes are found in the mesohyl along with other specialized sponge cells, including collencytes, and structural elements called spicules. They move about within the mesohyl with amoeba-like movements, performing a number of important functions.
Functions
Cellular differentiation is an essential function of the archaeocyte. All specialized cells within the sponge have their origins in the archaeocyte. This is especially important in reproduction, as the sex cells of the sponge in sexual reproduction are formed from these amoeboid cells. Similarly, in asexual reproduction, amoebocytes result in the formation of gemmules, which are cyst-like spheres containing more amoebocytes as well as other sponge cells, including the phylum-specific choanocyte. These cells move within the walls of a sponge and form spicules.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Sponges possess an internal skeleton called what?
A. endoskeleton
B. hydrostatic skeleton
C. fluid skeleton
D. exoskeleton
Answer:
|
|
sciq-6670
|
multiple_choice
|
What type of behavior has the advantage of being flexible and capable of changing to suit changing conditions?
|
[
"noted behavior",
"learned behavior",
"saved behavior",
"inherited behavior"
] |
B
|
Relevant Documents:
Document 0:::
Behavior management, similar to behavior modification, is a less-intensive form of behavior therapy. Unlike behavior modification, which focuses on changing behavior, behavior management focuses on maintaining positive habits and behaviors and reducing negative ones. Behavior management skills are especially useful for teachers and educators, healthcare workers, and those working in supported living communities. This form of management aims to help professionals oversee and guide behavior management in individuals and groups toward fulfilling, productive, and socially acceptable behaviors. Behavior management can be accomplished through modeling, rewards, or punishment.
Research
Influential behavior management researchers B.F. Skinner and Carl Rogers take markedly different approaches to managing behavior.
Skinner claimed that anyone can manipulate behavior by identifying what a person finds rewarding. Once the rewards are known, they can be given in exchange for good behavior. Skinner called this "Positive Reinforcement Psychology."
Rogers proposed that the desire to behave appropriately must come before addressing behavioral problems. This is accomplished by teaching the individual about morality, including why one should do what is right. Rogers held that a person must have an internal awareness of right and wrong.
Many of the principles and techniques are the same as in behavior modification; however, in behavior management they are applied less intensively and administered less often.
In the classroom
Behavior management is often applied by a classroom teacher as a form of behavioral engineering, in order to raise students' retention of material and produce higher yields of student work completion. This also helps to reduce classroom disruption and places more focus on building self-control and self-regulating a calm emotional state.
American educational psychologist Brophy (1986) writes:
In general, behavior management strategies are effective at reducing classroom disruption. Recent
Document 1:::
A behavioral cusp is any behavior change that brings an organism's behavior into contact with new contingencies that have far-reaching consequences. A behavioral cusp is a special type of behavior change because it provides the learner with opportunities to access new reinforcers, new contingencies, new environments, new related behaviors (generativeness) and competition with archaic or problem behaviors. It affects the people around the learner, and these people agree to the behavior change and support its development after the intervention is removed.
The concept has far reaching implications for every individual, and for the field of developmental psychology, because it provides a behavioral alternative to the concept of maturation and change due to the simple passage of time, such as developmental milestones. The cusp is a behavior change that presents special features when compared to other behavior changes.
History
The concept was first proposed by Sidney W. Bijou, an American developmental psychologist. The idea of the cusp was to link behavioral principles to rapid spurts in development (see Behavior analysis of child development).
A behavioral cusp as conceptualized by Jesus Rosales-Ruiz & Donald Baer in 1997 is an important behavior change that affects future behavior changes. The behavioral cusp, like the reinforcer, is apprehended by its effects. Whereas a reinforcer acts on a single response or a group of related responses, the effects of a behavioral cusp regulate a large number of responses in a more distant future.
The concept has been compared to a developmental milestone; however, not all cusps are milestones. For example, learning to play soccer is not a milestone, but it was life-changing for Pelé. As a result of learning to kick grapefruits (the initial important change or cusp), Pelé accessed (1) new environments, (2) new reinforcers, (3) new soccer moves, (4) dropped competing behaviors (smoking), and (5) gained international acclaim for
Document 2:::
"Fixed action pattern" is an ethological term describing an instinctive behavioral sequence that is highly stereotyped and species-characteristic. Fixed action patterns are said to be produced by the innate releasing mechanism, a "hard-wired" neural network, in response to a sign/key stimulus or releaser. Once released, a fixed action pattern runs to completion.
This term is often associated with Konrad Lorenz, who is the founder of the concept. Lorenz identified six characteristics of fixed action patterns. These characteristics state that fixed action patterns are stereotyped, complex, species-characteristic, released, triggered, and independent of experience.
Fixed action patterns have been observed in many species, but most notably in fish and birds. Classic studies by Konrad Lorenz and Niko Tinbergen involve male stickleback mating behavior and greylag goose egg-retrieval behavior.
Fixed action patterns have been shown to be evolutionarily advantageous, as they increase both fitness and speed. However, as a result of their predictability, they may also be used as a means of exploitation. An example of this exploitation would be brood parasitism.
There are four exceptions to fixed action pattern rules: reduced response threshold, vacuum activity, displacement behavior, and graded response.
Characteristics
There are 6 characteristics of fixed action patterns. Fixed action patterns are said to be stereotyped, complex, species-characteristic, released, triggered, and independent of experience.
Stereotyped: Fixed action patterns occur in rigid, predictable, and highly-structured sequences.
Complex: Fixed action patterns are not a simple reflex. They are a complex pattern of behavior.
Species-characteristic: Fixed action patterns occur in all members of a species of a certain sex and/or a given age when they have attained a specific level of arousal.
Released: Fixed action patterns occur in response to a certain sign stimulus or releaser.
Triggered: Once relea
Document 3:::
Behavioral plasticity refers to a change in an organism's behavior that results from exposure to stimuli, such as changing environmental conditions. Behavior can change more rapidly in response to changes in internal or external stimuli than is the case for most morphological traits and many physiological traits. As a result, when organisms are confronted by new conditions, behavioral changes often occur in advance of physiological or morphological changes. For instance, larval amphibians changed their antipredator behavior within an hour after a change in cues from predators, but morphological changes in body and tail shape in response to the same cues required a week to complete.
Background
For many years, ethologists have studied the ways that behavior can change in response to changes in external stimuli or changes in the internal state of an organism. In a parallel literature, psychologists studying learning and cognition have spent years documenting the many ways that experiences in the past can affect the behavior an individual expresses at the current time. Interest in behavioral plasticity gained prominence more recently as an example of a type of phenotypic plasticity with major consequences for evolutionary biology.
Types
Behavioral plasticity can be broadly organized into two types: exogenous and endogenous. Exogenous plasticity refers to the changes in behavioral phenotype (i.e., observable behaviors) caused by an external stimulus, experience, or environment. Endogenous plasticity encompasses plastic responses that result from changes in internal cues, such as genotype, circadian rhythms, and menstruation.
These two broad categories can be further broken down into two other important classifications. When an external stimulus elicits or "activates" an immediate response (an immediate effect on behavior), then the organism is demonstrating contextual plasticity. This form of plasticity highlights the concept that external stimuli in a given context
Document 4:::
Theoretical behaviorism is a framework for psychology proposed by J. E. R. Staddon as an extension of experimental psychologist B. F. Skinner's radical behaviorism. It originated at Harvard in the early 1960s.
In the late 1980s, R. H. Ettinger and Staddon critiqued functional analysis.
Application of selection and variation to behaviorism
In the early 1950s, B. F. Skinner and others began to point out the similarities between the learning process and evolution through variation and selection. More recently, models explicitly analogous to gene mutation and selection by reinforcement have been applied to operant conditioning phenomena. Skinner’s idea of "emitted behavior" is an example of a parallel between evolution and behaviorism: once a behavior varies, a variant that results in reward is strengthened and therefore increases in frequency. When a reward is taken away or when selection is relaxed, there is an increase in variability in both natural selection and selection by reinforcement schedule.
Skinner said little about the causes and types of behavior variation, believing it to be random. On the other hand, Zener, Liddell and others argue that the variation in behaviors that psychological reinforcement acts on is not random. For example, it is different for food than for sex or a social reward. The ethologist Lorenz first identified the dog’s behavior as a particular instinctive pattern, similar to a repertoire.
Repertoire of possible behaviors
A "repertoire" of behaviors involves potential behaviors that may occur under certain conditions, such as if the currently active behavior is unrewarded. The observed repertoire in a particular animal depends on the reward size and nature of the stimulus: anticipation of food will lead to a different repertoire than anticipation of electric shock.
In addition to the active behavior, a repertoire includes latent possible activities. This idea of a latent response was first suggested by B.F. Skinner:
"Our basic dat
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of behavior has the advantage of being flexible and capable of changing to suit changing conditions?
A. noted behavior
B. learned behavior
C. saved behavior
D. inherited behavior
Answer:
|
|
sciq-21
|
multiple_choice
|
What is the termination of a pregnancy in progress called?
|
[
"contraception",
"abortion",
"delivery",
"miscarriage"
] |
B
|
Relevant Documents:
Document 0:::
The Human Fertilisation and Embryology Act 2008 (c 22) is an Act of the Parliament of the United Kingdom. The Act constitutes a major review and update of the Human Fertilisation and Embryology Act 1990.
According to the Department of Health, the Act's key provisions are:
The Bill's discussion in Parliament did not permit time to debate whether it should extend abortion rights under the Abortion Act 1967 to also cover Northern Ireland. The 2008 Act does not alter the status quo.
The Act also repealed and replaced the Human Reproductive Cloning Act 2001.
Document 1:::
Early pregnancy loss is a medical term that when referring to humans can variously be used to mean:
Death of an embryo or fetus during the first trimester. This can happen by implantation failure, miscarriage, embryo resorption, early fetal resorption or vanishing twin syndrome.
Death of an embryo or fetus before 20 weeks gestation, as in all pregnancy loss before it becomes considered stillbirth.
Causes of early pregnancy loss
Pregnancy loss, in many cases, occurs for unknown reasons, often involving random chromosome issues during conception. Miscarriage is not caused by everyday activities like working, exercising, or having sex, and even falls or blows are rarely to blame. Research on the effects of alcohol, tobacco, and caffeine on miscarriage is inconclusive. A miscarriage is therefore generally not something that could have been prevented, and it is not the result of anything the pregnant person did or did not do.
Symptoms of early pregnancy loss
The most prevalent indication of pregnancy loss is vaginal bleeding. In the later stages of pregnancy, a woman experiencing a stillbirth may cease to sense fetal movements. However, each type of pregnancy loss presents distinct symptoms, so a healthcare provider should be consulted for a proper diagnosis.
See also
Pregnancy with abortive outcome
Document 2:::
Pregnant women have historically been excluded from clinical research due to ethical concerns about harming the fetus or the perception of increased risk to the woman. Excluding pregnant women from research has also been called unethical, as it results in a scarcity of data about how therapies affect pregnant women and their fetuses. Despite consensus from bioethicists, researchers, and regulators that pregnant women should be included in clinical research, up to 95% of Phase IV clinical trials that could have included pregnant women did not, according to a 2013 review.
Ethical considerations
There are several points of concern regarding clinical research with pregnant women. Some concern is related to the idea that the fetus cannot give consent to participate in the research. Some clinical research could also result in unexpected harm to the fetus. Other concerns are that pregnant women are potentially more vulnerable to negative side effects than other populations. It has also been hypothesized that pregnant women could be more susceptible to coercion than non-pregnant adults. There is insufficient data to support either of these two latter concerns, according to a 2020 review.
Conversely, the exclusion of pregnant women from clinical research has also been called unethical. The data regarding drug use and pregnancy is scarce and of poor quality. Therefore, pregnant women do not necessarily have the same access to informed, effective healthcare as other populations.
Limiting participation
Due to complications from the drugs thalidomide and diethylstilbestrol in women in the 1960s and 1970s, the US Food and Drug Administration (FDA) enacted protections to limit reproductive-age women's exposure to substances that may cause birth defects. However, the guidelines were interpreted to exclude pregnant women from any clinical trial. Despite a 1994 National Academy of Medicine Report Ethical and Legal Issues of Including Women in Clinical Studies concluding that "preg
Document 3:::
Maternal somatic support after brain death occurs when a brain dead patient is pregnant and their body is kept alive to deliver a fetus. It occurs very rarely internationally. Even among brain dead patients, in a U.S. study of 252 brain dead patients from 1990–96, only 5 (2.8%) cases involved pregnant women between 15 and 45 years of age.
Past cases
In the 28-year period between 1982 and 2010, there were "30 [reported] cases of maternal brain death (19 case reports and 1 case series)." In 12 of those cases, a viable child was delivered via cesarean section after extended somatic support. However, according to Esmaelilzadeh, et al. there is no widely accepted protocol to manage a brain dead mother "since only a few reported cases are found in the medical literature." Moreover, the mother's wishes are rarely, if ever, known, and family should be consulted in developing a care plan.
Life support complications
Throughout their care, brain dead patients could experience a wide range of complications, including "infection, hemodynamic instability, diabetes insipidus (DI), panhypopituitarism, poikilothermia, metabolic instability, acute respiratory distress syndrome and disseminated intravascular coagulation." Treating these complications is difficult since the effects of medication on the fetus's health are unknown.
Fetus's chance of survival
According to Esmaelilzadeh, et al., "[a]t present, it seems that there is no clear lower limit to the gestational age which would restrict the physician's efforts to support the brain dead mother and her fetus." However, the older a fetus is when its mother becomes brain dead, the greater its chance for survival. Research into preterm births indicates that "a fetus born before 24 weeks of gestation has a limited chance of survival. At 24, 28 and 32 weeks, a fetus has approximately a 20–30%, 80% and 98% likelihood of survival with a 40%, 10% and less than 2% chance of suffering from a severe handicap, respectively."
Brain de
Document 4:::
Prenatal perception is the study of the extent of somatosensory and other types of perception during pregnancy. In practical terms, this means the study of fetuses; none of the accepted indicators of perception are present in embryos. Studies in the field inform the abortion debate, along with certain related pieces of legislation in countries affected by that debate. As of 2022, there is no scientific consensus on whether a fetus can feel pain.
Prenatal hearing
Numerous studies have found evidence indicating a fetus's ability to respond to auditory stimuli. The earliest fetal response to a sound stimulus has been observed at 16 weeks' gestational age, while the auditory system is fully functional at 25–29 weeks' gestation. At 33–41 weeks' gestation, the fetus is able to distinguish its mother's voice from others.
Prenatal pain
The hypothesis that human fetuses are capable of perceiving pain in the first trimester has little support, although fetuses at 14 weeks may respond to touch. A multidisciplinary systematic review from 2005 found limited evidence that thalamocortical pathways begin to function "around 29 to 30 weeks' gestational age", only after which a fetus is capable of feeling pain.
In March 2010, the Royal College of Obstetricians and Gynecologists submitted a report, concluding that "Current research shows that the sensory structures are not developed or specialized enough to respond to pain in a fetus of less than 24 weeks".
The report specifically identified the anterior cingulate as the area of the cerebral cortex responsible for pain processing. The anterior cingulate is part of the cerebral cortex, which begins to develop in the fetus at week 26. A co-author of that report revisited the evidence in 2020, specifically the functionality of the thalamic projections into the cortical subplate, and posited "an immediate and unreflective pain experience...from as early as 12 weeks."
There is a consensus among developmental neurobiologists that the
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the termination of a pregnancy in progress called?
A. contraception
B. abortion
C. delivery
D. miscarriage
Answer:
|
|
sciq-6336
|
multiple_choice
|
While the numerical aperture can be used to compare resolutions of various objectives, it does not indicate how far the lens could be from the what?
|
[
"specimen",
"focal point",
"diameter",
"microscope"
] |
A
|
Relevant Documents:
Document 0:::
The angular aperture of a lens is the angular size of the lens aperture as seen from the focal point:

$a = 2 \arctan\left(\frac{D}{2f}\right)$

where
$f$ is the focal length
$D$ is the diameter of the aperture.
Relation to numerical aperture
In a medium with an index of refraction close to 1, such as air, the angular aperture is approximately equal to twice the numerical aperture of the lens.
Formally, the numerical aperture in air is:

$\mathrm{NA} = \sin\left(\frac{a}{2}\right) = \sin\left(\arctan\left(\frac{D}{2f}\right)\right)$

In the paraxial approximation, with a small aperture ($D \ll f$):

$\mathrm{NA} \approx \frac{a}{2} \approx \frac{D}{2f}$
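As a quick numerical illustration (a sketch, not part of the source article), the Python snippet below evaluates these formulas for a hypothetical lens with assumed focal length f = 100 mm and aperture diameter D = 10 mm, comparing the exact numerical aperture with its paraxial approximation:

```python
import math

# Minimal sketch (assumed values, not from the source): evaluates the angular
# aperture a = 2*arctan(D/(2f)) and the air numerical aperture NA = sin(a/2),
# then compares with the paraxial approximation NA ~ D/(2f).

def angular_aperture(f_mm: float, d_mm: float) -> float:
    """Angular aperture in radians for focal length f_mm and diameter d_mm."""
    return 2.0 * math.atan(d_mm / (2.0 * f_mm))

f, d = 100.0, 10.0                  # assumed: f = 100 mm lens, D = 10 mm aperture
a = angular_aperture(f, d)
na_exact = math.sin(a / 2.0)        # NA in air: sin(a/2)
na_paraxial = d / (2.0 * f)         # small-aperture approximation: D/(2f)

print(f"a = {a:.5f} rad")                    # a = 0.09992 rad
print(f"NA (exact)    = {na_exact:.5f}")     # 0.04994
print(f"NA (paraxial) = {na_paraxial:.5f}")  # 0.05000
```

For such a small aperture the exact and paraxial values agree to about 0.1%, consistent with the approximation above.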
Document 1:::
Rudolf Karl Lüneburg (30 March 1903, Volkersheim (Bockenem) – 19 August 1949, Great Falls, Montana; after his emigration at first Lueneburg, later Luneburg, sometimes misspelled Luneberg or Lunenberg) was a professor of mathematics and optics at the Dartmouth College Eye Institute. He was born in Germany, received his doctorate at Göttingen, and emigrated to the United States in 1935.
His work included an analysis of the geometry of visual space as expected from physiology and the assumption that the angle of vergence provides a constant measure of distance. From these premises he concluded that near field visual space is hyperbolic.
See also
Luneburg lens
Luneburg method
1903 births
1949 deaths
Emigrants from Nazi Germany to the United States
Geometers
Optical physicists
Dartmouth College faculty
20th-century German mathematicians
Academic staff of Leiden University
University of Göttingen alumni
New York University faculty
University of Southern California faculty
Brown University faculty
Document 2:::
The calculation of glass properties (glass modeling) is used to predict glass properties of interest or glass behavior under certain conditions (e.g., during production) without experimental investigation, based on past data and experience, with the intention to save time, material, financial, and environmental resources, or to gain scientific insight. It was first practised at the end of the 19th century by A. Winkelmann and O. Schott. The combination of several glass models together with other relevant functions can be used for optimization and six sigma procedures. In the form of statistical analysis glass modeling can aid with accreditation of new data, experimental procedures, and measurement institutions (glass laboratories).
History
Historically, the calculation of glass properties is directly related to the founding of glass science. At the end of the 19th century the physicist Ernst Abbe developed equations that allow calculating the design of optimized optical microscopes in Jena, Germany, stimulated by co-operation with the optical workshop of Carl Zeiss. Before Ernst Abbe's time the building of microscopes was mainly a work of art and experienced craftsmanship, resulting in very expensive optical microscopes with variable quality. Now Ernst Abbe knew exactly how to construct an excellent microscope, but unfortunately, the required lenses and prisms with specific ratios of refractive index and dispersion did not exist. Ernst Abbe was not able to find answers to his needs from glass artists and engineers; glass making was not based on science at this time.
In 1879 the young glass engineer Otto Schott sent Abbe glass samples with a special composition (lithium silicate glass) that he had prepared himself and that he hoped to show special optical properties. Following measurements by Ernst Abbe, Schott's glass samples did not have the desired properties, and they were also not as homogeneous as desired. Nevertheless, Ernst Abbe invited Otto Schott to work
Document 3:::
In optics, spatial cutoff frequency is a precise way to quantify the smallest object resolvable by an optical system. Due to diffraction at the image plane, all optical systems act as low pass filters with a finite ability to resolve detail. If it were not for the effects of diffraction, a 2" aperture telescope could theoretically be used to read newspapers on a planet circling Alpha Centauri, over four light-years distant. Unfortunately, the wave nature of light will never permit this to happen.
The spatial cutoff frequency for a perfectly corrected incoherent optical system is given by

$f_c = \frac{1}{\lambda N}$

where $\lambda$ is the wavelength expressed in millimeters and $N$ is the lens' focal ratio. As an example, a telescope having an f/6 objective and imaging at 0.55 micrometers has a spatial cutoff frequency of $1/(0.00055 \times 6) \approx 303$ cycles/millimeter. High-resolution black-and-white film is capable of resolving details on the film as small as 3 micrometers or smaller, thus its cutoff frequency is about 150 cycles/millimeter. So, the telescope's optical resolution is about twice that of high-resolution film, and a crisp, sharp picture would result (provided focus is perfect and atmospheric turbulence is at a minimum).
This formula gives the best-case resolution performance and is valid only for perfect optical systems. The presence of aberrations reduces image contrast and can effectively reduce the system spatial cutoff frequency if the image contrast falls below the ability of the imaging device to discern.
The coherent case is given by

$f_c = \frac{1}{2 \lambda N}$
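A short sketch (an illustration, not part of the source article) verifying the worked example above; the f/6 focal ratio and 0.55 µm wavelength are the values quoted in that example:

```python
# Sketch checking the worked example: an f/6 objective imaging at 0.55 micrometers.
wavelength_mm = 0.55e-3   # 0.55 micrometers expressed in millimeters
f_number = 6.0            # focal ratio N

incoherent_cutoff = 1.0 / (wavelength_mm * f_number)       # incoherent: 1/(lambda*N)
coherent_cutoff = 1.0 / (2.0 * wavelength_mm * f_number)   # coherent: 1/(2*lambda*N)

print(f"incoherent cutoff: {incoherent_cutoff:.0f} cycles/mm")  # 303
print(f"coherent cutoff:   {coherent_cutoff:.0f} cycles/mm")    # 152
```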
See also
Modulation transfer function
Superlens
Document 4:::
In optics, the numerical aperture (NA) of an optical system is a dimensionless number that characterizes the range of angles over which the system can accept or emit light. By incorporating index of refraction in its definition, NA has the property that it is constant for a beam as it goes from one material to another, provided there is no refractive power at the interface. The exact definition of the term varies slightly between different areas of optics. Numerical aperture is commonly used in microscopy to describe the acceptance cone of an objective (and hence its light-gathering ability and resolution), and in fiber optics, in which it describes the range of angles within which light that is incident on the fiber will be transmitted along it.
General optics
In most areas of optics, and especially in microscopy, the numerical aperture of an optical system such as an objective lens is defined by

$\mathrm{NA} = n \sin \theta$

where $n$ is the index of refraction of the medium in which the lens is working (1.00 for air, 1.33 for pure water, and typically 1.52 for immersion oil; see also list of refractive indices), and $\theta$ is the half-angle of the maximum cone of light that can enter or exit the lens. In general, this is the angle of the real marginal ray in the system. Because the index of refraction is included, the NA of a pencil of rays is an invariant as a pencil of rays passes from one material to another through a flat surface. This is easily shown by rearranging Snell's law to find that $n \sin \theta$ is constant across an interface.
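To illustrate the invariance claim, the snippet below (a sketch with assumed values, not from the source) traces a ray from air into water and checks that $n \sin \theta$ is unchanged:

```python
import math

# Sketch with assumed values: a ray crossing a flat air-water interface.
# Snell's law, n1*sin(t1) = n2*sin(t2), implies that NA = n*sin(theta)
# is the same on both sides of the interface.

def refracted_angle(theta_in: float, n_in: float, n_out: float) -> float:
    """Refraction angle in radians from Snell's law."""
    return math.asin(n_in * math.sin(theta_in) / n_out)

n_air, n_water = 1.00, 1.33
theta_air = math.radians(30.0)                        # assumed incidence angle
theta_water = refracted_angle(theta_air, n_air, n_water)

print(f"NA in air:   {n_air * math.sin(theta_air):.3f}")      # 0.500
print(f"NA in water: {n_water * math.sin(theta_water):.3f}")  # 0.500 (invariant)
```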
In air, the angular aperture of the lens is approximately twice this value (within the paraxial approximation). The NA is generally measured with respect to a particular object or image point and will vary as that point is moved. In microscopy, NA generally refers to object-space NA unless otherwise noted.
In microscopy, NA is important because it indicates the resolving power of a lens. The size of the finest detail that can be resolved (the resolution) is proportional t
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
While the numerical aperture can be used to compare resolutions of various objectives, it does not indicate how far the lens could be from the what?
A. specimen
B. focal point
C. diameter
D. microscope
Answer:
|
|
sciq-5007
|
multiple_choice
|
What kind of harm does a corrosive substance cause?
|
[
"Attracts dust",
"eats through objects",
"Rust objects",
"Builds a crust"
] |
B
|
Relevant Documents:
Document 0:::
Contamination is the presence of a constituent, impurity, or some other undesirable element that spoils, corrupts, infects, makes unfit, or makes inferior a material, physical body, natural environment, workplace, etc.
Types of contamination
Within the sciences, the word "contamination" can take on a variety of subtle differences in meaning, depending on whether the contaminant is a solid or a liquid, as well as on the environment in which it is found. A contaminant may even be more abstract, as in the case of an unwanted energy source that may interfere with a process. The following are examples of different types of contamination based on these and other variances.
Chemical contamination
In chemistry, the term "contamination" usually describes a single constituent, but in specialized fields the term can also mean chemical mixtures, even up to the level of cellular materials. All chemicals contain some level of impurity. Contamination may be recognized or not and may become an issue if the impure chemical causes additional chemical reactions when mixed with other chemicals or mixtures. Chemical reactions resulting from the presence of an impurity may at times be beneficial, in which case the label "contaminant" may be replaced with "reactant" or "catalyst." (This may be true even in physical chemistry, where, for example, the introduction of an impurity in an intrinsic semiconductor positively increases conductivity.) If the additional reactions are detrimental, other terms are often applied such as "toxin", "poison", or pollutant, depending on the type of molecule involved. Chemical decontamination of substance can be achieved through decomposition, neutralization, and physical processes, though a clear understanding of the underlying chemistry is required. Contamination of pharmaceutics and therapeutics is notoriously dangerous and creates both perceptual and technical challenges.
Environmental contamination
In environmental chemistry, the term
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Mold health issues refer to the harmful health effects of moulds ("molds" in American English) and their mycotoxins. However, recent research has shown these adverse health effects are caused not exclusively by molds, but also other microbial agents and biotoxins associated with dampness, mold, and water-damaged buildings, such as gram-negative bacteria that produce endotoxins, as well as actinomycetes and their associated exotoxins. Approximately 47% of houses in the United States have substantial levels of mold, with over 85% of commercial and office buildings found to have water damage predictive of mold. As many as 21% of asthma cases may result from exposure to mold. Substantial and statistically significant increases in the risks of both respiratory infections and bronchitis have been associated with dampness in homes and the resulting mold.
Molds and many related microbial agents are ubiquitous in the biosphere, and mold spores are a common component of household and workplace dust. While most molds in the outdoor environment are not hazardous to humans, many found inside buildings are known to be. Reaction to molds can vary between individuals, from relatively minor allergic reactions through to severe multi-system inflammatory effects, neurological problems, and death. The United States Centers for Disease Control and Prevention (CDC) reported in its June 2006 report, 'Mold Prevention Strategies and Possible Health Effects in the Aftermath of Hurricanes and Major Floods,' that "excessive exposure to mold-contaminated materials can cause adverse health effects in susceptible persons regardless of the type of mold or the extent of contamination." Mold spores and associated toxins can cause harm primarily via inhalation, ingestion, and contact. In higher quantities such as those found in water-damaged buildings, they can present especially hazardous health risks to humans after sufficient exposure, with three generally accepted mechanisms of harm and a fo
Document 3:::
In situ chemical reduction (ISCR) is a type of environmental remediation technique used for soil and/or groundwater remediation to reduce the concentrations of targeted environmental contaminants to acceptable levels. It is the mirror process of In Situ Chemical Oxidation (ISCO). ISCR is usually applied in the environment by injecting chemically reductive additives in liquid form into the contaminated area or placing a solid medium of chemical reductants in the path of a contaminant plume. It can be used to remediate a variety of organic compounds, including some that are resistant to natural degradation.
The in situ in ISCR is just Latin for "in place", signifying that ISCR is a chemical reduction reaction that occurs at the site of the contamination. Like ISCO, it is able to decontaminate many compounds, and, in theory, ISCR could be more effective in ground water remediation than ISCO.
Chemical reduction is one half of a redox reaction, which results in the gain of electrons. One of the reactants in the reaction becomes oxidized, or loses electrons, while the other reactant becomes reduced, or gains electrons. In ISCR, reducing compounds, compounds that accept electrons given by other compounds in a reaction, are used to change the contaminants into harmless compounds.
History
Early work examined dechlorination with copper. Substrates included DDT, endrin, chloroform, and hexachlorocyclopentadiene. Aluminum and magnesium behave similarly in the laboratory. Ground water treatment most generally focuses on the use of iron.
Reductants
Zero valent metals (ZVMs)
Zero-valent metals are the main reductants used in ISCR. The most common metal used is iron, in the form of ZVI (zero valent iron), and it is also the metal longest in use. However, some studies show that zero valent zinc (ZVZ) could be up to ten times more effective at eradicating the contaminants than ZVI. Some applications of ZVMs are to clean up Trichloroethylene (TCE) and Hexavalent chromium
Document 4:::
Ecotoxicity, the subject of study in the field of ecotoxicology (a portmanteau of ecology and toxicology), refers to the biological, chemical or physical stressors that affect ecosystems. Such stressors could occur in the natural environment at densities, concentrations, or levels high enough to disrupt natural biochemical and physiological behavior and interactions. This ultimately affects all living organisms that comprise an ecosystem.
Ecotoxicology has been defined as a branch of toxicology that focuses on the study of toxic effects, caused by natural or synthetic pollutants. These pollutants affect animals (including humans), vegetation, and microbes, in an intrinsic way.
Acute vs. chronic ecotoxicity
According to Barrie Peake in the paper "Impact of Pharmaceuticals on the Environment", the ecotoxicity of chemicals can be described based on the amount of exposure to any hazardous materials. There are two categories of ecotoxicity based on this description: acute toxins and chronic toxins (Peake, 2016). Acute ecotoxicity refers to the detrimental effects resulting from a hazardous exposure of no more than 15 days, and results directly from the interaction of a chemical hazard with the cell membranes of an organism (Peake, 2016). This interaction often leads to cell or tissue damage or death. Chronic ecotoxicity, on the other hand, refers to the detrimental effects resulting from a hazardous exposure of 15 days to possibly years (Peake, 2016). Chronic ecotoxicity is often associated with "particular drug–receptor actions that initiate a particular pharmacological response in an aquatic or terrestrial organism" (Peake, 2016). Due to this interaction, chronic ecotoxicity is usually not lethal in the way that acute ecotoxicity is; however, it decreases cellular biochemical functions, often resulting in alterations to the psychological or behavioral responses of the organism to environmental stimuli (Peake, 2016).
Common environ
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What kind of harm does a corrosive substance cause?
A. Attracts dust
B. eats through objects
C. Rust objects
D. Builds a crust
Answer:
|
|
sciq-63
|
multiple_choice
|
What's the term for the gradual progression from simple plants to larger more complex ones in an area?
|
[
"complex progression",
"pattern progression",
"primary succession",
"primary pattern"
] |
C
|
Relevant Documents:
Document 0:::
Primary succession is the beginning step of ecological succession after an extreme disturbance, which usually occurs in an environment devoid of vegetation and other organisms. These environments are typically lacking in soil, as disturbances like lava flow or retreating glaciers scour the environment clear of nutrients.
In contrast, secondary succession occurs on substrates that previously supported vegetation before an ecological disturbance. This occurs when smaller disturbances like floods, hurricanes, tornadoes, and fires destroy only the local plant life and leave soil nutrients for immediate establishment by intermediate community species.
Occurrence
In primary succession, pioneer species like lichen, algae and fungi, as well as abiotic factors like wind and water, start to "normalise" the habitat, in other words to develop soil and other important mechanisms that allow greater diversity to flourish. Primary succession begins on rock formations, such as volcanoes or mountains, or in a place with no organisms or soil. Primary succession leads to conditions nearer optimum for vascular plant growth; pedogenesis, or the formation of soil, and the increased amount of shade are the most important processes.
These pioneer lichen, algae, and fungi are then dominated and often replaced by plants that are better adapted to less harsh conditions, these plants include vascular plants like grasses and some shrubs that are able to live in thin soils that are often mineral-based. Water and nutrient levels increase with the amount of succession exhibited.
The early stages of primary succession are dominated by species with small propagules (seed and spores) which can be dispersed long distances. The early colonizers—often algae, fungi, and lichens—stabilize the substrate. Nitrogen supplies are limited in new soils, and nitrogen-fixing species tend to play an important role early in primary succession. Unlike in primary succession, the species that dominate secondary success
Document 1:::
Phytomorphology is the study of the physical form and external structure of plants. This is usually considered distinct from plant anatomy, which is the study of the internal structure of plants, especially at the microscopic level. Plant morphology is useful in the visual identification of plants. Recent studies in molecular biology started to investigate the molecular processes involved in determining the conservation and diversification of plant morphologies. In these studies transcriptome conservation patterns were found to mark crucial ontogenetic transitions during the plant life cycle which may result in evolutionary constraints limiting diversification.
Scope
Plant morphology "represents a study of the development, form, and structure of plants, and, by implication, an attempt to interpret these on the basis of similarity of plan and origin". There are four major areas of investigation in plant morphology, and each overlaps with another field of the biological sciences.
First of all, morphology is comparative, meaning that the morphologist examines structures in many different plants of the same or different species, then draws comparisons and formulates ideas about similarities. When structures in different species are believed to exist and develop as a result of common, inherited genetic pathways, those structures are termed homologous. For example, the leaves of pine, oak, and cabbage all look very different, but share certain basic structures and arrangement of parts. The homology of leaves is an easy conclusion to make. The plant morphologist goes further, and discovers that the spines of cactus also share the same basic structure and development as leaves in other plants, and therefore cactus spines are homologous to leaves as well. This aspect of plant morphology overlaps with the study of plant evolution and paleobotany.
Secondly, plant morphology observes both the vegetative (somatic) structures of plants, as well as the reproductive str
Document 2:::
"Auto-" meaning self or same, and "-genic" meaning producing or causing. Autogenic succession refers to ecological succession driven by biotic factors within an ecosystem and although the mechanisms of autogenic succession have long been debated, the role of living things in shaping the progression of succession was realized early on. Presently, there is more of a consensus that the mechanisms of facilitation, tolerance, and inhibition all contribute to autogenic succession. The concept of succession is most often associated with communities of vegetation and forests, though it is applicable to a broader range of ecosystems. In contrast, allogenic succession is driven by the abiotic components of the ecosystem.
How it occurs
The plants themselves (biotic components) cause succession to occur.
Light captured by leaves
Production of detritus
Water and nutrient uptake
Nitrogen fixation
anthropogenic climate change
These aspects lead to a gradual ecological change in a particular spot of land, known as a progression of inhabiting species. Autogenic succession can be viewed as a secondary succession because of pre-existing plant life. A 2000 case study in the journal Oecologia tested the hypothesis that areas with high plant diversity could suppress weed growth more effectively than those with lower plant diversity.
Facilitation
Improvement of site factors like increased organic matter
Inhibition
Hinders species or growth
Document 3:::
In botany, a plant shoot consists of any plant stem together with its appendages, such as leaves and lateral buds, flowering stems, and flower buds. The new growth from seed germination that grows upward is a shoot where leaves will develop. In the spring, perennial plant shoots are the new growth that grows from the ground in herbaceous plants, or the new stem or flower growth that grows on woody plants.
In everyday speech, shoots are often synonymous with stems. Stems, which are an integral component of shoots, provide an axis for buds, fruits, and leaves.
Young shoots are often eaten by animals because the fibers in the new growth have not yet completed secondary cell wall development, making the young shoots softer and easier to chew and digest.
As shoots grow and age, the cells develop secondary cell walls that have a hard and tough structure.
Some plants (e.g. bracken) produce toxins that make their shoots inedible or less palatable.
Shoot types of woody plants
Many woody plants have distinct short shoots and long shoots. In some angiosperms, the short shoots, also called spur shoots or fruit spurs, produce the majority of flowers and fruit. A similar pattern occurs in some conifers and in Ginkgo, although the "short shoots" of some genera such as Picea are so small that they can be mistaken for part of the leaf that they have produced.
A related phenomenon is seasonal heterophylly, which involves visibly different leaves from spring growth and later lammas growth. Whereas spring growth mostly comes from buds formed the previous season, and often includes flowers, lammas growth often involves long shoots.
See also
Bud
Crown (botany)
Heteroblasty (botany), an abrupt change in the growth pattern of some plants as they mature
Lateral shoot
Phyllotaxis, the arrangement of leaves along a plant stem
Seedling
Sterigma, the "woody peg" below the leaf of some conifers
Thorn (botany), true thorns, as distinct from spines or prickles, are short shoots
Document 4:::
Primary growth in plants is growth that takes place from the tips of roots or shoots. It leads to lengthening of roots and stems and sets the stage for organ formation. It is distinguished from secondary growth that leads to widening. Plant growth takes place in well defined plant locations. Specifically, the cell division and differentiation needed for growth occurs in specialized structures called meristems. These consist of undifferentiated cells (meristematic cells) capable of cell division. Cells in the meristem can develop into all the other tissues and organs that occur in plants. These cells continue to divide until they differentiate and then lose the ability to divide. Thus, the meristems produce all the cells used for plant growth and function.
At the tip of each stem and root, an apical meristem adds cells to their length, resulting in the elongation of both. Examples of primary growth are the rapid lengthening growth of seedlings after they emerge from the soil and the penetration of roots deep into the soil. Furthermore, all plant organs arise ultimately from cell divisions in the apical meristems, followed by cell expansion and differentiation.
In contrast, a growth process that involves thickening of stems takes place within lateral meristems that are located throughout the length of the stems. The lateral meristems of larger plants also extend into the roots. This thickening is secondary growth and is needed to give mechanical support and stability to the plant.
The functions of a plant's growing tips – its apical (or primary) meristems – include: lengthening through cell division and elongation; organising the development of leaves along the stem; creating platforms for the eventual development of branches along the stem; laying the groundwork for organ formation by providing a stock of undifferentiated or incompletely differentiated cells that later develop into fully differentiated cells, thereby ultimately allowing the "spatial deployment
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What's the term for the gradual progression from simple plants to larger more complex ones in an area?
A. complex progression
B. pattern progression
C. primary succession
D. primary pattern
Answer:
|
|
sciq-4697
|
multiple_choice
|
An object's energy due to motion is known as?
|
[
"thermodynamic energy",
"inertia",
"residual energy",
"kinetic energy"
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
This is a list of topics that are included in high school physics curricula or textbooks.
Mathematical Background
SI Units
Scalar (physics)
Euclidean vector
Motion graphs and derivatives
Pythagorean theorem
Trigonometry
Motion and forces
Motion
Force
Linear motion
Linear motion
Displacement
Speed
Velocity
Acceleration
Center of mass
Mass
Momentum
Newton's laws of motion
Work (physics)
Free body diagram
Rotational motion
Angular momentum (Introduction)
Angular velocity
Centrifugal force
Centripetal force
Circular motion
Tangential velocity
Torque
Conservation of energy and momentum
Energy
Conservation of energy
Elastic collision
Inelastic collision
Inertia
Moment of inertia
Momentum
Kinetic energy
Potential energy
Rotational energy
Electricity and magnetism
Ampère's circuital law
Capacitor
Coulomb's law
Diode
Direct current
Electric charge
Electric current
Alternating current
Electric field
Electric potential energy
Electron
Faraday's law of induction
Ion
Inductor
Joule heating
Lenz's law
Magnetic field
Ohm's law
Resistor
Transistor
Transformer
Voltage
Heat
Entropy
First law of thermodynamics
Heat
Heat transfer
Second law of thermodynamics
Temperature
Thermal energy
Thermodynamic cycle
Volume (thermodynamics)
Work (thermodynamics)
Waves
Wave
Longitudinal wave
Transverse waves
Transverse wave
Standing Waves
Wavelength
Frequency
Light
Light ray
Speed of light
Sound
Speed of sound
Radio waves
Harmonic oscillator
Hooke's law
Reflection
Refraction
Snell's law
Refractive index
Total internal reflection
Diffraction
Interference (wave propagation)
Polarization (waves)
Vibrating string
Doppler effect
Gravity
Gravitational potential
Newton's law of universal gravitation
Newtonian constant of gravitation
See also
Outline of physics
Physics education
Document 2:::
In physics, a number of noted theories of the motion of objects have been developed. Among the best known are:
Classical mechanics
Newton's laws of motion
Euler's laws of motion
Cauchy's equations of motion
Kepler's laws of planetary motion
General relativity
Special relativity
Quantum mechanics
Motion (physics)
Document 3:::
Advanced Placement (AP) Physics C: Mechanics (also known as AP Mechanics) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a one-semester calculus-based university course in mechanics. The content of Physics C: Mechanics overlaps with that of AP Physics 1, but Physics 1 is algebra-based, while Physics C is calculus-based. Physics C: Mechanics may be combined with its electricity and magnetism counterpart to form a year-long course that prepares for both exams.
Course content
Intended to be equivalent to an introductory college course in mechanics for physics or engineering majors, the course modules are:
Kinematics
Newton's laws of motion
Work, energy and power
Systems of particles and linear momentum
Circular motion and rotation
Oscillations and gravitation.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a Calculus I class.
This course is often compared to AP Physics 1: Algebra Based for its similar course material involving kinematics, work, motion, forces, rotation, and oscillations. However, AP Physics 1: Algebra Based lacks concepts found in Calculus I, like derivatives or integrals.
This course may be combined with AP Physics C: Electricity and Magnetism to make a unified Physics C course that prepares for both exams.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Mechanics is separate from the AP examination for AP Physics C: Electricity and Magnetism. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday aftern
Document 4:::
In physics, work is the energy transferred to or from an object via the application of force along a displacement. In its simplest form, for a constant force aligned with the direction of motion, the work equals the product of the force strength and the distance traveled. A force is said to do positive work if when applied it has a component in the direction of the displacement of the point of application. A force does negative work if it has a component opposite to the direction of the displacement at the point of application of the force.
For example, when a ball is held above the ground and then dropped, the work done by the gravitational force on the ball as it falls is positive, and is equal to the weight of the ball (a force) multiplied by the distance to the ground (a displacement). If the ball is thrown upwards, the work done by the gravitational force is negative, and is equal to the weight multiplied by the displacement in the upwards direction.
Both force and displacement are vectors. The work done is given by the dot product of the two vectors. When the force is constant and the angle between the force and the displacement is also constant, then the work done is given by:

$W = \vec{F} \cdot \vec{d} = F d \cos\theta$
Work is a scalar quantity, so it has only magnitude and no direction. Work transfers energy from one place to another, or one form to another. The SI unit of work is the joule (J), the same unit as for energy.
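As a brief illustration (a sketch, not part of the source article), the snippet below computes the work in the falling- and rising-ball examples via the dot product; the 2 kg mass and 5 m displacement are assumed values:

```python
# Sketch of the falling/rising ball example: W = F . d (dot product).
# The 2 kg mass and 5 m drop are assumptions, with g = 9.8 m/s^2.

def work(force, displacement):
    """Work in joules done by a constant force (N) over a displacement (m)."""
    return sum(f * s for f, s in zip(force, displacement))

gravity = (0.0, 0.0, -2.0 * 9.8)   # weight of a 2 kg ball, pointing down (N)

# Dropped 5 m: force and displacement are parallel, so the work is positive.
print(work(gravity, (0.0, 0.0, -5.0)))   # 98.0 J

# Thrown 5 m upward: displacement opposes the force, so the work is negative.
print(work(gravity, (0.0, 0.0, 5.0)))    # -98.0 J
```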
History
The ancient Greek understanding of physics was limited to the statics of simple machines (the balance of forces), and did not include dynamics or the concept of work. During the Renaissance the dynamics of the Mechanical Powers, as the simple machines were called, began to be studied from the standpoint of how far they could lift a load, in addition to the force they could apply, leading eventually to the new concept of mechanical work. The complete dynamic theory of simple machines was worked out by Italian scientist Galileo Galilei in 1600 in Le Meccaniche (On Me
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
An object's energy due to motion is known as?
A. thermodynamic energy
B. inertia
C. residual energy
D. kinetic energy
Answer:
|
|
sciq-8030
|
multiple_choice
|
Animals that live in groups with other members of their species are called what?
|
[
"common animals",
"social animals",
"energetic animals",
"present animals"
] |
B
|
Relavent Documents:
Document 0:::
Animals are multicellular eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, are able to move, reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million in total. Animals range in size from 8.5 millionths of a metre to 33.6 metres long and have complex interactions with each other and their environments, forming intricate food webs. The study of animals is called zoology.
Animals may be listed or indexed by many criteria, including taxonomy, status as endangered species, their geographical location, and their portrayal and/or naming in human culture.
By common name
List of animal names (male, female, young, and group)
By aspect
List of common household pests
List of animal sounds
List of animals by number of neurons
By domestication
List of domesticated animals
By eating behaviour
List of herbivorous animals
List of omnivores
List of carnivores
By endangered status
IUCN Red List endangered species (Animalia)
United States Fish and Wildlife Service list of endangered species
By extinction
List of extinct animals
List of extinct birds
List of extinct mammals
List of extinct cetaceans
List of extinct butterflies
By region
Lists of amphibians by region
Lists of birds by region
Lists of mammals by region
Lists of reptiles by region
By individual (real or fictional)
Real
Lists of snakes
List of individual cats
List of oldest cats
List of giant squids
List of individual elephants
List of historical horses
List of leading Thoroughbred racehorses
List of individual apes
List of individual bears
List of giant pandas
List of individual birds
List of individual bovines
List of individual cetaceans
List of individual dogs
List of oldest dogs
List of individual monkeys
List of individual pigs
Document 1:::
Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from 8.5 micrometres to 33.6 metres. They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology.
Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago.
Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on advanced techniques, such as molecular phylogenetics, which are effective at demonstrating the evolutionary relationships between taxa.
Document 2:::
The cognitive ecology of individual recognition has been studied in many species, especially in primates or other mammalian species that exhibit complex social behaviours, but comparatively little research has been done on colonial birds. Colonial birds live in dense colonies in which many individuals interact with each other daily. For colonial birds, being able to identify and recognize individuals can be a crucial skill.
Sociality and brain size
Individual recognition is one of the most basic forms of social cognition. Individual recognition implies that a given individual has the capacity to discriminate a familiar individual from an unfamiliar one at any given time. It is believed that in many species, group size is a proxy for social complexity, with higher social complexity demanding higher cognitive capabilities. This hypothesis is known as the "social brain hypothesis" and has been supported by many researchers. The logic behind this hypothesis is that larger group sizes require a higher degree of complexity in social interactions. Many studies have looked at the effect of sociality on brain development, mostly focussing on non-human primate species. In primates, it has been shown that relative brain size, when controlling for body size and phylogeny, correlates with the size of the social group. These results suggested a direct link between sociality and cognition. However, when such experiments were reproduced in non-primate species, such as reptiles, birds and even other mammalian species, the correlation between brain size and social group size does not seem to exist. A study on mountain chickadees looking at the impact of sociality on hippocampus size as well as on neurogenesis found no evidence of change related to group size, therefore rejecting the "social brain hypothesis" in birds. Further research has examined bird cognitive ecology in more detail.
Document 3:::
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals.
Education
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
Bachelor degree
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
Pre-veterinary emphasis
Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis, such as Iowa State University.
Document 4:::
Vertebrate zoology is the biological discipline that consists of the study of Vertebrate animals, i.e., animals with a backbone, such as fish, amphibians, reptiles, birds and mammals. Many natural history museums have departments named Vertebrate Zoology. In some cases whole museums bear this name, e.g. the Museum of Vertebrate Zoology at the University of California, Berkeley.
Subdivisions
This subdivision of zoology has many further subdivisions, including:
Ichthyology - the study of fishes.
Mammalogy - the study of mammals.
Chiropterology - the study of bats.
Primatology - the study of primates.
Ornithology - the study of birds.
Herpetology - the study of reptiles.
Batrachology - the study of amphibians.
These divisions are sometimes further divided into more specific specialties.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Animals that live in groups with other members of their species are called what?
A. common animals
B. social animals
C. energetic animals
D. present animals
Answer:
|
|
ai2_arc-853
|
multiple_choice
|
In early 2003, the Human Genome Project identified the sequence of base pairs in the genes in human DNA. With all of this information, many of the functions of the genes are still unknown. Currently, scientists are studying many of these genes in order to learn more about them. What is the significance of this new genetic discovery?
|
[
"It can provide new methods for creating diseases.",
"It can lead to faster chromosome replication.",
"It can lead to a simpler structure for DNA.",
"It can provide new ways to treat diseases."
] |
D
|
Relavent Documents:
Document 0:::
The Personal Genetics Education Project (pgEd) aims to engage and inform a worldwide audience about the benefits of knowing one's genome as well as the ethical, legal and social issues (ELSI) and dimensions of personal genetics. pgEd was founded in 2006, is housed in the Department of Genetics at Harvard Medical School and is directed by Ting Wu, a professor in that department. It employs a variety of strategies for reaching general audiences, including generating online curricular materials, leading discussions in classrooms, workshops, and conferences, developing a mobile educational game (Map-Ed), holding an annual conference geared toward accelerating awareness (GETed), and working with the world of entertainment to improve accuracy and outreach.
Online curricular materials and professional development for teachers
pgEd develops tools for teachers and general audiences that examine the potential benefits and risks of personalized genome analysis. These include freely accessible, interactive lesson plans that tackle issues such as genetic testing of minors, reproductive genetics, complex human traits and genetics, and the history of eugenics. pgEd also engages educators at conferences as well as organizes professional development workshops. All of pgEd's materials are freely available online.
Map-Ed, a mobile quiz
In 2013, pgEd created a mobile educational quiz called Map-Ed. Map-Ed invites players to work their way through five questions that address key concepts in genetics and then pin themselves on a world map. Within weeks of its launch, Map-Ed gained over 1,000 pins around the world, spanning across all 7 continents. Translations and new maps linked to questions on topics broadly related to genetics are in development.
GETed conference
pgEd hosts the annual GETed conference, a meeting that brings together experts from across the United States and beyond in education, research, health, entertainment, and policy to develop strategies for accelerating awareness of personal genetics.
Document 1:::
Genetics (from Ancient Greek γενετικός (genetikos) 'genitive', and that from γένεσις (genesis) 'origin'), a discipline of biology, is the science of heredity and variation in living organisms.
Document 2:::
In genomics, the postgenomic era (or post-genomic era) refers to the time period following the completion of the Human Genome Project, up to the present day. The name refers to the fact that the genetic epistemology of contemporary science has progressed beyond the gene-centered view of the earlier genomic era. It is defined by the widespread availability of both the human genome sequence and the complete genomes of many reference organisms.
The postgenomic era is characterized by a paradigm shift in which new genetic research has upended many dogmas about the way in which genes influence phenotypes, and the way in which the term "gene" itself is defined. This has included a new conceptualization of genes as being constituted during "genome expression", and the creation of the discipline of functional genomics to analyze genomic data and convert it to useful information. It has also seen major changes in the way scientific research is conducted and its results publicized, with open science initiatives allowing knowledge creation to occur well outside the traditional environment of the laboratory. This has led to extensive debate about whether the best way to conduct genomic research is at a small or large scale.
Soon after the HGP's results were initially announced in 2000, researchers predicted that these results would lead to individualized treatment and more accurate testing for human diseases. More recently, researchers have suggested that the way in which human diseases are classified needs to be updated in light of the results of the HGP.
Document 3:::
New Genetics and Society is a triannual peer-reviewed scientific journal covering sociological perspectives on contemporary genetics and related biological sciences. It was established in 1999 by Peter Glasner and Harry Rothman, with its first issue appearing in April of that year. It is published by Routledge and the editors-in-chief are Richard Tutton (Lancaster University) and Adam Hedgecoe (University of Cardiff). According to the Journal Citation Reports, the journal has a 2017 impact factor of 1.571.
Document 4:::
The Human Genome Project (HGP) was an international scientific research project with the goal of determining the base pairs that make up human DNA, and of identifying, mapping and sequencing all of the genes of the human genome from both a physical and a functional standpoint. It started in 1990 and was completed in 2003. It remains the world's largest collaborative biological project. Planning for the project started after it was adopted in 1984 by the US government, and it officially launched in 1990. It was declared complete on April 14, 2003, and included about 92% of the genome. The "complete genome" level was achieved in May 2021, with only 0.3% of bases remaining covered by potential issues. The final gapless assembly was finished in January 2022.
Funding came from the United States government through the National Institutes of Health (NIH) as well as numerous other groups from around the world. A parallel project was conducted outside the government by the Celera Corporation, or Celera Genomics, which was formally launched in 1998. Most of the government-sponsored sequencing was performed in twenty universities and research centres in the United States, the United Kingdom, Japan, France, Germany, and China, working in the International Human Genome Sequencing Consortium (IHGSC).
The Human Genome Project originally aimed to map the complete set of nucleotides contained in a human haploid reference genome, of which there are more than three billion. The "genome" of any given individual is unique; mapping the "human genome" involved sequencing samples collected from a small number of individuals and then assembling the sequenced fragments to get a complete sequence for each of 24 human chromosomes (22 autosomes and 2 sex chromosomes). Therefore, the finished human genome is a mosaic, not representing any one individual. Much of the project's utility comes from the fact that the vast majority of the human genome is the same in all humans.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In early 2003, the Human Genome Project identified the sequence of base pairs in the genes in human DNA. With all of this information, many of the functions of the genes are still unknown. Currently, scientists are studying many of these genes in order to learn more about them. What is the significance of this new genetic discovery?
A. It can provide new methods for creating diseases.
B. It can lead to faster chromosome replication.
C. It can lead to a simpler structure for DNA.
D. It can provide new ways to treat diseases.
Answer:
|
|
sciq-2801
|
multiple_choice
|
What were the first photosynthetic organisms on earth?
|
[
"mosses",
"bacteria",
"fungi",
"trees"
] |
B
|
Relavent Documents:
Document 0:::
The evolution of photosynthesis refers to the origin and subsequent evolution of photosynthesis, the process by which light energy is used to assemble sugars from carbon dioxide and a hydrogen and electron source such as water. The process of photosynthesis was discovered by Jan Ingenhousz, a Dutch-born British physician and scientist, first publishing about it in 1779.
The first photosynthetic organisms probably evolved early in the evolutionary history of life and most likely used reducing agents such as hydrogen rather than water. There are three major metabolic pathways by which photosynthesis is carried out: C3 photosynthesis, C4 photosynthesis, and CAM photosynthesis. C3 photosynthesis is the oldest and most common form. A C3 plant uses the Calvin cycle for the initial steps that incorporate CO2 into organic material. A C4 plant prefaces the Calvin cycle with reactions that incorporate CO2 into four-carbon compounds. A CAM plant uses crassulacean acid metabolism, an adaptation for photosynthesis in arid conditions. C4 and CAM plants have special adaptations that save water.
Origin
Available evidence from geobiological studies of Archean (>2500 Ma) sedimentary rocks indicates that life existed by 3500 Ma. Fossils of what are thought to be filamentous photosynthetic organisms have been dated at 3.4 billion years old, consistent with recent studies of photosynthesis. Early photosynthetic systems, such as those from green and purple sulfur and green and purple nonsulfur bacteria, are thought to have been anoxygenic, using various molecules as electron donors. Green and purple sulfur bacteria are thought to have used hydrogen and hydrogen sulfide as electron and hydrogen donors. Green nonsulfur bacteria used various amino and other organic acids. Purple nonsulfur bacteria used a variety of nonspecific organic and inorganic molecules. It is suggested that photosynthesis likely originated under low-wavelength geothermal light from acidic hydrothermal vents, with Zn-tetrapyrroles as the first photochemically active pigments.
Document 1:::
Cyanobacteria (), also called Cyanobacteriota or Cyanophyta, are a phylum of gram-negative bacteria that obtain energy via photosynthesis. The name cyanobacteria refers to their color (), which similarly forms the basis of cyanobacteria's common name, blue-green algae, although they are not usually scientifically classified as algae. They appear to have originated in a freshwater or terrestrial environment. Sericytochromatia, the proposed name of the paraphyletic and most basal group, is the ancestor of both the non-photosynthetic group Melainabacteria and the photosynthetic cyanobacteria, also called Oxyphotobacteria.
Cyanobacteria use photosynthetic pigments, such as carotenoids, phycobilins, and various forms of chlorophyll, which absorb energy from light. Unlike heterotrophic prokaryotes, cyanobacteria have internal membranes. These are flattened sacs called thylakoids where photosynthesis is performed. Phototrophic eukaryotes such as green plants perform photosynthesis in plastids that are thought to have their ancestry in cyanobacteria, acquired long ago via a process called endosymbiosis. These endosymbiotic cyanobacteria in eukaryotes then evolved and differentiated into specialized organelles such as chloroplasts, chromoplasts, etioplasts, and leucoplasts, collectively known as plastids.
Cyanobacteria are the first organisms known to have produced oxygen. By producing and releasing oxygen as a byproduct of photosynthesis, cyanobacteria are thought to have converted the early oxygen-poor, reducing atmosphere into an oxidizing one, causing the Great Oxidation Event and the "rusting of the Earth", which dramatically changed the composition of life forms on Earth.
The cyanobacteria Synechocystis and Cyanothece are important model organisms with potential applications in biotechnology for bioethanol production, food colorings, as a source of human and animal food, dietary supplements and raw materials. Cyanobacteria produce a range of toxins known as cyanotoxins.
Document 2:::
Photosynthetic picoplankton or picophytoplankton is the fraction of the phytoplankton performing photosynthesis composed of cells between 0.2 and 2 µm in size (picoplankton). It is especially important in the central oligotrophic regions of the world oceans, which have very low concentrations of nutrients.
History
1952: Description of the first truly picoplanktonic species, Chromulina pusilla, by Butcher. This species was renamed in 1960 to Micromonas pusilla and a few studies have found it to be abundant in temperate oceanic waters, although very little such quantification data exists for eukaryotic picophytoplankton.
1979: Discovery of marine Synechococcus by Waterbury and confirmation with electron microscopy by Johnson and Sieburth.
1982: The same Johnson and Sieburth demonstrate the importance of small eukaryotes by electron microscopy.
1983: W.K.W Li and colleagues, including Trevor Platt show that a large fraction of marine primary production is due to organisms smaller than 2 µm.
1986: Discovery of "prochlorophytes" by Chisholm and Olson in the Sargasso Sea, named in 1992 as Prochlorococcus marinus.
1994: Discovery in the Thau lagoon in France of the smallest photosynthetic eukaryote known to date, Ostreococcus tauri, by Courties.
2001: Through sequencing of the ribosomal RNA gene extracted from marine samples, several European teams discover that eukaryotic picoplankton are highly diverse. This finding followed on the first discovery of such eukaryotic diversity in 1998 by Rappe and colleagues at Oregon State University, who were the first to apply rRNA sequencing to eukaryotic plankton in the open-ocean, where they discovered sequences that seemed distant from known phytoplankton The cells containing DNA matching one of these novel sequences were recently visualized and further analyzed using specific probes and found to be broadly distributed.
Methods of study
Because of its very small size, picoplankton is difficult to study by classic methods such as optical microscopy.
Document 3:::
Gloeomargarita lithophora is a cyanobacterium, and is the proposed sister of the endosymbiotic plastids in the eukaryote group Archaeplastida (glaucophytes, plants, green and red algae). Gloeomargarita's relative would have ended up in an ancestral archaeplastid through a single endosymbiotic event some 1900–1400 million years ago, after which it was recruited by the euglenids and some members of the SAR supergroup.
The origin of plastids by endosymbiosis signifies the beginning of photosynthesis in eukaryotes, and as such their evolutionary relationship to Gloeomargarita lithophora, perhaps as a direct divergent, is of high importance to the evolutionary history of photosynthesis. Gloeomargarita appears to be related to a (basal) Synechococcus branch. A similar endosymbiotic event occurred about 500 million years ago, with another Synechococcus-related bacterium appearing in Paulinella chromatophora.
Description
G. lithophora was first isolated in 2007 from microbialite samples taken from alkaline Lake Alchichica (Mexico). These samples were maintained in a lab aquarium and G. lithophora was isolated from biofilm that occurred within the aquarium. G. lithophora are gram-negative, unicellular rods with oxygenic photoautotrophic metabolism and gliding motility. They contain chlorophyll a and phycocyanin and photosynthetic thylakoids located peripherally. Cells are 1.1 μm wide and 3.9 μm long on average. Growth occurred in both liquid and solid BG-11 growth media, as well as in alkaline water. Optimal growth temperature is 25 °C and optimal growth pH is 8–8.5.
Bioremediation
Some evidence suggests that Gloeomargarita lithophora could serve as a biological buffer to treat water contaminated with strontium, barium, or radioactive pollutants such as radium. This could be a useful application of bioremediation.
Document 4:::
The evolution of plants has resulted in a wide range of complexity, from the earliest algal mats of unicellular archaeplastids evolved through endosymbiosis, through multicellular marine and freshwater green algae, to spore-bearing terrestrial bryophytes, lycopods and ferns, and eventually to the complex seed-bearing gymnosperms and angiosperms (flowering plants) of today. While many of the earliest groups continue to thrive, as exemplified by red and green algae in marine environments, more recently derived groups have displaced previously ecologically dominant ones; for example, the ascendance of flowering plants over gymnosperms in terrestrial environments.
There is evidence that cyanobacteria and multicellular photosynthetic eukaryotes lived in freshwater communities on land as early as 1 billion years ago, and that communities of complex, multicellular photosynthesizing organisms existed on land in the late Precambrian.
Evidence of the emergence of embryophyte land plants first occurs in the mid-Ordovician (~470 million years ago), and by the middle of the Devonian (~390 million years ago), many of the features recognised in land plants today were present, including roots and leaves. By the late Devonian (~370 million years ago) some free-sporing plants such as Archaeopteris had secondary vascular tissue that produced wood and had formed forests of tall trees. Also by the late Devonian, Elkinsia, an early seed fern, had evolved seeds.
Evolutionary innovation continued throughout the rest of the Phanerozoic eon and still continues today. Most plant groups were relatively unscathed by the Permo-Triassic extinction event, although the structures of communities changed. This may have set the scene for the appearance of the flowering plants in the Triassic (~200 million years ago), and their later diversification in the Cretaceous and Paleogene. The latest major group of plants to evolve were the grasses, which became important in the mid-Paleogene, from around 40 million years ago. The grasses, as well as many other groups, evolved new mechanisms of metabolism to survive the low CO2 and warm, dry conditions of the tropics over the last 10 million years.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What were the first photosynthetic organisms on earth?
A. mosses
B. bacteria
C. fungi
D. trees
Answer:
|
|
sciq-307
|
multiple_choice
|
What term describes a collection of similar cells that had a common embryonic origin?
|
[
"tissue",
"nucleus",
"plasma",
"organ-level organization"
] |
A
|
Relavent Documents:
Document 0:::
A cell type is a classification used to identify cells that share morphological or phenotypical features. A multicellular organism may contain cells of a number of widely differing and specialized cell types, such as muscle cells and skin cells, that differ both in appearance and function yet have identical genomic sequences. Cells may have the same genotype, but belong to different cell types due to the differential regulation of the genes they contain. Classification of a specific cell type is often done through the use of microscopy and cell-surface markers (such as those from the cluster of differentiation family that are commonly used for this purpose in immunology). Recent developments in single-cell RNA sequencing have facilitated classification of cell types based on shared gene expression patterns. This has led to the discovery of many new cell types in, e.g., the mouse cortex, hippocampus, dorsal root ganglion and spinal cord.
Animals have evolved a greater diversity of cell types in a multicellular body (100–150 different cell types), compared with 10–20 in plants, fungi, and protists. The exact number of cell types is, however, undefined, and the Cell Ontology, as of 2021, lists over 2,300 different cell types.
Multicellular organisms
All higher multicellular organisms contain cells specialised for different functions. Most distinct cell types arise from a single totipotent cell that differentiates into hundreds of different cell types during the course of development. Differentiation of cells is driven by different environmental cues (such as cell–cell interaction) and intrinsic differences (such as those caused by the uneven distribution of molecules during division). Multicellular organisms are composed of cells that fall into two fundamental types: germ cells and somatic cells. During development, somatic cells will become more specialized and form the three primary germ layers: ectoderm, mesoderm, and endoderm. After formation of the three germ layers, cells will continue to specialize.
Document 1:::
Embryomics is the identification, characterization and study of the diverse cell types which arise during embryogenesis, especially as this relates to the location and developmental history of cells in the embryo. Cell type may be determined according to several criteria: location in the developing embryo, gene expression as indicated by protein and nucleic acid markers and surface antigens, and also position on the embryogenic tree.
Embryome
There are many cell markers useful in distinguishing, classifying, separating and purifying the numerous cell types present at any given time in a developing organism. These cell markers consist of select RNAs and proteins present inside, and surface antigens present on the surface of, the cells making up the embryo. For any given cell type, these RNA and protein markers reflect the genes characteristically active in that cell type. The catalog of all these cell types and their characteristic markers is known as the organism's embryome. The word is a portmanteau of embryo and genome. “Embryome” may also refer to the totality of the physical cell markers themselves.
Embryogenesis
As an embryo develops from a fertilized egg, the single egg cell splits into many cells, which grow in number and migrate to the appropriate locations inside the embryo at appropriate times during development. As the embryo's cells grow in number and migrate, they also differentiate into an increasing number of different cell types, ultimately turning into the stable, specialized cell types characteristic of the adult organism. Each of the cells in an embryo contains the same genome, characteristic of the species, but the level of activity of each of the many thousands of genes that make up the complete genome varies with, and determines, a particular cell's type (e.g. neuron, bone cell, skin cell, muscle cell, etc.).
During embryo development (embryogenesis), many cell types are present which are not present in the adult organism.
Document 2:::
In biology, cell theory is a scientific theory first formulated in the mid-nineteenth century, that organisms are made up of cells, that they are the basic structural/organizational unit of all organisms, and that all cells come from pre-existing cells. Cells are the basic unit of structure in all organisms and also the basic unit of reproduction.
The theory was once universally accepted, but now some biologists consider non-cellular entities such as viruses to be living organisms, and thus disagree with the first tenet. As of 2021: "expert opinion remains divided roughly a third each between yes, no and don't know". As there is no universally accepted definition of life, discussion still continues.
History
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and it began the scientific study of cells, known as cell biology. When observing a piece of cork under the microscope, Hooke was able to see pores, a startling observation at the time, since no one was believed to have seen them before. Later, Matthias Schleiden and Theodor Schwann studied cells of both animals and plants. What they discovered were significant differences between the two types of cells. This put forth the idea that cells were fundamental not only to plants, but to animals as well.
Microscopes
The discovery of the cell was made possible through the invention of the microscope. In the first century BC, Romans were able to make glass. They discovered that objects appeared to be larger under the glass. The expanded use of lenses in eyeglasses in the 13th century probably led to more widespread use of simple microscopes (magnifying glasses) with limited magnification. Compound microscopes, which combine an objective lens with an eyepiece to view a real image achieving much higher magnification, first appeared in Europe around 1620. In 1665, Robert Hooke used a microscope to examine a thin slice of cork, publishing his observations in Micrographia.
Document 3:::
The branches of science known informally as omics are various disciplines in biology whose names end in the suffix -omics, such as genomics, proteomics, metabolomics, metagenomics, phenomics and transcriptomics. Omics aims at the collective characterization and quantification of pools of biological molecules that translate into the structure, function, and dynamics of an organism or organisms.
The related suffix -ome is used to address the objects of study of such fields, such as the genome, proteome or metabolome respectively. The suffix -ome as used in molecular biology refers to a totality of some sort; it is an example of a "neo-suffix" formed by abstraction from various Greek terms in -ωμα, a sequence that does not form an identifiable suffix in Greek.
Functional genomics aims at identifying the functions of as many genes as possible of a given organism. It combines different -omics techniques such as transcriptomics and proteomics with saturated mutant collections.
Origin
The Oxford English Dictionary (OED) distinguishes three different fields of application for the -ome suffix:
in medicine, forming nouns with the sense "swelling, tumour"
in botany or zoology, forming nouns in the sense "a part of an animal or plant with a specified structure"
in cellular and molecular biology, forming nouns with the sense "all constituents considered collectively"
The -ome suffix originated as a variant of -oma, and became productive in the last quarter of the 19th century. It originally appeared in terms like sclerome or rhizome. All of these terms derive from Greek words in -ωμα, a sequence that is not a single suffix, but analyzable as -ω-μα, the -ω- belonging to the word stem (usually a verb) and the -μα being a genuine Greek suffix forming abstract nouns.
The OED suggests that its third definition originated as a back-formation from mitome. Early attestations include biome (1916) and genome (first coined as German Genom in 1920).
The association with chromosome in molecular biology is by false etymology.
Document 4:::
According to the principle of nuclear equivalence, the nuclei of essentially all differentiated adult cells of an individual are genetically (though not necessarily metabolically) identical to one another and to the nucleus of the zygote from which they descended. This means that virtually all somatic cells in an adult have the same genes. However, different cells express different subsets of these genes.
The evidence for nuclear equivalence comes from cases in which differentiated cells or their nuclei have been found to retain the potential of directing the development of the entire organism. Such cells or nuclei are said to exhibit totipotency.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What term describes a collection of similar cells that had a common embryonic origin?
A. tissue
B. nucleus
C. plasma
D. organ-level organization
Answer:
|
|
sciq-28
|
multiple_choice
|
What is defined as a change in the inherited traits of organisms over time?
|
[
"evolution",
"divergence",
"variation",
"generation"
] |
A
|
Relavent Documents:
Document 0:::
Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, the genetic variations affect the phenotypes (physical characteristics) of an organism. These changes in the phenotypes will be an advantage to some organisms, which will then be passed onto their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology.
The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis.
Subfields
Evolution is the central unifying concept in biology. Biology can be divided up in various ways. One way is by the level of biological organization, from molecular to cell, organism to population. Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what was once seen as the major divisions of life. A third way is by approaches, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolutionary biology to create subfields like evolutionary ecology and evolutionary developmental biology.
Document 1:::
Ecological inheritance occurs when organisms inhabit a modified environment that a previous generation created; it was first described in Odling-Smee (1988) and Odling-Smee et al. (1996) as a consequence of niche construction. Standard evolutionary theory focuses on the influence that natural selection and genetic inheritance have on biological evolution, when individuals that survive and reproduce also transmit genes to their offspring. If offspring do not live in a modified environment created by their parents, then the niche construction activities of parents do not affect the selective pressures on their offspring (see orb-web spiders in Genetic inheritance vs. ecological inheritance below). However, when niche construction affects multiple generations (i.e., parents and offspring), ecological inheritance acts as an inheritance system distinct from genetic inheritance.
Since ecological inheritance is a result of ecosystem engineering and niche construction, the fitness of several species and their subsequent generations experience a selective pressure dependent on the modified environment they inherit. Organisms in subsequent generations will encounter ecological inheritance because they are affected by a new selective environment created by prior niche construction. On a macroevolutionary scale, ecological inheritance has been defined as, "the persistence of environmental modifications by a species over multiple generations to influence the evolution of that or other species." Ecological inheritance has also been defined as, "... the accumulation of environmental changes, such as altered soil, atmosphere or ocean states that previous generations have brought about through their niche-constructing activity, and that influence the development of descendant organisms."
Related to niche construction and ecological inheritance are factors and features of an organism and environment, respectively, where a feature of an organism is synonymous with an adaptation if natural selection has acted on it.
Document 2:::
In biology, evolution is the process of change in all forms of life over generations, and evolutionary biology is the study of how evolution occurs. Biological populations evolve through genetic changes that correspond to changes in the organisms' observable traits. Genetic changes include mutations, which are caused by damage or replication errors in organisms' DNA. As the genetic variation of a population drifts randomly over generations, natural selection gradually leads traits to become more or less common based on the relative reproductive success of organisms with those traits.
The age of the Earth is about 4.5 billion years. The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago. Evolution does not attempt to explain the origin of life (covered instead by abiogenesis), but it does explain how early lifeforms evolved into the complex ecosystem that we see today. Based on the similarities between all present-day organisms, all life on Earth is assumed to have originated through common descent from a last universal ancestor from which all known species have diverged through the process of evolution.
All individuals have hereditary material in the form of genes received from their parents, which they pass on to any offspring. Among offspring there are variations of genes due to the introduction of new genes via random changes called mutations or via reshuffling of existing genes during sexual reproduction. The offspring differs from the parent in minor random ways. If those differences are helpful, the offspring is more likely to survive and reproduce. This means that more offspring in the next generation will have that helpful difference and individuals will not have equal chances of reproductive success. In this way, traits that result in organisms being better adapted to their living conditions become more common in descendant populations. These differences accumulate, resulting in changes within the population. This process, repeated over many generations, is evolution.
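To make the verbal description above concrete, here is a minimal Python sketch of selection acting on a single helpful variant (the fitness values and starting frequency are invented; the update rule is the standard discrete-time haploid selection model):

p = 0.01                 # starting frequency of a helpful variant
w_a, w_b = 1.05, 1.00    # relative reproductive success with/without it

for generation in range(200):
    mean_w = p * w_a + (1 - p) * w_b   # population mean fitness
    p = p * w_a / mean_w               # variant's share of the next generation
print(round(p, 3))  # ~0.99: the helpful difference has become common

Because carriers leave slightly more offspring each generation, the variant's frequency rises until it dominates the population, which is exactly the accumulation of change described above.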
Document 3:::
An acquired characteristic is a non-heritable change in a function or structure of a living organism caused after birth by disease, injury, accident, deliberate modification, variation, repeated use, disuse, misuse, or other environmental influence. Acquired traits are synonymous with acquired characteristics. They are not passed on to offspring through reproduction.
The changes that constitute acquired characteristics can have many manifestations and degrees of visibility, but they all have one thing in common. They change a facet of a living organism's function or structure after birth.
For example:
The muscles acquired by a bodybuilder through physical training and diet.
The loss of a limb due to an injury.
The miniaturization of bonsai plants through careful cultivation techniques.
Acquired characteristics can be minor and temporary like bruises, blisters, or shaving body hair. Permanent but inconspicuous or invisible ones are corrective eye surgery and organ transplant or removal.
Semi-permanent but inconspicuous or invisible traits are vaccination and laser hair removal. Perms, tattoos, scars, and amputations are semi-permanent and highly visible.
Applying makeup, nail polish, dying one's hair, applying henna to the skin, and tooth whitening are not examples of acquired traits. They change the appearance of a facet of an organism, but do not change the structure or functionality.
Inheritance of acquired characteristics was historically proposed by renowned theorists such as Hippocrates, Aristotle, and French naturalist Jean-Baptiste Lamarck, but the hypothesis was denounced by other renowned theorists, such as Charles Darwin.
Today, although Lamarckism is generally discredited, there is still debate on whether some acquired characteristics in organisms are actually inheritable.
Disputes
Acquired characteristics, by definition, are characteristics that are gained by an organism after birth as a result of external influences or the organism's own behavior.
Document 4:::
The theory of facilitated variation demonstrates how seemingly complex biological systems can arise through a limited number of regulatory genetic changes, through the differential re-use of pre-existing developmental components. The theory was presented in 2005 by Marc W. Kirschner (a professor and chair at the Department of Systems Biology, Harvard Medical School) and John C. Gerhart (a professor at the Graduate School, University of California, Berkeley).
The theory of facilitated variation addresses the nature and function of phenotypic variation in evolution. Recent advances in cellular and evolutionary developmental biology shed light on a number of mechanisms for generating novelty. Most anatomical and physiological traits that have evolved since the Cambrian are, according to Kirschner and Gerhart, the result of regulatory changes in the usage of various conserved core components that function in development and physiology. Novel traits arise as novel packages of modular core components, which requires modest genetic change in regulatory elements. The modularity and adaptability of developmental systems reduces the number of regulatory changes needed to generate adaptive phenotypic variation, increases the probability that genetic mutation will be viable, and allows organisms to respond flexibly to novel environments. In this manner, the conserved core processes facilitate the generation of adaptive phenotypic variation, which natural selection subsequently propagates.
Description of the theory
The theory of facilitated variation consists of several elements. Organisms are built from a set of highly conserved modules called "core processes" that function in development and physiology, and have remained largely unchanged for millions (in some instances billions) of years. Genetic mutation leads to regulatory changes in the package of core components (i.e. new combinations, amounts, and functional states of those components) exhibited by an organism. Finally, natural selection acts upon the resulting phenotypic variation.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is defined as a change in the inherited traits of organisms over time?
A. evolution
B. divergence
C. variation
D. generation
Answer:
|
|
sciq-5654
|
multiple_choice
|
What is the measure of the amount of space occupied by an object?
|
[
"volume",
"liquid",
"growth",
"mass"
] |
A
|
Relavent Documents:
Document 0:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about the subject is then a subset of Q; the set of all such feasible states forms the knowledge structure.
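To make the structure concrete, here is a minimal Python sketch of the union-closure check that characterizes a knowledge space in the Doignon–Falmagne theory (the skill names and the family of states are invented for illustration):

from itertools import combinations

def is_knowledge_space(domain, states):
    # A knowledge space contains the empty set and the full domain
    # and is closed under union of feasible states.
    states = {frozenset(s) for s in states}
    if frozenset() not in states or frozenset(domain) not in states:
        return False
    return all(a | b in states for a, b in combinations(states, 2))

Q = {"counting", "addition", "multiplication"}
K = [set(), {"counting"}, {"counting", "addition"}, Q]
print(is_knowledge_space(Q, K))  # True: every union of feasible states is feasible

Note that a state containing "addition" alone is deliberately absent here: addition is modeled as requiring counting as a prerequisite, which is exactly the kind of dependency the structure encodes.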
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
impossible to tell/need more information
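A quick numerical check of this example, sketched in Python (the initial state, volume ratio, and heat-capacity ratio are illustrative values; the relation T V^(gamma-1) = constant used below holds for a reversible adiabatic process in an ideal gas):

gamma = 5 / 3            # heat-capacity ratio of a monatomic ideal gas
T1, V1 = 300.0, 1.0      # initial temperature (K) and volume (arbitrary units)
V2 = 2.0                 # the gas expands to twice its volume

# Reversible adiabatic ideal gas: T * V**(gamma - 1) is constant
T2 = T1 * (V1 / V2) ** (gamma - 1)
print(T2)  # ~189 K, below 300 K, so the temperature decreases

The conceptual route to the same answer needs no numbers: the expanding gas does work on its surroundings, no heat enters (adiabatic), so the internal energy, and hence the temperature of an ideal gas, must fall.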
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in conceptual understanding over the course.
Document 2:::
Quantity calculus is the formal method for describing the mathematical relations between abstract physical quantities.
Its roots can be traced to Fourier's concept of dimensional analysis (1822). The basic axiom of quantity calculus is Maxwell's description of a physical quantity as the product of a "numerical value" and a "reference quantity" (i.e. a "unit quantity" or a "unit of measurement"). De Boer summarized the multiplication, division, addition, association and commutation rules of quantity calculus and proposed that a full axiomatization has yet to be completed.
Measurements are expressed as products of a numeric value with a unit symbol, e.g. "12.7 m". Unlike algebra, the unit symbol represents a measurable quantity such as a meter, not an algebraic variable.
A careful distinction needs to be made between abstract quantities and measurable quantities. The multiplication and division rules of quantity calculus are applied to SI base units (which are measurable quantities) to define SI derived units, including dimensionless derived units, such as the radian (rad) and steradian (sr) which are useful for clarity, although they are both algebraically equal to 1. Thus there is some disagreement about whether it is meaningful to multiply or divide units. Emerson suggests that if the units of a quantity are algebraically simplified, they then are no longer units of that quantity. Johansson proposes that there are logical flaws in the application of quantity calculus, and that the so-called dimensionless quantities should be understood as "unitless quantities".
How to use quantity calculus for unit conversion and keeping track of units in algebraic manipulations is explained in the handbook Quantities, Units and Symbols in Physical Chemistry.
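To illustrate Maxwell's "numerical value times reference quantity" description and the multiplication and division rules, here is a minimal Python sketch (a toy Quantity class invented for illustration, not a full unit system):

class Quantity:
    # A physical quantity as a numerical value times a unit, where the
    # unit is tracked as integer exponents of base symbols (m, s, kg, ...).
    def __init__(self, value, units):
        self.value, self.units = value, dict(units)

    def __mul__(self, other):
        merged = dict(self.units)
        for symbol, exponent in other.units.items():
            merged[symbol] = merged.get(symbol, 0) + exponent
        return Quantity(self.value * other.value,
                        {s: e for s, e in merged.items() if e})

    def __truediv__(self, other):
        inverse = Quantity(1 / other.value,
                           {s: -e for s, e in other.units.items()})
        return self * inverse

    def __repr__(self):
        return f"{self.value} {self.units}"

distance = Quantity(12.7, {"m": 1})
duration = Quantity(2.0, {"s": 1})
print(distance / duration)  # 6.35 {'m': 1, 's': -1}, i.e. 6.35 m/s

Unlike a bare float, the result carries its units: the unit symbols participate in the algebra rather than being dropped, which is the point of the calculus.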
Document 3:::
The surface-area-to-volume ratio or surface-to-volume ratio (denoted as SA:V, SA/V, or sa/vol) is the ratio between surface area and volume of an object or collection of objects.
SA:V is an important concept in science and engineering. It is used to explain the relation between structure and function in processes occurring through the surface and the volume. Good examples of such processes are processes governed by the heat equation, that is, diffusion and heat transfer by thermal conduction. SA:V is used to explain the diffusion of small molecules, like oxygen and carbon dioxide, between air, blood and cells, water loss by animals, bacterial morphogenesis, organisms' thermoregulation, the design of artificial bone tissue, artificial lungs and many more biological and biotechnological structures. For more examples see Glazier.
The relation between SA:V and the diffusion or heat conduction rate is explained from a flux and surface perspective, focusing on the surface of a body as the place where diffusion, or heat conduction, takes place: the larger the SA:V, the more surface area per unit volume there is through which material can diffuse, and therefore the faster the diffusion or heat conduction. A similar explanation appears in the literature: "Small size implies a large ratio of surface area to volume, thereby helping to maximize the uptake of nutrients across the plasma membrane", and elsewhere.
For a given volume, the object with the smallest surface area (and therefore with the smallest SA:V) is a ball, a consequence of the isoperimetric inequality in 3 dimensions. By contrast, objects with acute-angled spikes will have very large surface area for a given volume.
For solid spheres
A solid sphere or ball is a three-dimensional object, being the solid figure bounded by a sphere. (In geometry, the term sphere properly refers only to the surface, so a sphere thus lacks volume in this context.)
For an ordinary three-dimensional ball, the SA:V can be calculated using SA:V = 4πr² / ((4/3)πr³) = 3/r, where r is the radius of the sphere.
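The 3/r result can be verified numerically; a minimal Python sketch (the radii are arbitrary):

import math

def sa_to_v(r):
    # Surface-area-to-volume ratio of a solid sphere of radius r.
    return (4 * math.pi * r**2) / ((4 / 3) * math.pi * r**3)

for r in (0.5, 1.0, 2.0, 10.0):
    print(r, sa_to_v(r), 3 / r)  # the last two columns agree: SA:V = 3/r

The inverse dependence on r is the quantitative form of the statement above: halving the radius doubles the surface area available per unit volume.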
Document 4:::
Physical or chemical properties of materials and systems can often be categorized as being either intensive or extensive, according to how the property changes when the size (or extent) of the system changes.
The terms "intensive and extensive quantities" were introduced into physics by German mathematician Georg Helm in 1898, and by American physicist and chemist Richard C. Tolman in 1917.
According to International Union of Pure and Applied Chemistry (IUPAC), an intensive property or intensive quantity is one whose magnitude is independent of the size of the system.
An intensive property is not necessarily homogeneously distributed in space; it can vary from place to place in a body of matter and radiation. Examples of intensive properties include temperature, T; refractive index, n; density, ρ; and hardness, η.
By contrast, an extensive property or extensive quantity is one whose magnitude is additive for subsystems.
Examples include mass, volume and entropy.
Not all properties of matter fall into these two categories. For example, the square root of the volume is neither intensive nor extensive. If a system is doubled in size by juxtaposing a second identical system, the value of an intensive property equals the value for each subsystem and the value of an extensive property is twice the value for each subsystem. However, the property √V is instead multiplied by √2.
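The juxtaposition test can be checked directly; a minimal Python sketch (the subsystem's mass and volume are made-up values):

import math

V, m = 2.0, 6.0     # volume and mass of one subsystem (both extensive)
rho = m / V         # density (intensive)

V2, m2 = 2 * V, 2 * m                # juxtapose an identical subsystem
print(m2 / V2 == rho)                # True: density is unchanged (intensive)
print(math.sqrt(V2) / math.sqrt(V))  # ~1.414: sqrt(V) scales by sqrt(2),
                                     # so it is neither intensive nor extensive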
Intensive properties
An intensive property is a physical quantity whose value does not depend on the amount of substance which was measured. The most obvious intensive quantities are ratios of extensive quantities. In a homogeneous system divided into two halves, all its extensive properties, in particular its volume and its mass, are divided into two halves. All its intensive properties, such as the mass per volume (mass density) or volume per mass (specific volume), must remain the same in each half.
The temperature of a system in thermal equilibrium is the same as the temperature of any part
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the measure of the amount of space occupied by an object?
A. volume
B. liquid
C. growth
D. mass
Answer:
|
|
sciq-4715
|
multiple_choice
|
Prokaryotic cells can only regulate gene expression by controlling the amount of what?
|
[
"RNA processing",
"folding",
"transcription",
"translation"
] |
C
|
Relevant Documents:
Document 0:::
In molecular biology and genetics, transcriptional regulation is the means by which a cell regulates the conversion of DNA to RNA (transcription), thereby orchestrating gene activity. A single gene can be regulated in a range of ways, from altering the number of copies of RNA that are transcribed, to the temporal control of when the gene is transcribed. This control allows the cell or organism to respond to a variety of intra- and extracellular signals and thus mount a response. Some examples of this include producing the mRNA that encode enzymes to adapt to a change in a food source, producing the gene products involved in cell cycle specific activities, and producing the gene products responsible for cellular differentiation in multicellular eukaryotes, as studied in evolutionary developmental biology.
The regulation of transcription is a vital process in all living organisms. It is orchestrated by transcription factors and other proteins working in concert to finely tune the amount of RNA being produced through a variety of mechanisms. Bacteria and eukaryotes have very different strategies of accomplishing control over transcription, but some important features remain conserved between the two. Most important is the idea of combinatorial control, which is that any given gene is likely controlled by a specific combination of factors to control transcription. In a hypothetical example, the factors A and B might regulate a distinct set of genes from the combination of factors A and C. This combinatorial nature extends to complexes of far more than two proteins, and allows a very small subset (less than 10%) of the genome to control the transcriptional program of the entire cell.
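As a toy illustration of combinatorial control (the factor and gene names are hypothetical, mirroring the A/B/C example above), a lookup keyed on the set of bound factors shows how the same factor A can drive different programs with different partners:

    # Hypothetical combinatorial regulatory code: the combination of bound
    # transcription factors, not any single factor, selects the target genes.
    regulatory_code = {
        frozenset({"A", "B"}): ["gene1", "gene2"],  # A together with B
        frozenset({"A", "C"}): ["gene3", "gene4"],  # A together with C
    }

    def activated_genes(bound_factors):
        """Genes transcribed for a given combination of bound factors."""
        return regulatory_code.get(frozenset(bound_factors), [])

    print(activated_genes({"A", "B"}))  # ['gene1', 'gene2']
    print(activated_genes({"A", "C"}))  # ['gene3', 'gene4']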
In bacteria
Much of the early understanding of transcription came from bacteria, although the extent and complexity of transcriptional regulation is greater in eukaryotes. Bacterial transcription is governed by three main sequence elements:
Promoters are elements of DNA that may bind
Document 1:::
MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States.
Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students; the remainder require a subscription. At present the service is suspended with the message: "Please check back with us in 2017".
External links
MicrobeLibrary
Microbiology
Document 2:::
A gene (or genetic) regulatory network (GRN) is a collection of molecular regulators that interact with each other and with other substances in the cell to govern the gene expression levels of mRNA and proteins which, in turn, determine the function of the cell. GRNs also play a central role in morphogenesis, the creation of body structures, which in turn is central to evolutionary developmental biology (evo-devo).
The regulator can be DNA, RNA, protein or any combination of two or more of these three that form a complex, such as a specific sequence of DNA and a transcription factor to activate that sequence. The interaction can be direct or indirect (through transcribed RNA or translated protein). In general, each mRNA molecule goes on to make a specific protein (or set of proteins). In some cases this protein will be structural, and will accumulate at the cell membrane or within the cell to give it particular structural properties. In other cases the protein will be an enzyme, i.e., a micro-machine that catalyses a certain reaction, such as the breakdown of a food source or toxin. Some proteins though serve only to activate other genes, and these are the transcription factors that are the main players in regulatory networks or cascades. By binding to the promoter region at the start of other genes they turn them on, initiating the production of another protein, and so on. Some transcription factors are inhibitory.
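A common minimal formalism for such regulatory cascades is a Boolean network, in which each gene is either on or off and its next state is a logical function of its regulators. A sketch with invented gene names and rules:

    # Toy Boolean gene cascade: TF1 activates gene_a; gene_a's product
    # activates gene_b; an external repressor R inhibits gene_b.
    def step(state):
        return {
            "TF1":    state["TF1"],                        # external input, held fixed
            "R":      state["R"],                          # external repressor, held fixed
            "gene_a": state["TF1"],                        # activated by TF1
            "gene_b": state["gene_a"] and not state["R"],  # on via gene_a unless repressed
        }

    state = {"TF1": True, "R": False, "gene_a": False, "gene_b": False}
    for t in range(3):
        print(t, state)
        state = step(state)
    # The cascade switches gene_a on at t = 1 and gene_b on at t = 2.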
In single-celled organisms, regulatory networks respond to the external environment, optimising the cell at a given time for survival in this environment. Thus a yeast cell, finding itself in a sugar solution, will turn on genes to make enzymes that process the sugar to alcohol. This process, which we associate with wine-making, is how the yeast cell makes its living, gaining energy to multiply, which under normal circumstances would enhance its survival prospects.
In multicellular animals the same principle has been put in the service of gene cascades
Document 3:::
This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines.
Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of the life sciences involves understanding the mind: neuroscience. Discoveries in the life sciences are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, they have provided information on certain diseases, which has aided overall in the understanding of human health.
Basic life science branches
Biology – scientific study of life
Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans
Astrobiology – the study of the formation and presence of life in the universe
Bacteriology – study of bacteria
Biotechnology – study of combination of both the living organism and technology
Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level
Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge
Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum
Document 4:::
Gene expression is the process by which information from a gene is used in the synthesis of a functional gene product that enables it to produce end products, proteins or non-coding RNA, and ultimately affect a phenotype. These products are often proteins, but in non-protein-coding genes such as transfer RNA (tRNA) and small nuclear RNA (snRNA), the product is a functional non-coding RNA. Gene expression is summarized in the central dogma of molecular biology first formulated by Francis Crick in 1958, further developed in his 1970 article, and expanded by the subsequent discoveries of reverse transcription and RNA replication.
The process of gene expression is used by all known life—eukaryotes (including multicellular organisms), prokaryotes (bacteria and archaea), and utilized by viruses—to generate the macromolecular machinery for life.
In genetics, gene expression is the most fundamental level at which the genotype gives rise to the phenotype, i.e. observable trait. The genetic information stored in DNA represents the genotype, whereas the phenotype results from the "interpretation" of that information. Such phenotypes are often displayed by the synthesis of proteins that control the organism's structure and development, or that act as enzymes catalyzing specific metabolic pathways.
All steps in the gene expression process may be modulated (regulated), including the transcription, RNA splicing, translation, and post-translational modification of a protein. Regulation of gene expression gives control over the timing, location, and amount of a given gene product (protein or ncRNA) present in a cell and can have a profound effect on the cellular structure and function. Regulation of gene expression is the basis for cellular differentiation, development, morphogenesis and the versatility and adaptability of any organism. Gene regulation may therefore serve as a substrate for evolutionary change.
Mechanism
Transcription
The production of an RNA copy from a DNA st
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Prokaryotic cells can only regulate gene expression by controlling the amount of what?
A. RNA processing
B. folding
C. transcription
D. translation
Answer:
|
|
sciq-7217
|
multiple_choice
|
Who proposed that everything in the universe exerts a force of attraction on everything else?
|
[
"wilson",
"newton",
"einstein",
"bell"
] |
B
|
Relevant Documents:
Document 0:::
Physics education or physics teaching refers to the education methods currently used to teach physics. The occupation is called physics educator or physics teacher. Physics education research refers to an area of pedagogical research that seeks to improve those methods. Historically, physics has been taught at the high school and college level primarily by the lecture method together with laboratory exercises aimed at verifying concepts taught in the lectures. These concepts are better understood when lectures are accompanied by demonstrations, hands-on experiments, and questions that require students to ponder what will happen in an experiment and why. Students who participate in active learning, for example with hands-on experiments, learn through self-discovery. By trial and error they learn to change their preconceptions about phenomena in physics and discover the underlying concepts. Physics education is part of the broader area of science education.
Ancient Greece
Aristotle wrote what is considered now as the first textbook of physics. Aristotle's ideas were taught unchanged until the Late Middle Ages, when scientists started making discoveries that didn't fit them. For example, Copernicus' discovery contradicted Aristotle's idea of an Earth-centric universe. Aristotle's ideas about motion weren't displaced until the end of the 17th century, when Newton published his ideas.
Today's physics students often think of physics concepts in Aristotelian terms, despite being taught only Newtonian concepts.
Hong Kong
High schools
In Hong Kong, physics is a subject for public examination. Local students in Form 6 take the public exam of Hong Kong Diploma of Secondary Education (HKDSE).
Compared with other syllabi such as GCSE and GCE, which cover a wider and broader range of topics, the Hong Kong syllabus goes into greater depth and involves more challenging calculations. Topics are narrowed down to a smaller number than in the A-level due to the insufficient teachi
Document 1:::
Newton for Beginners, republished as Introducing Newton, is a 1993 graphic study guide to the Isaac Newton and classical physics written and illustrated by William Rankin. The volume, according to the publisher's website, "explains the extraordinary ideas of a man who [...] single-handedly made enormous advances in mathematics, mechanics and optics," and, "was also a secret heretic, a mystic and an alchemist."
"William Rankin," Public Understanding of Science reviewer Patrick Fullick confirms, "sets out to illuminate the man whose work laid the foundations of the physics of the last 350 years, and to place him and his work in the context of the times in which he lived." New Scientist reviewer Roy Herbert adds that, "alongside theories of the Universe from ancient times, the book explains those originating since Isaac Newton, so placing him deftly in his scientific context."
Publication History
This volume was originally published in the UK by Icon Books in 1993 as Newton for Beginners, and subsequently republished with different covers in different editions.
Selected editions:
Related volumes in the series:
Reception
"This book shares the general characteristics of the Beginners series with a large number of line drawings and cartoons with associated text and many asides," states Patrick Fullick, writing in Public Understanding of Science, "for some readers the asides may seem idiosyncratic or even annoying." "Some may dislike the humour and bad puns that abound in this work," confirms Bill Palmer, writing in the Journal of the Science Teacher Association of the Northern Territory, "but I suspect that those starting the study of Newton's life and work will appreciate this attempt to facilitate reading."
"The book is well-grounded in recent historiography," and, "Rankin is clearly sympathetic towards his subject," states Fullick, "but inevitably Newton still comes over as one whose intellectual vanity was at times apt to overcome his self-control." Roy Herbert
Document 2:::
Categories: On the Beauty of Physics is a non-fiction science and art book edited, co-written, and published by American author Hilary Thayer Hamann in 2006. The book was conceived as a multidisciplinary educational tool that uses art and literature to broaden the reader's understanding of challenging material. Alan Lightman, author of Einstein's Dreams, called Categories "A beautiful synthesis of science and art, pleasing to the mind and to the eye," and Dr. Helen Caldicott, founder and president of the Nuclear Policy Research Institute, said, "This wonderful book will provoke thought in lovers of science and art alike, and with knowledge comes the inspiration to preserve the beauty of life on Earth."
Author
Hamann is co-writer, creative and editorial director of Categories—On the Beauty of Physics (2006), a multidisciplinary, interdisciplinary educational text that uses imagery to facilitate the reader's encounter with challenging material. She worked with physicist Emiliano Seffusati, Ph.D., who wrote the science text, and collage artist John Morse, who created the original artwork.
Overview
Categories is a book about physics that uses literature and art to stimulate the wonder and interest of the reader. It is intended to promote scientific literacy, foster an appreciation of the humanities, and encourage readers to make informed and imaginative connections between the sciences and the arts.
Hamann intended the physics book to be the first in a series, with subsequent titles to focus on biology and chemistry, and for the three titles to form the cornerstone of a television series for adolescents and their parents.
Criticism
Library Journal gave the book a starred review, calling Categories "a gorgeous book," "a comprehensive overview of physics," and "highly recommended."
The book received high praise from critics and scientists.
Cognitive scientist, Harvard professor, and author of The Language Instinct (1994), and How the Mind Works (1997) Steven Pinker
Document 3:::
The study of electromagnetism in higher education, as a fundamental part of both physics and engineering, is typically accompanied by textbooks devoted to the subject. The American Physical Society and the American Association of Physics Teachers recommend a full year of graduate study in electromagnetism for all physics graduate students. A joint task force by those organizations in 2006 found that in 76 of the 80 US physics departments surveyed, a course using John David Jackson's Classical Electrodynamics was required for all first year graduate students. For undergraduates, there are several widely used textbooks, including David Griffiths' Introduction to Electrodynamics and Electricity and Magnetism by Edward Mills Purcell and D. J. Morin. Also at an undergraduate level, Richard Feynman's classic The Feynman Lectures on Physics is available online to read for free.
Undergraduate
There are several widely used undergraduate textbooks in electromagnetism, including David Griffiths' Introduction to Electrodynamics as well as Electricity and Magnetism by Edward Mills Purcell and D. J. Morin. The Feynman Lectures on Physics also include a volume on electromagnetism that is available to read online for free, through the California Institute of Technology. In addition, there are popular physics textbooks that include electricity and magnetism among the material they cover, such as David Halliday and Robert Resnick's Fundamentals of Physics.
Graduate
A 2006 report by a joint taskforce between the American Physical Society and the American Association of Physics Teachers found that 76 of the 80 physics departments surveyed require a first-year graduate course in John David Jackson's Classical Electrodynamics. This made Jackson's book the most popular textbook in any field of graduate-level physics, with Herbert Goldstein's Classical Mechanics as the second most popular with adoption at 48 universities. In a 2015 review of Andrew Zangwill's Modern Electrodynamics in
Document 4:::
Physics outreach encompasses facets of science outreach and physics education, and a variety of activities by schools, research institutes, universities, clubs and institutions such as science museums aimed at broadening the audience for and awareness and understanding of physics. While the general public may sometimes be the focus of such activities, physics outreach often centers on developing and providing resources and making presentations to students, educators in other disciplines, and in some cases researchers within different areas of physics.
History
Ongoing efforts to expand the understanding of physics to a wider audience have been undertaken by individuals and institutions since the early 19th century. Historic works, such as the Dialogue Concerning the Two Chief World Systems, and Two New Sciences by Galileo Galilei, sought to present revolutionary knowledge in astronomy, frames of reference, and kinematics in a manner that a general audience could understand with great effect.
In the mid-1800s, the English physicist and chemist Michael Faraday gave a series of nineteen lectures aimed at young adults in the hope of conveying scientific phenomena. His intention was to raise awareness, inspire his audience, and generate revenue for the Royal Institution. This series became known as the Christmas lectures, and still continues today. By the early 20th century, the public fame of physicists such as Albert Einstein and Marie Curie, and inventions such as radio, led to a growing interest in physics. In 1921, in the United States, the establishment of the Sigma Pi Sigma physics honor society at universities was instrumental in the expanding number of physics presentations, and led to the creation of physics clubs open to all students.
Museums were an important form of outreach but most early science museums were generally focused on natural history. Some specialized museums, such as the Cavendish Museum at University of Cambridge, housed many of the historicall
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Who proposed that everything in the universe exerts a force of attraction on everything else?
A. wilson
B. newton
C. einstein
D. bell
Answer:
|
|
sciq-2915
|
multiple_choice
|
How is energy expressed when it is released in a chemical reaction?
|
[
"as negative number",
"as an equation",
"as a percentage",
"as a positive number"
] |
A
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
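For reference, the intended answer depends on how the expansion is carried out, which is part of what makes this a good conceptual question: in a quasi-static adiabatic expansion the gas does work at the expense of its internal energy and T·V^(γ−1) stays constant, so the temperature decreases, whereas in an unresisted free expansion the temperature of an ideal gas is unchanged. A numeric sketch of the quasi-static case (illustrative values, γ = 5/3 for a monatomic ideal gas):

    gamma = 5.0 / 3.0      # heat-capacity ratio of a monatomic ideal gas
    T1, V1 = 300.0, 1.0    # initial temperature (K) and volume (arbitrary units)

    for V2 in (1.0, 2.0, 4.0):
        T2 = T1 * (V1 / V2) ** (gamma - 1.0)  # T * V^(gamma - 1) is constant
        print(f"V = {V2:.1f}  ->  T = {T2:.1f} K")
    # Expansion (V2 > V1) lowers T; no heat flows in to make up the work done.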
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Activation energy asymptotics (AEA), also known as large activation energy asymptotics, is an asymptotic analysis used in the combustion field utilizing the fact that the reaction rate is extremely sensitive to temperature changes due to the large activation energy of the chemical reaction.
History
The techniques were pioneered by the Russian scientists Yakov Borisovich Zel'dovich, David A. Frank-Kamenetskii and co-workers in the 1930s, in their studies of premixed flames and thermal explosions (Frank-Kamenetskii theory), but did not become popular with Western scientists until the 1970s. In the early 1970s, due to the pioneering work of Williams B. Bush, Francis E. Fendell, Forman A. Williams, Amable Liñán and John F. Clarke, the method became popular in the Western community and has since been widely used to explain more complicated problems in combustion.
Method overview
In combustion processes, the reaction rate is dependent on temperature in the following form (Arrhenius law),

ω ∝ e^(−E_a/(R T)),

where E_a is the activation energy and R is the universal gas constant. In general, the condition E_a/(R T_b) ≫ 1 is satisfied, where T_b is the burnt gas temperature. This condition forms the basis for activation energy asymptotics. Denoting T_u for the unburnt gas temperature, one can define the Zel'dovich number and the heat release parameter as follows:

β = E_a (T_b − T_u) / (R T_b²),   α = (T_b − T_u) / T_b.

In addition, if we define a non-dimensional temperature

θ = (T − T_u) / (T_b − T_u),

such that θ approaches zero in the unburnt region and approaches unity in the burnt gas region (in other words, 0 ≤ θ ≤ 1), then the ratio of the reaction rate at any temperature to the reaction rate at the burnt gas temperature is given by

ω(T)/ω(T_b) = exp[ −β (1 − θ) / (1 − α (1 − θ)) ].

Now in the limit of β → ∞ (large activation energy) with α = O(1), the reaction rate is exponentially small, i.e., O(e^(−β)), and negligible everywhere, but non-negligible when 1 − θ = O(1/β). In other words, the reaction rate is negligible everywhere except in a small region very close to the burnt gas temperature, where 1 − θ ∼ 1/β. Thus, in solving the conservation equations, one identifies two different regimes, at leading order:
Outer convective-diffusive zone
I
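To see the sharp localization of the reaction numerically, the following sketch (parameter values are illustrative) evaluates the rate ratio exp[−β(1−θ)/(1−α(1−θ))] reconstructed above across θ for a moderately large Zel'dovich number:

    import math

    beta, alpha = 10.0, 0.85  # illustrative Zel'dovich number and heat release

    def rate_ratio(theta):
        """omega(T)/omega(T_b) under the Arrhenius law, in AEA variables."""
        return math.exp(-beta * (1.0 - theta) / (1.0 - alpha * (1.0 - theta)))

    for theta in (0.0, 0.5, 0.9, 0.99, 1.0):
        print(f"theta = {theta:4.2f}  rate ratio = {rate_ratio(theta):.3e}")
    # The ratio is exponentially small except where 1 - theta = O(1/beta),
    # i.e. in a thin layer near the burnt-gas temperature.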
Document 2:::
A chemical equation is the symbolic representation of a chemical reaction in the form of symbols and chemical formulas. The reactant entities are given on the left-hand side and the product entities are on the right-hand side with a plus sign between the entities in both the reactants and the products, and an arrow that points towards the products to show the direction of the reaction. The chemical formulas may be symbolic, structural (pictorial diagrams), or intermixed. The coefficients next to the symbols and formulas of entities are the absolute values of the stoichiometric numbers. The first chemical equation was diagrammed by Jean Beguin in 1615.
Structure
A chemical equation (see an example below) consists of a list of reactants (the starting substances) on the left-hand side, an arrow symbol, and a list of products (substances formed in the chemical reaction) on the right-hand side. Each substance is specified by its chemical formula, optionally preceded by a number called stoichiometric coefficient. The coefficient specifies how many entities (e.g. molecules) of that substance are involved in the reaction on a molecular basis. If not written explicitly, the coefficient is equal to 1. Multiple substances on any side of the equation are separated from each other by a plus sign.
As an example, the equation for the reaction of hydrochloric acid with sodium can be denoted:

2 HCl + 2 Na -> 2 NaCl + H2
Given the formulas are fairly simple, this equation could be read as "two H-C-L plus two N-A yields two N-A-C-L and H two." Alternately, and in general for equations involving complex chemicals, the chemical formulas are read using IUPAC nomenclature, which could verbalise this equation as "two hydrochloric acid molecules and two sodium atoms react to form two formula units of sodium chloride and a hydrogen gas molecule."
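A minimal sketch that verifies the stoichiometric balance of this equation by counting atoms of each element on both sides (compositions are hard-coded for clarity):

    from collections import Counter

    # 2 HCl + 2 Na -> 2 NaCl + H2, as (coefficient, composition) pairs.
    reactants = [(2, {"H": 1, "Cl": 1}), (2, {"Na": 1})]
    products  = [(2, {"Na": 1, "Cl": 1}), (1, {"H": 2})]

    def atom_count(side):
        """Total atoms of each element on one side of the equation."""
        total = Counter()
        for coeff, formula in side:
            for element, n in formula.items():
                total[element] += coeff * n
        return total

    print(atom_count(reactants))                          # two each of H, Cl, Na
    print(atom_count(reactants) == atom_count(products))  # True: balanced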
Reaction types
Different variants of the arrow symbol are used to denote the type of a reaction:
->  net forward reaction
Document 3:::
In chemistry, yield, also referred to as reaction yield, is a measure of the quantity of moles of a product formed in relation to the reactant consumed, obtained in a chemical reaction, usually expressed as a percentage. Yield is one of the primary factors that scientists must consider in organic and inorganic chemical synthesis processes. In chemical reaction engineering, "yield", "conversion" and "selectivity" are terms used to describe ratios of how much of a reactant was consumed (conversion), how much desired product was formed (yield) in relation to the undesired product (selectivity), represented as X, Y, and S.
Definitions
In chemical reaction engineering, "yield", "conversion" and "selectivity" are terms used to describe ratios of how much of a reactant has reacted (conversion), how much of a desired product was formed (yield), and how much desired product was formed relative to the undesired product (selectivity), represented as X, Y, and S.
According to the Elements of Chemical Reaction Engineering manual, yield refers to the amount of a specific product formed per mole of reactant consumed. In chemistry, the mole is used to describe quantities of reactants and products in chemical reactions.
The Compendium of Chemical Terminology defined yield as the "ratio expressing the efficiency of a mass conversion process. The yield coefficient is defined as the amount of cell mass (kg) or product formed (kg,mol) related to the consumed substrate (carbon or nitrogen source or oxygen in kg or moles) or to the intracellular ATP production (moles)."
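A sketch of these ratios with invented mole numbers, for a reactant A converted to a desired product D and an undesired product U (definitions vary between sources, as the surrounding text notes):

    # Illustrative mole balance for A -> D (desired) and A -> U (undesired).
    n_A_fed, n_A_left = 10.0, 2.0  # mol of A fed / remaining
    n_D, n_U = 6.0, 2.0            # mol of desired / undesired product formed

    X = (n_A_fed - n_A_left) / n_A_fed  # conversion of A
    Y = n_D / n_A_fed                   # yield of D per mole of A fed
    S = n_D / n_U                       # selectivity of D over U

    print(f"X = {X:.2f}, Y = {Y:.2f}, S = {S:.1f}")  # X = 0.80, Y = 0.60, S = 3.0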
In the section "Calculations of yields in the monitoring of reactions" in the 1996 4th edition of Vogel's Textbook of Practical Organic Chemistry (1978), the authors write that, "theoretical yield in an organic reaction is the weight of product which would be obtained if the reaction has proceeded to completion according to the chemical equation. The yield is the weight of the pure product which is isolated from the react
Document 4:::
In biology, the biological cost or metabolic price is a measure of the increased energy metabolism that is required to achieve a function. Drug resistance in microbiology, for instance, has a very high metabolic price, especially for antibiotic resistance.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How is energy expressed when it is released in a chemical reaction?
A. as negative number
B. as an equation
C. as a percentage
D. as a positive number
Answer:
|
|
sciq-9141
|
multiple_choice
|
Pressure and ________ are directly proportional at a constant volume?
|
[
"precipitation",
"temperature",
"speed",
"heating"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Physical or chemical properties of materials and systems can often be categorized as being either intensive or extensive, according to how the property changes when the size (or extent) of the system changes.
The terms "intensive and extensive quantities" were introduced into physics by German mathematician Georg Helm in 1898, and by American physicist and chemist Richard C. Tolman in 1917.
According to International Union of Pure and Applied Chemistry (IUPAC), an intensive property or intensive quantity is one whose magnitude is independent of the size of the system.
An intensive property is not necessarily homogeneously distributed in space; it can vary from place to place in a body of matter and radiation. Examples of intensive properties include temperature, T; refractive index, n; density, ρ; and hardness, η.
By contrast, an extensive property or extensive quantity is one whose magnitude is additive for subsystems.
Examples include mass, volume and entropy.
Not all properties of matter fall into these two categories. For example, the square root of the volume is neither intensive nor extensive. If a system is doubled in size by juxtaposing a second identical system, the value of an intensive property equals the value for each subsystem and the value of an extensive property is twice the value for each subsystem. However, the property √V is instead multiplied by √2.
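Pressure and temperature are both intensive quantities; for a fixed amount of ideal gas at constant volume they are directly proportional (Gay-Lussac's law). A numeric sketch with illustrative values:

    # Gay-Lussac's law: at constant V and n, P/T is constant for an ideal gas.
    P1, T1 = 100.0e3, 300.0  # initial pressure (Pa) and temperature (K)

    for T2 in (300.0, 450.0, 600.0):
        P2 = P1 * (T2 / T1)  # P2 / T2 = P1 / T1
        print(f"T = {T2:5.1f} K  ->  P = {P2 / 1e3:6.1f} kPa")
    # Doubling the absolute temperature doubles the pressure; being intensive,
    # neither quantity depends on how much gas is present.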
Intensive properties
An intensive property is a physical quantity whose value does not depend on the amount of substance which was measured. The most obvious intensive quantities are ratios of extensive quantities. In a homogeneous system divided into two halves, all its extensive properties, in particular its volume and its mass, are divided into two halves. All its intensive properties, such as the mass per volume (mass density) or volume per mass (specific volume), must remain the same in each half.
The temperature of a system in thermal equilibrium is the same as the temperature of any part
Document 2:::
Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work.
Description
Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments.
The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics.
Specialized branches include engineering optimization and engineering statistics.
Engineering mathematics in tertiary educ
Document 3:::
Swelling index may refer to the following material parameters that quantify volume change:
Crucible swelling index, also known as free swelling index, in coal assay
Swelling capacity, the amount of a liquid that can be absorbed by a polymer
Shrink–swell capacity in soil mechanics
Unload-reload constant (κ) in critical state soil mechanics
Mechanics
Materials science
Document 4:::
Transport Phenomena is the first textbook about transport phenomena. It is specifically designed for chemical engineering students. The first edition was published in 1960, two years after having been preliminarily published under the title Notes on Transport Phenomena based on mimeographed notes prepared for a chemical engineering course taught at the University of Wisconsin–Madison during the academic year 1957-1958. The second edition was published in August 2001. A revised second edition was published in 2007. This text is often known simply as BSL after its authors' initials.
History
As the chemical engineering profession developed in the first half of the 20th century, the concept of "unit operations" arose as being needed in the education of undergraduate chemical engineers. The theories of mass, momentum and energy transfer were being taught at that time only to the extent necessary for a narrow range of applications. As chemical engineers began moving into a number of new areas, problem definitions and solutions required a deeper knowledge of the fundamentals of transport phenomena than those provided in the textbooks then available on unit operations.
In the 1950s, R. Byron Bird, Warren E. Stewart and Edwin N. Lightfoot stepped forward to develop an undergraduate course at the University of Wisconsin–Madison to integrate the teaching of fluid flow, heat transfer, and diffusion. From this beginning, they prepared their landmark textbook Transport Phenomena.
Subjects covered in the book
The book is divided into three basic sections, named Momentum Transport, Energy Transport and Mass Transport:
Momentum Transport
Viscosity and the Mechanisms of Momentum Transport
Momentum Balances and Velocity Distributions in Laminar Flow
The Equations of Change for Isothermal Systems
Velocity Distributions in Turbulent Flow
Interphase Transport in Isothermal Systems
Macroscopic Balances for Isothermal Flow Systems
Energy Transport
Thermal Conductivity and the Me
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Pressure and ________ are directly proportional at a constant volume?
A. precipitation
B. temperature
C. speed
D. heating
Answer:
|
|
sciq-6595
|
multiple_choice
|
What is the term for when gravity pulls soil, mud, and rocks down cliffs and hillsides?
|
[
"mass movement",
"mass pressure",
"avalanche",
"mass momentum"
] |
A
|
Relevant Documents:
Document 0:::
Mass wasting, also known as mass movement, is a general term for the movement of rock or soil down slopes under the force of gravity. It differs from other processes of erosion in that the debris transported by mass wasting is not entrained in a moving medium, such as water, wind, or ice. Types of mass wasting include creep, solifluction, rockfalls, debris flows, and landslides, each with its own characteristic features, and taking place over timescales from seconds to hundreds of years. Mass wasting occurs on both terrestrial and submarine slopes, and has been observed on Earth, Mars, Venus, Jupiter's moon Io, and on many other bodies in the Solar System.
Subsidence is sometimes regarded as a form of mass wasting. A distinction is then made between mass wasting by subsidence, which involves little horizontal movement, and mass wasting by slope movement.
Rapid mass wasting events, such as landslides, can be deadly and destructive. More gradual mass wasting, such as soil creep, poses challenges to civil engineering, as creep can deform roadways and structures and break pipelines. Mitigation methods include slope stabilization, construction of walls, catchment dams, or other structures to contain rockfall or debris flows, afforestation, or improved drainage of source areas.
Types
Mass wasting is a general term for any process of erosion that is driven by gravity and in which the transported soil and rock is not entrained in a moving medium, such as water, wind, or ice. The presence of water usually aids mass wasting, but the water is not abundant enough to be regarded as a transporting medium. Thus, the distinction between mass wasting and stream erosion lies between a mudflow (mass wasting) and a very muddy stream (stream erosion), without a sharp dividing line. Many forms of mass wasting are recognized, each with its own characteristic features, and taking place over timescales from seconds to hundreds of years.
Based on how the soil, regolith or rock moves dow
Document 1:::
Sediment transport is the movement of solid particles (sediment), typically due to a combination of gravity acting on the sediment, and the movement of the fluid in which the sediment is entrained. Sediment transport occurs in natural systems where the particles are clastic rocks (sand, gravel, boulders, etc.), mud, or clay; the fluid is air, water, or ice; and the force of gravity acts to move the particles along the sloping surface on which they are resting. Sediment transport due to fluid motion occurs in rivers, oceans, lakes, seas, and other bodies of water due to currents and tides. Transport is also caused by glaciers as they flow, and on terrestrial surfaces under the influence of wind. Sediment transport due only to gravity can occur on sloping surfaces in general, including hillslopes, scarps, cliffs, and the continental shelf—continental slope boundary.
Sediment transport is important in the fields of sedimentary geology, geomorphology, civil engineering, hydraulic engineering and environmental engineering (see applications, below). Knowledge of sediment transport is most often used to determine whether erosion or deposition will occur, the magnitude of this erosion or deposition, and the time and distance over which it will occur.
Mechanisms
Aeolian
Aeolian or eolian (depending on the parsing of æ) is the term for sediment transport by wind. This process results in the formation of ripples and sand dunes. Typically, the size of the transported sediment is fine sand (<1 mm) and smaller, because air is a fluid with low density and viscosity, and can therefore not exert very much shear on its bed.
Bedforms are generated by aeolian sediment transport in the terrestrial near-surface environment. Ripples and dunes form as a natural self-organizing response to sediment transport.
Aeolian sediment transport is common on beaches and in the arid regions of the world, because it is in these environments that vegetation does not prevent the presence and motion
Document 2:::
The Géotechnique lecture is a biennial lecture on the topic of soil mechanics, organised by the British Geotechnical Association and named after its major scientific journal, Géotechnique.
This should not be confused with the annual BGA Rankine Lecture.
List of Géotechnique Lecturers
See also
Named lectures
Rankine Lecture
Terzaghi Lecture
External links
ICE Géotechnique journal
British Geotechnical Association
Document 3:::
Terradynamics is the study of forces and movement during terrestrial locomotion (particularly that using legs) on ground that can flow such as sand and soil. The term "terradynamics" is used in analogy to aerodynamics for flying in the air and hydrodynamics for swimming in water. Terradynamics has been used "to predict a small legged robot’s locomotion on granular media".
Document 4:::
Wave equation analysis is a numerical method of analysis for the behavior of driven foundation piles. It predicts the pile capacity versus blow count relationship (bearing graph) and pile driving stress. The model mathematically represents the pile driving hammer and all its accessories (ram, cap, and cap block), as well as the pile, as a series of lumped masses and springs in a one-dimensional analysis. The soil response for each pile segment is modeled as viscoelastic-plastic. The method was first developed in the 1950s by E.A. Smith of the Raymond Pile Driving Company.
Wave equation analysis of piles has seen many improvements since the 1950s such as including a thermodynamic diesel hammer model and residual stress. Commercial software packages (such as AllWave-PDP and GRLWEAP) are now available to perform the analysis.
One of the principal uses of this method is the performance of a driveability analysis to select the parameters for safe pile installation, including recommendations on cushion stiffness, hammer stroke and other driving system parameters that optimize blow counts and pile stresses during pile driving. For example, when a soft or hard layer causes excessive stresses or unacceptable blow counts.
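In the spirit of Smith's lumped-mass idealization, here is a heavily simplified sketch (all parameter values invented; a real driveability analysis uses calibrated hammer, cushion, and soil models): two pile segments as masses joined by a spring, with an elastic-perfectly-plastic soil resistance at the toe, integrated explicitly in time:

    # Toy Smith-type wave equation model, illustrative only.
    m0, m1 = 500.0, 500.0        # segment masses (kg)
    k_pile = 5.0e7               # pile-segment spring stiffness (N/m)
    k_soil, R_u = 2.0e7, 2.0e5   # toe soil stiffness (N/m) and capacity (N)
    v0, v1 = 4.0, 0.0            # velocities (m/s); segment 0 just struck by ram
    u0, u1 = 0.0, 0.0            # displacements (m)
    plastic = 0.0                # accumulated plastic toe set (m)
    dt = 2.0e-5                  # time step (s)

    for _ in range(2000):
        f_spring = k_pile * (u0 - u1)      # force between the two segments
        f_soil = k_soil * (u1 - plastic)   # elastic toe resistance
        if f_soil > R_u:                   # soil yields: cap force, advance set
            plastic = u1 - R_u / k_soil
            f_soil = R_u
        a0 = -f_spring / m0
        a1 = (f_spring - max(f_soil, 0.0)) / m1  # no tension from the soil
        v0 += a0 * dt
        v1 += a1 * dt
        u0 += v0 * dt
        u1 += v1 * dt

    print(f"permanent set per blow ~ {plastic * 1000:.1f} mm")

The permanent set per blow is the quantity a bearing graph relates to pile capacity; viscous (damping) soil resistance, omitted here for brevity, is essential in a real Smith analysis.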
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for when gravity pulls soil, mud, and rocks down cliffs and hillsides?
A. mass movement
B. mass pressure
C. avalanche
D. mass momentum
Answer:
|
|
sciq-9340
|
multiple_choice
|
What, now former, planet is small, icy, and rocky?
|
[
"jupiter",
"pluto",
"mercury",
"neptune"
] |
B
|
Relevant Documents:
Document 0:::
This is a list of potentially habitable exoplanets. The list is mostly based on estimates of habitability by the Habitable Exoplanets Catalog (HEC), and data from the NASA Exoplanet Archive. The HEC is maintained by the Planetary Habitability Laboratory at the University of Puerto Rico at Arecibo. There is also a speculative list being developed of superhabitable planets.
Surface planetary habitability is thought to require orbiting at the right distance from the host star for liquid surface water to be present, in addition to various geophysical and geodynamical aspects, atmospheric density, radiation type and intensity, and the host star's plasma environment.
List
This is a list of exoplanets within the circumstellar habitable zone that are under 10 Earth masses and smaller than 2.5 Earth radii, and thus have a chance of being rocky. Note that inclusion on this list does not guarantee habitability, and in particular the larger planets are unlikely to have a rocky composition. Earth is included for comparison.
Note that mass and radius values prefixed with "~" have not been measured, but are estimated from a mass-radius relationship.
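The list's inclusion cuts can be expressed as a simple filter; the sketch below uses placeholder values, not actual catalog data, and keeps candidates under 10 Earth masses and 2.5 Earth radii:

    # (name, mass in Earth masses, radius in Earth radii) -- placeholders only.
    candidates = [
        ("planet-a", 1.3, 1.1),
        ("planet-b", 8.0, 2.1),
        ("planet-c", 14.0, 3.9),  # too massive and too large: excluded
    ]

    possibly_rocky = [
        name for name, mass, radius in candidates
        if mass < 10.0 and radius < 2.5
    ]
    print(possibly_rocky)  # ['planet-a', 'planet-b']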
Previous candidates
Some exoplanet candidates detected by radial velocity that were originally thought to be potentially habitable were later found to most likely be artifacts of stellar activity. These include Gliese 581 d & g, Gliese 667 Ce & f, Gliese 682 b & c, Kapteyn b, and Gliese 832 c.
HD 85512 b was initially estimated to be potentially habitable, but updated models for the boundaries of the habitable zone placed the planet interior to the HZ, and it is now considered non-habitable. Kepler-69c has gone through a similar process; though initially estimated to be potentially habitable, it was quickly realized that the planet is more likely to be similar to Venus, and is thus no longer considered habitable. Several other planets, such as Gliese 180 b, also appear to be examples of planets once considered potentially habit
Document 1:::
This article is a list of notable unsolved problems in astronomy. Some of these problems are theoretical, meaning that existing theories may be incapable of explaining certain observed phenomena or experimental results. Others are experimental, meaning that experiments necessary to test proposed theory or investigate a phenomenon in greater detail have not yet been performed. Some pertain to unique events or occurrences that have not repeated themselves and whose causes remain unclear.
Planetary astronomy
Our solar system
Orbiting bodies and rotation:
Are there any non-dwarf planets beyond Neptune?
Why do extreme trans-Neptunian objects have elongated orbits?
Rotation rate of Saturn:
Why does the magnetosphere of Saturn rotate at a rate close to that at which the planet's clouds rotate?
What is the rotation rate of Saturn's deep interior?
Satellite geomorphology:
What is the origin of the chain of high mountains that closely follows the equator of Saturn's moon, Iapetus?
Are the mountains the remnant of hot and fast-rotating young Iapetus?
Are the mountains the result of material (either from the rings of Saturn or its own ring) that over time collected upon the surface?
Extra-solar
How common are Solar System-like planetary systems? Some observed planetary systems contain Super-Earths and Hot Jupiters that orbit very close to their stars. Systems with Jupiter-like planets in Jupiter-like orbits appear to be rare. There are several possible explanations for why Jupiter-like orbits are rare, including that the data are still lacking, or the grand tack hypothesis.
Stellar astronomy and astrophysics
Solar cycle:
How does the Sun generate its periodically reversing large-scale magnetic field?
How do other Sol-like stars generate their magnetic fields, and what are the similarities and differences between stellar activity cycles and that of the Sun?
What caused the Maunder Minimum and other grand minima, and how does the solar cycle recover from a minimum state?
Coronal heat
Document 2:::
Planetary oceanography also called astro-oceanography or exo-oceanography is the study of oceans on planets and moons other than Earth. Unlike other planetary sciences like astrobiology, astrochemistry and planetary geology, it only began after the discovery of underground oceans in Saturn's moon Titan and Jupiter's moon Europa. This field remains speculative until further missions reach the oceans beneath the rock or ice layer of the moons. There are many theories about oceans or even ocean worlds of celestial bodies in the Solar System, from oceans made of diamond in Neptune to a gigantic ocean of liquid hydrogen that may exist underneath Jupiter's surface.
Early in their geologic histories, Mars and Venus are theorized to have had large water oceans. The Mars ocean hypothesis suggests that nearly a third of the surface of Mars was once covered by water, and a runaway greenhouse effect may have boiled away the global ocean of Venus. Compounds such as salts and ammonia dissolved in water lower its freezing point so that water might exist in large quantities in extraterrestrial environments as brine or convecting ice. Unconfirmed oceans are speculated beneath the surface of many dwarf planets and natural satellites; notably, the ocean of the moon Europa is estimated to have over twice the water volume of Earth's. The Solar System's giant planets are also thought to have liquid atmospheric layers of yet to be confirmed compositions. Oceans may also exist on exoplanets and exomoons, including surface oceans of liquid water within a circumstellar habitable zone. Ocean planets are a hypothetical type of planet with a surface completely covered with liquid.
Extraterrestrial oceans may be composed of water or other elements and compounds. The only confirmed large stable bodies of extraterrestrial surface liquids are the lakes of Titan, which are made of hydrocarbons instead of water. However, there is strong evidence for subsurface water oceans' existence elsewhere in t
Document 3:::
A dwarf planet is a small planetary-mass object that is in direct orbit of the Sun, smaller than any of the eight classical planets. The prototypical dwarf planet is Pluto. The interest of dwarf planets to planetary geologists is that they may be geologically active bodies, an expectation that was borne out in 2015 by the Dawn mission to Ceres and the New Horizons mission to Pluto.
Astronomers are in general agreement that at least the nine largest candidates are dwarf planets – in rough order of size, Pluto, Eris, Haumea, Makemake, Gonggong, Quaoar, Sedna, Ceres, and Orcus – although there is some doubt for Orcus. Of these nine plus the tenth-largest candidate Salacia, two have been visited by spacecraft (Pluto and Ceres) and seven others have at least one known moon (Eris, Haumea, Makemake, Gonggong, Quaoar, Orcus, and Salacia), which allows their masses and thus an estimate of their densities to be determined. Mass and density in turn can be fit into geophysical models in an attempt to determine the nature of these worlds. Only one, Sedna, has neither been visited by spacecraft nor has any known moon, making an accurate estimate of its mass difficult. Some astronomers include many smaller bodies as well, but there is no consensus that these are likely to be dwarf planets.
The term dwarf planet was coined by planetary scientist Alan Stern as part of a three-way categorization of planetary-mass objects in the Solar System: classical planets, dwarf planets, and satellite planets. Dwarf planets were thus conceived of as a category of planet. In 2006, however, the concept was adopted by the International Astronomical Union (IAU) as a category of sub-planetary objects, part of a three-way recategorization of bodies orbiting the Sun: planets, dwarf planets and small Solar System bodies. Thus Stern and other planetary geologists consider dwarf planets and large satellites to be planets, but since 2006, the IAU and perhaps the majority of astronomers have excluded them from the roster of planets.
History of the concept
Starting in 1801, astronom
Document 4:::
This is a list of most likely gravitationally rounded objects of the Solar System, which are objects that have a rounded, ellipsoidal shape due to their own gravity (but are not necessarily in hydrostatic equilibrium). Apart from the Sun itself, these objects qualify as planets according to common geophysical definitions of that term. The sizes of these objects range over three orders of magnitude in radius, from planetary-mass objects like dwarf planets and some moons to the planets and the Sun. This list does not include small Solar System bodies, but it does include a sample of possible planetary-mass objects whose shapes have yet to be determined. The Sun's orbital characteristics are listed in relation to the Galactic Center, while all other objects are listed in order of their distance from the Sun.
Star
The Sun is a G-type main-sequence star. It contains almost 99.9% of all the mass in the Solar System.
Planets
In 2006, the International Astronomical Union (IAU) defined a planet as a body in orbit around the Sun that was large enough to have achieved hydrostatic equilibrium and to have "cleared the neighbourhood around its orbit". The practical meaning of "cleared the neighborhood" is that a planet is comparatively massive enough for its gravitation to control the orbits of all objects in its vicinity. In practice, the term "hydrostatic equilibrium" is interpreted loosely. Mercury is round but not actually in hydrostatic equilibrium, but it is universally regarded as a planet nonetheless.
According to the IAU's explicit count, there are eight planets in the Solar System; four terrestrial planets (Mercury, Venus, Earth, and Mars) and four giant planets, which can be divided further into two gas giants (Jupiter and Saturn) and two ice giants (Uranus and Neptune). When excluding the Sun, the four giant planets account for more than 99% of the mass of the Solar System.
Dwarf planets
Dwarf planets are bodies orbiting the Sun that are massive and warm eno
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What, now former, planet is small, icy, and rocky?
A. jupiter
B. pluto
C. mercury
D. neptune
Answer:
|
|
sciq-6005
|
multiple_choice
|
What particles allow fungi to reproduce through unfavorable conditions?
|
[
"atoms",
"bosons",
"spores",
"quarks"
] |
C
|
Relevant Documents:
Document 0:::
Fungal Genetics and Biology is a peer-reviewed scientific journal established in 1977 as Experimental Mycology, obtaining its current title in 1996. It covers experimental investigations of fungi and their traditional allies that relate structure and function to growth, reproduction, morphogenesis, and differentiation.
External links
Elsevier academic journals
English-language journals
Monthly journals
Mycology journals
Academic journals established in 1977
Document 1:::
Mold control and prevention is a conservation activity that is performed in libraries and archives to protect books, documents and other materials from deterioration caused by mold growth. Mold prevention consists of different methods, such as chemical treatments, careful environmental control, and manual cleaning. Preservationists use one or a combination of these methods to combat mold spores in library and archival collections.
Due to the resilient nature of mold and its potential for damage to library collections, mold prevention has become an important activity among preservation librarians. Although mold is naturally present in both indoor and outdoor environments, under the right circumstances it can become active after being in a dormant state. Mold growth responds to increased moisture, high humidity, and warm temperatures. Library collections are particularly vulnerable to mold since mold thrives on organic, cellulose-based materials such as paper, wood, and textiles made of natural fibers. Changes in the moisture in the atmosphere can lead to mold growth and irreparable damage to library collections.
Mold
Mold is a generic term for many types of fungi. Mildew may also refer to types of mold. Since there are so many species of mold, their appearance varies in color and growth habit. In general, active mold has a musty odor and appears fuzzy, slimy, or damp. Inactive mold looks dry and powdery.
Mold propagates via spores, which are always present in the environment. Mold spores can be transferred to an object by mechanical instruments or air circulation. When spores attach to another organism and the environment is favorable, they begin to germinate. Mold produces mycelium, whose growth pattern resembles cobwebs. Mycelium allows the mold to obtain food and nutrients through the host. Inevitably, the mycelium produces spore sacs and releases new spores into the air. Eventually the spores land on new material, and the reproductive cycle begins aga
Document 2:::
Conidial anastomosis tubes (CATs) are cells formed from the conidia (a type of fungal asexual spores) of many filamentous fungi. These cells have a tubular shape and form an anastomosis (bridge) that allows fusion between conidia.
CATs and germ tubes (germination tubes) are some of the specialized hyphae (long cells formed by filamentous fungal species) that are formed by fungal conidia. CATs are morphologically and physiologically distinct from germ tubes and are under separate genetic control.
Germ tubes, produced during conidial germination, are different from CATs because: CATs are thinner, shorter, lack branches, exhibit determinate growth, and home toward each other.
CAT biology is not completely understood. Initially, conidia are induced to form CATs. Once they are formed, they grow homing toward each other, and eventually they fuse. Once fusion occurs, the respective nuclei can pass through the fused CATs from one conidium to the other. These are events of fungal vegetative growth (asexual reproduction) and not sexual reproduction. Parts of the CAT fusion (cell fusion) process have been shown to be a coordinated behaviour.
The filamentous fungus Neurospora crassa (a bread mould and fungal model organism) produces CATs from conidia and conidial germ tubes. In contrast, the fungal plant pathogen, Colletotrichum lindemuthianum, only produces CATs from conidia and not from germ tubes.
Fusion between these cells seems to be important for some fungi during early stages of colony establishment. The production of these cells has been suggested to occur in 73 different species of fungi.
Document 3:::
The Fungus Federation of Santa Cruz (FFSC) is a North American mycological club that evolved as a result of David Arora’s mushroom classes and early Fungus Fairs in the Santa Cruz, California area in the 1970s.
Mission
The mission of the Fungus Federation of Santa Cruz is "to foster and expand, through education and by example, the understanding and appreciation of mycology and to assist the general public and related institutions or groups to further this goal".
Organization
FFSC was incorporated as a 501(c)(3) non-profit organization in 1984.
Activities
There are many facets to the FFSC, with something to interest everyone. One of the FFSC's largest public and most popular events is an annual Fungus Fair. Members and non-members get together for local and long-distance forays, fun foodie events, meetings, and educational events. The FFSC also provides grants to mycology students and identification services to local hospitals.
The FFSC is currently embarking upon a project to fund DNA sequencing of herbarium specimens at the University of California, Santa Cruz. This initiative is part of the greater North American Mycoflora Project, a joint venture of the Mycological Society of America and the North American Mycological Association. Their motto: “Without a sequenced specimen, it’s a rumor”.
Membership
Membership is open to anyone who is interested in fungi. There is a small yearly membership fee, which is discounted for existing members. More information about Fungus Federation of Santa Cruz membership can be found on the FFSC Members Page.
Document 4:::
The life stage at which a fungus lives, grows, and develops, gathering nutrients and energy. The fungus uses this stage to proliferate itself through asexually created mitotic spores. The cycle runs through somatic hyphae, zoosporangia, zoospores, encystation and germination, and back to somatic hyphae.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What particles allow fungi to reproduce through unfavorable conditions?
A. atoms
B. bosons
C. spores
D. quarks
Answer:
|
|
sciq-5341
|
multiple_choice
|
The pituitary gland is called the “master gland” of what system?
|
[
"endocrine",
"hormonal",
"digestive",
"nervous"
] |
A
|
Relevant Documents:
Document 0:::
Hypothalamic-pituitary axis
Hypothalamus
Pineal body (epiphysis)
Pituitary gland (hypophysis)
The pituitary gland (or hypophysis) is an endocrine gland about the size of a pea, weighing about 0.5 grams in humans. It is a protrusion off the bottom of the hypothalamus at the base of the brain, and rests in a small, bony cavity (sella turcica) covered by a dural fold (diaphragma sellae). The pituitary is functionally connected to the hypothalamus by the median eminence via a small tube called the infundibular stem or pituitary stalk. The anterior pituitary (adenohypophysis) is connected to the hypothalamus via the hypothalamo–hypophyseal portal vessels, which allows for quicker and more efficient communication between the hypothalamus and the pituitary.
Anterior pituitary lobe (adenohypophysis)
Posterior pituitary lobe (neurohypophysis)
Oxytocin and anti-diuretic hormone are not produced in the posterior lobe, merely stored there and released into the circulation.
Thyroid
Digestive system
Stomach
Duodenum (small intestine)
Liver
Pancreas
The pancreas is a heterocrine gland as it functions both as an endocrine and as an exocrine gland.
Kidney
Adrenal glands
Adrenal cortex
Adrenal medulla
Reproductive
Testes
Ovarian follicle and corpus luteum
Placenta (when pregnant)
Uterus (when pregnant)
Calcium regulation
Parathyroid
Skin
Other
Heart
Bone
Skeletal muscle
In 1998, skeletal muscle was identified as an endocrine organ due to its now well-established role in the secretion of myokines. The use of the term myokine to describe cytokines and other peptides produced by muscle as signalling molecules was proposed in 2003.
Adipose tissue
Signalling molecules released by adipose tissue are referred to as adipokines.
Document 1:::
Sudomotor function refers to the autonomic nervous system control of sweat gland activity in response to various environmental and individual factors. Sweat production is a vital thermoregulatory mechanism used by the body to prevent heat-related illness as the evaporation of sweat is the body’s most effective method of heat reduction and the only cooling method available when the air temperature rises above skin temperature. In addition, sweat plays key roles in grip, microbial defense, and wound healing.
Physiology
Human sweat glands are primarily classified as either eccrine or apocrine glands. Eccrine glands open directly onto the surface of the skin, while apocrine glands open into hair follicles. Eccrine glands are the predominant sweat gland in the human body with numbers totaling up to 4 million. They are located within the reticular dermal layer of the skin and distributed across nearly the entire surface of the body with the largest numbers occurring in the palms and soles.
Eccrine sweat is secreted in response to both emotional and thermal stimulation. Eccrine glands are primarily innervated by small-diameter, unmyelinated class C-fibers from postganglionic sympathetic cholinergic neurons. Increases in body and skin temperature are detected by visceral and peripheral thermoreceptors, which send signals via class C and Aδ-fiber afferent somatic neurons through the lateral spinothalamic tract to the preoptic nucleus of the hypothalamus for processing. In addition, there are warm-sensitive neurons located within the preoptic nucleus that detect increases in core body temperature. Efferent pathways then descend ipsilaterally from the hypothalamus through the pons and medulla to preganglionic sympathetic cholinergic neurons in the intermediolateral column of the spinal cord. The preganglionic neurons synapse with postganglionic cholinergic sudomotor (and to a lesser extent adrenergic) neurons in the paravertebral sympathetic ganglia. When the action potentia
Document 2:::
The pineal gland (also known as the pineal body, conarium, or epiphysis cerebri) is a small endocrine gland in the brain of most vertebrates. The pineal gland produces melatonin, a serotonin-derived hormone which modulates sleep patterns in both circadian and seasonal cycles. The shape of the gland resembles a pine cone, which gives it its name. The pineal gland is located in the epithalamus, near the center of the brain, between the two hemispheres, tucked in a groove where the two halves of the thalamus join. It is one of the neuroendocrine secretory circumventricular organs in which capillaries are mostly permeable to solutes in the blood.
The pineal gland is present in almost all vertebrates, but is absent in protochordates in which there is a simple pineal homologue. The hagfish, considered as a primitive vertebrate, has a rudimentary structure regarded as the "pineal equivalent" in the dorsal diencephalon. In some species of amphibians and reptiles, the gland is linked to a light-sensing organ, variously called the parietal eye, the pineal eye or the third eye. Reconstruction of the biological evolution pattern suggests that the pineal gland was originally a kind of atrophied photoreceptor that developed into a neuroendocrine organ.
Ancient Greeks were the first to notice the pineal gland and believed it to be a valve, a guardian for the flow of pneuma. Galen in the 2nd century C.E. could not find any functional role and regarded the gland as a structural support for the brain tissue. He gave the name konario, meaning cone or pinecone, which during Renaissance was translated to Latin as pinealis. In the 17th century, René Descartes revived the mystical purpose and described the gland as the "principal seat of the soul". In the mid-20th century, the real biological role as a neuroendocrine organ was established.
Etymology
The word pineal, from Latin pinea (pine-cone), was first used in the late 17th century to refer to the cone shape of the brain gland.
Structure
Document 3:::
Uterine glands or endometrial glands are tubular glands, lined by a simple columnar epithelium, found in the functional layer of the endometrium that lines the uterus. Their appearance varies during the menstrual cycle. During the proliferative phase, uterine glands appear long due to estrogen secretion by the ovaries. During the secretory phase, the uterine glands become very coiled with wide lumens and produce a glycogen-rich secretion known as histotroph or uterine milk. This change corresponds with an increase in blood flow to spiral arteries due to increased progesterone secretion from the corpus luteum. During the pre-menstrual phase, progesterone secretion decreases as the corpus luteum degenerates, which results in decreased blood flow to the spiral arteries. The functional layer of the uterus containing the glands becomes necrotic, and eventually sloughs off during the menstrual phase of the cycle.
They are of small size in the unimpregnated uterus, but shortly after impregnation become enlarged and elongated, presenting a contorted or waved appearance.
Function
Hormones produced in early pregnancy stimulate the uterine glands to secrete a number of substances to give nutrition and protection to the embryo and fetus, and the fetal membranes. These secretions are known as histiotroph, alternatively histotroph, and also as uterine milk. Important uterine milk proteins are glycodelin-A and osteopontin.
Some secretory components from the uterine glands are taken up by the secondary yolk sac lining the exocoelomic cavity during pregnancy, and may thereby assist in providing fetal nutrition.
Additional images
Document 4:::
Heterocrine glands (or composite glands) are glands which function as both exocrine and endocrine glands. These glands exhibit a unique and diverse secretory function, releasing proteins and non-proteinaceous compounds as endocrine secretions into the bloodstream and as exocrine secretions into ducts, thereby bridging the realms of internal and external communication within the body. This duality allows them to serve crucial roles in regulating various physiological processes and maintaining homeostasis. Heterocrine glands include the gonads (testes and ovaries), the pancreas, and the salivary glands.
The pancreas releases digestive enzymes into the small intestine via ducts (exocrine) and secretes insulin and glucagon into the bloodstream (endocrine) to regulate blood sugar levels. The testes produce sperm, which is released through ducts (exocrine), and also secrete testosterone into the bloodstream (endocrine). Similarly, the ovaries release ova through ducts (exocrine) and produce estrogen and progesterone (endocrine). The salivary glands secrete saliva through ducts to aid in digestion (exocrine) and produce epidermal growth factor and insulin-like growth factor (endocrine).
Anatomy
Heterocrine glands typically have a complex structure that enables them to produce and release different types of secretions. The two primary components of these glands are:
Endocrine component: Heterocrine glands produce hormones, which are chemical messengers that travel through the bloodstream to target organs or tissues. These hormones play a vital role in regulating numerous physiological processes, such as metabolism, growth, and the immune response.
Exocrine component: In addition to their endocrine function, heterocrine glands secrete substances directly into ducts or cavities, which can be released through various body openings. These exocrine secretions can include enzymes, mucus, and other substances that aid in digestion, lubrication, or protection.
Characteristics and Functions
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The pituitary gland is called the “master gland” of what system?
A. endocrine
B. hormonal
C. digestive
D. nervous
Answer:
|
|
sciq-10745
|
multiple_choice
|
Which period after birth has the most rapid growth?
|
[
"early childhood",
"adolescence",
"infancy",
"middle childhood"
] |
C
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 1:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 2:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 3:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities such as Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 4:::
Progress tests are longitudinal, feedback-oriented educational assessment tools for the evaluation of development and sustainability of cognitive knowledge during a learning process. A progress test is a written knowledge exam (usually involving multiple choice questions) that is usually administered to all students in a program at the same time and at regular intervals (usually twice to four times yearly) throughout the entire academic program. The test samples the complete knowledge domain expected of new graduates upon completion of their courses, regardless of the year level of the student. The differences between students’ knowledge levels show in the test scores; the further a student has progressed in the curriculum the higher the scores. As a result, these scores provide a longitudinal, repeated measures, curriculum-independent assessment of the objectives (in knowledge) of the entire programme.
History
Since its inception in the late 1970s at both Maastricht University and the University of Missouri–Kansas City independently, the progress test of applied knowledge has been increasingly used in medical and health sciences programs across the globe. Progress tests are well established in both undergraduate and postgraduate medical education, and are used both formatively and summatively.
Use in academic programs
The progress test is currently used by national progress test consortia in the United Kingdom, Italy, The Netherlands, in Germany (including Austria), and in individual schools in Africa, Saudi Arabia, South East Asia, the Caribbean, Australia, New Zealand, Sweden, Finland, UK, and the USA. The National Board of Medical Examiners in the USA also provides progress testing in various countries. The feasibility of an international approach to progress testing has been recently acknowledged and was first demonstrated by Albano et al. in 1996, who compared test scores across German, Dutch and Italian medi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which period after birth has the most rapid growth?
A. early childhood
B. adolescence
C. infancy
D. middle childhood
Answer:
|
|
sciq-4555
|
multiple_choice
|
When a metal is oxidized and a nonmetal is reduced in a redox reaction, what is the resulting compound called?
|
[
"magnetic compound",
"ionic compound",
"soluble compound",
"alloy"
] |
B
|
Relevant Documents:
Document 0:::
In situ chemical reduction (ISCR) is a type of environmental remediation technique used for soil and/or groundwater remediation to reduce the concentrations of targeted environmental contaminants to acceptable levels. It is the mirror process of In Situ Chemical Oxidation (ISCO). ISCR is usually applied in the environment by injecting chemically reductive additives in liquid form into the contaminated area or placing a solid medium of chemical reductants in the path of a contaminant plume. It can be used to remediate a variety of organic compounds, including some that are resistant to natural degradation.
The in situ in ISCR is just Latin for "in place", signifying that ISCR is a chemical reduction reaction that occurs at the site of the contamination. Like ISCO, it is able to decontaminate many compounds, and, in theory, ISCR could be more effective in ground water remediation than ISCO.
Chemical reduction is one half of a redox reaction, which results in the gain of electrons. One of the reactants in the reaction becomes oxidized, or loses electrons, while the other reactant becomes reduced, or gains electrons. In ISCR, reducing compounds, compounds that accept electrons given by other compounds in a reaction, are used to change the contaminants into harmless compounds.
History
Early work examined the dechlorinations with copper. Substrates included DDT, endrin, chloroform, and hexachlorocyclopentadiene. Aluminum and magnesium behave similarly in the laboratory. Ground water treatment most generally focuses on the use of iron.
Reductants
Zero-valent metals (ZVMs)
Zero-valent metals are the main reductants used in ISCR. The most common metal used is iron, in the form of ZVI (zero-valent iron), and it is also the metal longest in use. However, some studies show that zero-valent zinc (ZVZ) could be up to ten times more effective at eradicating the contaminants than ZVI. Some applications of ZVMs are to clean up trichloroethylene (TCE) and hexavalent chromium
Document 1:::
With Sn2+ ions, N2O is formed:
2 KNO2 + 6 HCl + 2 SnCl2 → 2 SnCl4 + N2O + 3 H2O + 2 KCl
With SO2 gas, NH2OH is formed:
2 KNO2 + 6 H2O + 4 SO2 → 3 H2SO4 + K2SO4 + 2 NH2OH
With Zn in alkali solution, NH3 is formed:
5 H2O + KNO2 + 3 Zn → NH3 + KOH + 3 Zn(OH)2
With , both HN3
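A quick way to sanity-check redox equations such as the Sn2+ reaction above is to count atoms on each side; note the potassium in the KCl product requires the nitrite source to be potassium nitrite (KNO2). A minimal Python sketch (helper and variable names are illustrative), with formulas written as explicit atom-count dictionaries to avoid needing a formula parser:

from collections import Counter

def side_total(side):
    """Sum atom counts over (coefficient, atom-count dict) pairs."""
    total = Counter()
    for coeff, atoms in side:
        for element, n in atoms.items():
            total[element] += coeff * n
    return total

KNO2 = {"K": 1, "N": 1, "O": 2}
HCl = {"H": 1, "Cl": 1}
SnCl2 = {"Sn": 1, "Cl": 2}
SnCl4 = {"Sn": 1, "Cl": 4}
N2O = {"N": 2, "O": 1}
H2O = {"H": 2, "O": 1}
KCl = {"K": 1, "Cl": 1}

lhs = [(2, KNO2), (6, HCl), (2, SnCl2)]
rhs = [(2, SnCl4), (1, N2O), (3, H2O), (2, KCl)]
assert side_total(lhs) == side_total(rhs)   # the equation is balanced
print(dict(side_total(lhs)))                # e.g. {'K': 2, 'N': 2, 'O': 4, ...}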
Document 2:::
Iron(II) hydroxide or ferrous hydroxide is an inorganic compound with the formula Fe(OH)2. It is produced when iron(II) salts, from a compound such as iron(II) sulfate, are treated with hydroxide ions. Iron(II) hydroxide is a white solid, but even traces of oxygen impart a greenish tinge. The air-oxidised solid is sometimes known as "green rust".
Preparation and reactions
Iron(II) hydroxide is poorly soluble in water (1.43 × 10−3 g/L), or 1.59 × 10−5 mol/L. It precipitates from the reaction of iron(II) and hydroxide salts:
FeSO4 + 2 NaOH → Fe(OH)2 + Na2SO4
If the solution is not deoxygenated and the iron not totally reduced to Fe(II), the precipitate can vary in colour from green to reddish brown depending on the iron(III) content. Iron(II) ions are easily substituted by iron(III) ions produced by its progressive oxidation.
It is also easily formed as a by-product of other reactions, among others in the synthesis of siderite, an iron carbonate (FeCO3), if the crystal growth conditions are imperfectly controlled.
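The two solubility figures quoted above are consistent with each other, as a quick arithmetic sketch in Python shows (atomic masses rounded):

# Convert the quoted mass solubility of Fe(OH)2 (g/L) to molar solubility.
M_Fe, M_O, M_H = 55.845, 15.999, 1.008      # atomic masses, g/mol
molar_mass = M_Fe + 2 * (M_O + M_H)         # Fe(OH)2, about 89.86 g/mol
mass_solubility = 1.43e-3                   # g/L, from the text
print(mass_solubility / molar_mass)         # about 1.59e-5 mol/L, as quoted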
Structure
Fe(OH)2 is a layered double hydroxide (LDH), easily accommodating in its crystal lattice ferric ions (Fe3+) produced by oxidation of ferrous ions (Fe2+) by atmospheric oxygen (O2).
Related materials
Green rust is a recently discovered mineralogical form. All forms of green rust (including fougerite) are more complex and variable than the ideal iron(II) hydroxide compound.
Reactions
Under anaerobic conditions, the iron(II) hydroxide can be oxidised by the protons of water to form magnetite (iron(II,III) oxide) and molecular hydrogen.
This process is described by the Schikorr reaction:
3 Fe(OH)2 → Fe3O4 + H2 + 2 H2O
Anions such as selenite and selenate can be easily adsorbed on the positively charged surface of iron(II) hydroxide, where they are subsequently reduced by Fe2+. The resulting products are poorly soluble (Se0, FeSe, or FeSe2).
Natural occurrence
Document 3:::
In chemistry, a superoxide is a compound that contains the superoxide ion, which has the chemical formula O2−. The systematic name of the anion is dioxide(1−). The reactive oxygen ion superoxide is particularly important as the product of the one-electron reduction of dioxygen (O2), which occurs widely in nature. Molecular oxygen (dioxygen) is a diradical containing two unpaired electrons, and superoxide results from the addition of an electron which fills one of the two degenerate molecular orbitals, leaving a charged ionic species with a single unpaired electron and a net negative charge of −1. Both dioxygen and the superoxide anion are free radicals that exhibit paramagnetism. Superoxide was historically also known as "hyperoxide".
Salts
Superoxide forms salts with alkali metals and alkaline earth metals. The salts caesium superoxide (CsO2), rubidium superoxide (RbO2), potassium superoxide (KO2), and sodium superoxide (NaO2) are prepared by the reaction of O2 with the respective alkali metal.
The alkali salts of O2− are orange-yellow in color and quite stable, if they are kept dry. Upon dissolution of these salts in water, however, the dissolved O2− undergoes disproportionation (dismutation) extremely rapidly (in a pH-dependent manner):
2 O2− + H2O → 3/2 O2 + 2 OH−
This reaction (with moisture and carbon dioxide in exhaled air) is the basis of the use of potassium superoxide as an oxygen source in chemical oxygen generators, such as those used on the Space Shuttle and on submarines. Superoxides are also used in firefighters' oxygen tanks to provide a readily available source of oxygen. In this process, O2− acts as a Brønsted base, initially forming the hydroperoxyl radical (HO2).
The superoxide anion, O2−, and its protonated form, hydroperoxyl (HO2), are in equilibrium in an aqueous solution:
O2− + H2O ⇌ HO2 + OH−
Given that the hydroperoxyl radical has a pKa of around 4.8, superoxide predominantly exists in the anionic form at neutral pH.
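That pKa claim can be turned into numbers with the Henderson-Hasselbalch relation; a minimal Python sketch, assuming the pKa of about 4.8 quoted above:

def anionic_fraction(pH, pKa=4.8):
    """Fraction of superoxide present as O2− rather than HO2 at a given pH."""
    ratio = 10 ** (pH - pKa)     # [O2−] / [HO2], Henderson-Hasselbalch
    return ratio / (1 + ratio)

print(anionic_fraction(7.0))     # about 0.994, i.e. >99% anionic at neutral pH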
Potassium superoxide is soluble in dimethyl sulfoxide (
Document 4:::
Zinc oxide is an inorganic compound with the formula . It is a white powder that is insoluble in water. ZnO is used as an additive in numerous materials and products including cosmetics, food supplements, rubbers, plastics, ceramics, glass, cement, lubricants, paints, sunscreens, ointments, adhesives, sealants, pigments, foods, batteries, ferrites, fire retardants, semi conductors, and first-aid tapes. Although it occurs naturally as the mineral zincite, most zinc oxide is produced synthetically.
History
Zinc compounds were probably used by early humans, in processed and unprocessed forms, as a paint or medicinal ointment, but their composition is uncertain. The use of pushpanjan, probably zinc oxide, as a salve for eyes and open wounds, is mentioned in the Indian medical text the Charaka Samhita, thought to date from 500 BC or before. Zinc oxide ointment is also mentioned by the Greek physician Dioscorides (1st century AD). Galen suggested treating ulcerating cancers with zinc oxide, as did Avicenna in his The Canon of Medicine. It is used as an ingredient in products such as baby powder and creams against diaper rashes, calamine cream, anti-dandruff shampoos, and antiseptic ointments.
The Romans produced considerable quantities of brass (an alloy of zinc and copper) as early as 200 BC by a cementation process where copper was reacted with zinc oxide. The zinc oxide is thought to have been produced by heating zinc ore in a shaft furnace. This liberated metallic zinc as a vapor, which then ascended the flue and condensed as the oxide. This process was described by Dioscorides in the 1st century AD. Zinc oxide has also been recovered from zinc mines at Zawar in India, dating from the second half of the first millennium BC.
From the 12th to the 16th century zinc and zinc oxide were recognized and produced in India using a primitive form of the direct synthesis process. From India, zinc manufacture moved to China in the 17th century. In 1743, the first European zinc
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
When a metal is oxidized and a nonmetal is reduced in a redox reaction, what is the resulting compound called?
A. magnetic compound
B. ionic compound
C. soluble compound
D. alloy
Answer:
|
|
sciq-2835
|
multiple_choice
|
Wavelength and frequency are defined in the same way for electromagnetic waves as they are for which other waves?
|
[
"mechanical",
"light",
"sonar",
"gravitational"
] |
A
|
Relevant Documents:
Document 0:::
In the physical sciences, the wavenumber (or wave number), also known as repetency, is the spatial frequency of a wave, measured in cycles per unit distance (ordinary wavenumber) or radians per unit distance (angular wavenumber). It is analogous to temporal frequency, which is defined as the number of wave cycles per unit time (ordinary frequency) or radians per unit time (angular frequency).
In multidimensional systems, the wavenumber is the magnitude of the wave vector. The space of wave vectors is called reciprocal space. Wave numbers and wave vectors play an essential role in optics and the physics of wave scattering, such as X-ray diffraction, neutron diffraction, electron diffraction, and elementary particle physics. For quantum mechanical waves, the wavenumber multiplied by the reduced Planck's constant is the canonical momentum.
Wavenumber can be used to specify quantities other than spatial frequency. For example, in optical spectroscopy, it is often used as a unit of temporal frequency assuming a certain speed of light.
Definition
Wavenumber, as used in spectroscopy and most chemistry fields, is defined as the number of wavelengths per unit distance, typically centimeters (cm−1):
ν̃ = 1/λ
where λ is the wavelength. It is sometimes called the "spectroscopic wavenumber". It equals the spatial frequency.
For example, a wavenumber in inverse centimeters can be converted to a frequency in gigahertz by multiplying by 29.9792458 cm/ns (the speed of light, in centimeters per nanosecond); conversely, an electromagnetic wave at 29.9792458 GHz has a wavelength of 1 cm in free space.
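That worked conversion is a one-line computation; a minimal Python sketch (the function name is illustrative):

C_CM_PER_NS = 29.9792458                      # speed of light, cm per ns

def wavenumber_cm_to_ghz(wavenumber_per_cm):
    """Spectroscopic wavenumber (cm−1) to frequency (GHz)."""
    return wavenumber_per_cm * C_CM_PER_NS    # cycles per ns == GHz

print(wavenumber_cm_to_ghz(1.0))              # 29.9792458 GHz, as in the text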
In theoretical physics, a wave number, defined as the number of radians per unit distance, sometimes called "angular wavenumber", is more often used:
k = 2π/λ
When wavenumber is represented by the symbol ν̃, a frequency is still being represented, albeit indirectly. As described in the spectroscopy section, this is done through the relationship ν̃ = νs/c, where νs is a frequency in hertz. This is done for convenience.
Document 1:::
In mathematics, physics, and engineering, spatial frequency is a characteristic of any structure that is periodic across position in space. The spatial frequency is a measure of how often sinusoidal components (as determined by the Fourier transform) of the structure repeat per unit of distance.
The SI unit of spatial frequency is the reciprocal metre (m-1), although cycles per meter (c/m) is also common. In image-processing applications, spatial frequency is often expressed in units of cycles per millimeter (c/mm) or also line pairs per millimeter (LP/mm).
In wave propagation, the spatial frequency is also known as wavenumber. Ordinary wavenumber is defined as the reciprocal of wavelength λ and is commonly denoted by ξ or sometimes ν:
ξ = 1/λ
Angular wavenumber k, expressed in radians per metre (rad/m), is related to ordinary wavenumber and wavelength by
k = 2πξ = 2π/λ
Visual perception
In the study of visual perception, sinusoidal gratings are frequently used to probe the capabilities of the visual system, such as contrast sensitivity. In these stimuli, spatial frequency is expressed as the number of cycles per degree of visual angle. Sine-wave gratings also differ from one another in amplitude (the magnitude of difference in intensity between light and dark stripes), orientation, and phase.
Spatial-frequency theory
The spatial-frequency theory refers to the theory that the visual cortex operates on a code of spatial frequency, not on the code of straight edges and lines hypothesised by Hubel and Wiesel on the basis of early experiments on V1 neurons in the cat. In support of this theory is the experimental observation that the visual cortex neurons respond even more robustly to sine-wave gratings that are placed at specific angles in their receptive fields than they do to edges or bars. Most neurons in the primary visual cortex respond best when a sine-wave grating of a particular frequency is presented at a particular angle in a particular location in the visual field. (However, a
Document 2:::
In signal processing, the energy of a continuous-time signal x(t) is defined as the area under the squared magnitude of the considered signal, i.e., mathematically
E_s = ∫ |x(t)|² dt, with the integral taken over all time.
The unit of E_s will be (unit of signal)²·s.
And the energy of a discrete-time signal x(n) is defined mathematically as
E_s = Σ |x(n)|², with the sum taken over all n.
Relationship to energy in physics
Energy in this context is not, strictly speaking, the same as the conventional notion of energy in physics and the other sciences. The two concepts are, however, closely related, and it is possible to convert from one to the other:
E = E_s / Z
where Z represents the magnitude, in appropriate units of measure, of the load driven by the signal.
For example, if x(t) represents the potential (in volts) of an electrical signal propagating across a transmission line, then Z would represent the characteristic impedance (in ohms) of the transmission line. The units of measure for the signal energy would appear as volt²·seconds, which is not dimensionally correct for energy in the sense of the physical sciences. After dividing by Z, however, the dimensions of E would become volt²·seconds per ohm,
which is equivalent to joules, the SI unit for energy as defined in the physical sciences.
Spectral energy density
Similarly, the spectral energy density of signal x(t) is
E_s(f) = |X(f)|²
where X(f) is the Fourier transform of x(t).
For example, if x(t) represents the magnitude of the electric field component (in volts per meter) of an optical signal propagating through free space, then the dimensions of X(f) would become volt·seconds per meter and would represent the signal's spectral energy density (in volt²·second² per meter²) as a function of frequency f (in hertz). Again, these units of measure are not dimensionally correct in the true sense of energy density as defined in physics. Dividing by Z0, the characteristic impedance of free space (in ohms), the dimensions become joule·seconds per meter² or, equivalently, joules per meter² per hertz, which is dimensionally correct in SI
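A short numerical sketch (Python with NumPy; the sample rate and load impedance are assumed for illustration) that computes the energy of a sampled signal, its physical equivalent via E = E_s / Z, and a Parseval-style check of the spectral form:

import numpy as np

fs = 1000.0                               # sample rate in Hz (assumed)
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 50 * t)            # example signal, in "volts"

E_s = np.sum(np.abs(x) ** 2) / fs         # approximates ∫ |x(t)|² dt, volt²·s
Z = 50.0                                  # assumed load impedance, ohms
E_phys = E_s / Z                          # joules, via E = E_s / Z

X = np.fft.fft(x)
E_spec = np.sum(np.abs(X) ** 2) / (len(x) * fs)   # spectral route
assert np.isclose(E_s, E_spec)            # Parseval: both routes agree
print(E_s, E_phys)                        # about 0.5 volt²·s and 0.01 J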
Document 3:::
The mode of an electromagnetic system describes the field pattern of the propagating waves. Electromagnetic modes are analogous to the normal modes of vibration in other systems, such as mechanical systems.
Some of the classifications of electromagnetic modes include:
Free space modes
Plane waves, waves in which the electric and magnetic fields are both orthogonal to the direction of travel of the wave. These are the waves that exist in free space far from any antenna.
Modes in waveguides and transmission lines
Transverse modes, modes that have at least one of the electric field and magnetic field entirely in a transverse direction.
Transverse electromagnetic mode (TEM), as with a free space plane wave, both the electric field and magnetic field are entirely transverse.
Transverse electric (TE) modes, only the electric field is entirely transverse. Also notated as H modes to indicate there is a longitudinal magnetic component.
Transverse magnetic (TM) modes, only the magnetic field is entirely transverse. Also notated as E modes to indicate there is a longitudinal electric component.
Hybrid electromagnetic (HEM) modes, both the electric and magnetic fields have a component in the longitudinal direction. They can be analysed as a linear superposition of the corresponding TE and TM modes.
HE modes, hybrid modes in which the TE component dominates.
EH modes, hybrid modes in which the TM component dominates.
Longitudinal-section modes
Longitudinal-section electric (LSE) modes, hybrid modes in which the electric field in one of the transverse directions is zero
Longitudinal-section magnetic (LSM) modes, hybrid modes in which the magnetic field in one of the transverse directions is zero
Modes in other structures
Bloch modes, modes of Bloch waves; these occur in periodically repeating structures.
Mode names are sometimes prefixed with quasi-, meaning that the mode is not quite pure. For instance, quasi-TEM mode has a small component of longitudinal field.
Document 4:::
The transmission curve or transmission characteristic is the mathematical function or graph that describes the transmission fraction of an optical or electronic filter as a function of frequency or wavelength. It is an instance of a transfer function but, unlike the case of, for example, an amplifier, output never exceeds input (maximum transmission is 100%). The term is often used in commerce, science, and technology to characterise filters.
The term has also long been used in fields such as geophysics and astronomy to characterise the properties of regions through which radiation passes, such as the ionosphere.
See also
Electronic filter — examples of transmission characteristics of electronic filters
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Wavelength and frequency are defined in the same way for electromagnetic waves as they are for which other waves?
A. mechanical
B. light
C. sonar
D. gravitational
Answer:
|
|
scienceQA-5517
|
multiple_choice
|
What do these two changes have in common?
an iceberg melting slowly
carving a piece of wood
|
[
"Both are caused by cooling.",
"Both are chemical changes.",
"Both are caused by heating.",
"Both are only physical changes."
] |
D
|
Step 1: Think about each change.
An iceberg melting is a change of state. So, it is a physical change. An iceberg is made of frozen water. As it melts, the water changes from a solid to a liquid. But a different type of matter is not formed.
Carving a piece of wood is a physical change. The wood changes shape, but it is still made of the same type of matter.
Step 2: Look at each answer choice.
Both are only physical changes.
Both changes are physical changes. No new matter is created.
Both are chemical changes.
Both changes are physical changes. They are not chemical changes.
Both are caused by heating.
An iceberg melting is caused by heating. But carving a piece of wood is not.
Both are caused by cooling.
Neither change is caused by cooling.
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferromagnetic materials can become magnetic. The process is reve
Document 2:::
Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria.
Introduction
Traditional exam script marking began in Cambridge 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.)
Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental.
The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel
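Implementations of comparative judgement typically fit a latent quality scale to the pairwise outcomes, for example with a Bradley-Terry-type model. A minimal Python sketch of that idea (not the exact adaptive algorithm; the judgement data are hypothetical):

import math

judgements = [(0, 1), (0, 2), (1, 2), (0, 1), (2, 1)]   # (winner, loser) pairs
n_items = 3
theta = [0.0] * n_items                  # latent script qualities (logit scale)

for _ in range(200):                     # gradient ascent on the log-likelihood
    grad = [0.0] * n_items
    for w, l in judgements:
        p_win = 1.0 / (1.0 + math.exp(theta[l] - theta[w]))
        grad[w] += 1.0 - p_win
        grad[l] -= 1.0 - p_win
    theta = [th + 0.1 * g for th, g in zip(theta, grad)]

theta = [th - sum(theta) / n_items for th in theta]     # centre the scale
print(theta)                             # higher value = judged better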
Document 3:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 4:::
Test equating traditionally refers to the statistical process of determining comparable scores on different forms of an exam. It can be accomplished using either classical test theory or item response theory.
In item response theory, equating is the process of placing scores from two or more parallel test forms onto a common score scale. The result is that scores from two different test forms can be compared directly, or treated as though they came from the same test form. When the tests are not parallel, the general process is called linking. It is the process of equating the units and origins of two scales on which the abilities of students have been estimated from results on different tests. The process is analogous to equating degrees Fahrenheit with degrees Celsius by converting measurements from one scale to the other. The determination of comparable scores is a by-product of equating that results from equating the scales obtained from test results.
Purpose
Suppose that Dick and Jane both take a test to become licensed in a certain profession. Because the high stakes (you get to practice the profession if you pass the test) may create a temptation to cheat, the organization that oversees the test creates two forms. If we know that Dick scored 60% on form A and Jane scored 70% on form B, do we know for sure which one has a better grasp of the material? What if form A is composed of very difficult items, while form B is relatively easy? Equating analyses are performed to address this very issue, so that scores are as fair as possible.
Equating in item response theory
In item response theory, person "locations" (measures of some quality being assessed by a test) are estimated on an interval scale; i.e., locations are estimated in relation to a unit and origin. It is common in educational assessment to employ tests in order to assess different groups of students with the intention of establishing a common scale by equating the origins, and when appropri
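Classical linear (mean-sigma) equating expresses the "Fahrenheit to Celsius" analogy directly: a form-B score is mapped onto the form-A scale by matching group means and standard deviations. A minimal Python sketch with hypothetical summary statistics:

def equate_linear(score_b, mean_a, sd_a, mean_b, sd_b):
    """Map a form-B raw score onto the form-A scale (mean-sigma method)."""
    return mean_a + sd_a * (score_b - mean_b) / sd_b

# Jane's 70% on the easier form B, under assumed group statistics:
print(equate_linear(70.0, mean_a=55.0, sd_a=12.0, mean_b=65.0, sd_b=10.0))
# -> 61.0 on the form-A scale, i.e. above Dick's 60% on form A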
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
an iceberg melting slowly
carving a piece of wood
A. Both are caused by cooling.
B. Both are chemical changes.
C. Both are caused by heating.
D. Both are only physical changes.
Answer:
|
sciq-4892
|
multiple_choice
|
How do humans learn behaviors?
|
[
"conditioning",
"pressure",
"Aural pressure",
"Verbal pressure"
] |
A
|
Relevant Documents:
Document 0:::
Learning is the process of acquiring new understanding, knowledge, behaviors, skills, values, attitudes, and preferences. The ability to learn is possessed by humans, animals, and some machines; there is also evidence for some kind of learning in certain plants. Some learning is immediate, induced by a single event (e.g. being burned by a hot stove), but much skill and knowledge accumulate from repeated experiences. The changes induced by learning often last a lifetime, and it is hard to distinguish learned material that seems to be "lost" from that which cannot be retrieved.
Human learning starts at birth (it may even start before, in terms of an embryo's need for both interaction with, and freedom within, its environment in the womb) and continues until death as a consequence of ongoing interactions between people and their environment. The nature and processes involved in learning are studied in many established fields (including educational psychology, neuropsychology, experimental psychology, cognitive sciences, and pedagogy), as well as emerging fields of knowledge (e.g. with a shared interest in the topic of learning from safety events such as incidents/accidents, or in collaborative learning health systems). Research in such fields has led to the identification of various sorts of learning. For example, learning may occur as a result of habituation, classical conditioning, or operant conditioning, or as a result of more complex activities such as play, seen only in relatively intelligent animals. Learning may occur consciously or without conscious awareness. Learning that an aversive event cannot be avoided or escaped may result in a condition called learned helplessness. There is evidence for human behavioral learning prenatally, in which habituation has been observed as early as 32 weeks into gestation, indicating that the central nervous system is sufficiently developed and primed for learning and memory to occur very early on in development.
Play h
Document 1:::
Observational learning is learning that occurs through observing the behavior of others. It is a form of social learning which takes various forms, based on various processes. In humans, this form of learning seems to not need reinforcement to occur, but instead, requires a social model such as a parent, sibling, friend, or teacher with surroundings. Particularly in childhood, a model is someone of authority or higher status in an environment. In animals, observational learning is often based on classical conditioning, in which an instinctive behavior is elicited by observing the behavior of another (e.g. mobbing in birds), but other processes may be involved as well.
Human observational learning
Many behaviors that a learner observes, remembers, and imitates are actions that models display and display modeling, even though the model may not intentionally try to instill a particular behavior. A child may learn to swear, smack, smoke, and deem other inappropriate behavior acceptable through poor modeling. Albert Bandura claims that children continually learn desirable and undesirable behavior through observational learning. Observational learning suggests that an individual's environment, cognition, and behavior all incorporate and ultimately determine how the individual functions and models.
Through observational learning, individual behaviors can spread across a culture through a process called diffusion chain. This basically occurs when an individual first learns a behavior by observing another individual and that individual serves as a model through whom other individuals learn the behavior, and so on.
Culture plays a role in whether observational learning is the dominant learning style in a person or community. Some cultures expect children to actively participate in their communities and are therefore exposed to different trades and roles on a daily basis. This exposure allows children to observe and learn the different skills and practices that are valued i
Document 2:::
Social learning refers to learning that is facilitated by observation of, or interaction with, another animal or its products. Social learning has been observed in a variety of animal taxa, such as insects, fish, birds, reptiles, amphibians and mammals (including primates).
Social learning is fundamentally different from individual learning, or asocial learning, which involves learning the appropriate responses to an environment through experience and trial and error. Though asocial learning may result in the acquisition of reliable information, it is often costly for the individual to obtain. Therefore, individuals that are able to capitalize on other individuals' self-acquired information may experience a fitness benefit. However, because social learning relies on the actions of others rather than direct contact, it can be unreliable. This is especially true in variable environments, where appropriate behaviors may change frequently. Consequently, social learning is most beneficial in stable environments, in which predators, food, and other stimuli are not likely to change rapidly.
When social learning is actively facilitated by an experienced individual, it is classified as teaching. Mechanisms of inadvertent social learning relate primarily to psychological processes in the observer, whereas teaching processes relate specifically to activities of the demonstrator. Studying the mechanisms of information transmission allows researchers to better understand how animals make decisions by observing others' behaviors and obtaining information.
Social learning mechanisms
Social learning occurs when one individual influences the learning of another through various processes. In local enhancement and opportunity providing, the attention of an individual is drawn to a specific location or situation. In stimulus enhancement, emulation, observational conditioning, the observer learns the relationship between a stimulus and a result but does not directly copy the behavio
Document 3:::
In cognitive psychology, sequence learning is inherent to human ability because it is an integrated part of conscious and nonconscious learning as well as activities. Sequences of information or sequences of actions are used in various everyday tasks: "from sequencing sounds in speech, to sequencing movements in typing or playing instruments, to sequencing actions in driving an automobile." Sequence learning can be used to study skill acquisition and in studies of various groups ranging from neuropsychological patients to infants. According to Ritter and Nerb, "The order in which material is presented can strongly influence what is learned, how fast performance increases, and sometimes even whether the material is learned at all." Sequence learning, better known and understood as a form of explicit learning, is now also being studied as a form of implicit learning, among other forms of learning. Sequence learning can also be referred to as sequential behavior, behavior sequencing, and serial order in behavior.
History
In the first half of the 20th century, Margaret Floy Washburn, John B. Watson, and other behaviorists believed behavioral sequencing to be governed by the reflex chain, which states that stimulation caused by an initial movement triggers an additional movement, which triggers another additional movement, and so on. In 1951, Karl Lashley, a neurophysiologist at Harvard University, published “The Problem of Serial Order in Behavior,” addressing the current beliefs about sequence learning and introducing his hypothesis. He criticized the previous view on the basis of six lines of evidence:
The first line is that movements can occur even when sensory feedback is interrupted. The second is that some movement sequences occur too quickly for elements of the sequences to be triggered by feedback from the preceding elements. Next is that the errors in behavior suggest internal plans for what will be done later. Also, the time to initiate a movement sequence
Document 4:::
Behavior (American English) or behaviour (British English) is the range of actions and mannerisms made by individuals, organisms, systems or artificial entities in some environment. These systems can include other systems or organisms as well as the inanimate physical environment. It is the computed response of the system or organism to various stimuli or inputs, whether internal or external, conscious or subconscious, overt or covert, and voluntary or involuntary.
Taking a behavior informatics perspective, a behavior consists of actor, operation, interactions, and their properties. This can be represented as a behavior vector.
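A loose sketch of what such a representation could look like in code follows; the four field names come from the sentence above, while the class and the example values are invented purely for illustration:

```python
# Toy "behavior vector": actor, operation, interactions, and properties
# bundled into a single record (illustrative only, not a standard library).
from dataclasses import dataclass, field

@dataclass
class Behavior:
    actor: str                                               # who acts
    operation: str                                           # what is done
    interactions: list[str] = field(default_factory=list)    # with whom/what
    properties: dict[str, float] = field(default_factory=dict)

b = Behavior(actor="user_42", operation="purchase",
             interactions=["view_item", "add_to_cart"],
             properties={"amount": 19.99, "duration_s": 310.0})
print(b)
```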
Models
Biology
Although disagreement exists as to how to precisely define behavior in a biological context, one common interpretation based on a meta-analysis of scientific literature states that "behavior is the internally coordinated responses (actions or inactions) of whole living organisms (individuals or groups) to internal or external stimuli".
A broader definition of behavior, applicable to plants and other organisms, is similar to the concept of phenotypic plasticity. It describes behavior as a response to an event or environment change during the course of the lifetime of an individual, differing from other physiological or biochemical changes that occur more rapidly, and excluding changes that are a result of development (ontogeny).
Behaviors can be either innate or learned from the environment.
Behavior can be regarded as any action of an organism that changes its relationship to its environment. Behavior provides outputs from the organism to the environment.
Human behavior
The endocrine system and the nervous system likely influence human behavior. Complexity in the behavior of an organism may be correlated to the complexity of its nervous system. Generally, organisms with more complex nervous systems have a greater capacity to learn new responses and thus adjust their behavior.
Animal behavior
Ethology is the scientifi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How do humans learn behaviors?
A. conditioning
B. pressure
C. Aural pressure
D. Verbal pressure
Answer:
|
|
sciq-8510
|
multiple_choice
|
Although individuals of a given species are genetically similar, they are not identical; every individual has a unique set of these?
|
[
"traits",
"mutations",
"habits",
"chromosomes"
] |
A
|
Relavent Documents:
Document 0:::
The Generalist Genes hypothesis of learning abilities and disabilities was originally coined in an article by Plomin & Kovas (2005).
The Generalist Genes hypothesis suggests that most genes associated with common learning disabilities and abilities are generalist in three ways.
Firstly, the same genes that influence common learning abilities (e.g., high reading aptitude) are also responsible for common learning disabilities (e.g., reading disability): they are strongly genetically correlated.
Secondly, many of the genes associated with one aspect of a learning disability (e.g., vocabulary problems) also influence other aspects of this learning disability (e.g., grammar problems).
Thirdly, genes that influence one learning disability (e.g., reading disability) are largely the same as those that influence other learning disabilities (e.g., mathematics disability).
The Generalist Genes hypothesis has important implications for education, cognitive sciences and molecular genetics.
Document 1:::
Genetic variation is the difference in DNA among individuals, or the differences between populations of the same species. The multiple sources of genetic variation include mutation and genetic recombination. Mutations are the ultimate sources of genetic variation, but other mechanisms, such as genetic drift, contribute to it as well.
Among individuals within a population
Genetic variation can be identified at many levels. Identifying genetic variation is possible from observations of phenotypic variation in either quantitative traits (traits that vary continuously and are coded for by many genes (e.g., leg length in dogs)) or discrete traits (traits that fall into discrete categories and are coded for by one or a few genes (e.g., white, pink, or red petal color in certain flowers)).
Genetic variation can also be identified by examining variation at the level of enzymes using the process of protein electrophoresis. Polymorphic genes have more than one allele at each locus. Half of the genes that code for enzymes in insects and plants may be polymorphic, whereas polymorphisms are less common among vertebrates.
Ultimately, genetic variation is caused by variation in the order of bases in the nucleotides in genes. New technology now allows scientists to directly sequence DNA, which has identified even more genetic variation than was previously detected by protein electrophoresis. Examination of DNA has shown genetic variation in both coding regions and in the noncoding intron region of genes.
Genetic variation will result in phenotypic variation if variation in the order of nucleotides in the DNA sequence results in a difference in the order of amino acids in proteins coded by that DNA sequence, and if the resultant differences in amino-acid sequence influence the shape, and thus the function of the enzyme.
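As a toy illustration of that last point (the aligned sequences below are invented, not data from the passage), a site is polymorphic when more than one base is observed across individuals:

```python
# Flag polymorphic sites in a set of aligned DNA sequences (invented data).
sequences = [
    "ATGCTA",
    "ATGCTA",
    "ATGTTA",   # variant at position 3
    "ACGCTA",   # variant at position 1
]

def polymorphic_sites(seqs):
    """Return positions where more than one base (allele) occurs."""
    return [i for i in range(len(seqs[0]))
            if len({s[i] for s in seqs}) > 1]

print(polymorphic_sites(sequences))   # [1, 3]
```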
Between populations
Differences between populations resulting from geographic separation is known as geographic variation. Natural selection, genetic
Document 2:::
Genetics is the study of genes and tries to explain what they are and how they work. Genes are how living organisms inherit features or traits from their ancestors; for example, children usually look like their parents because they have inherited their parents' genes. Genetics tries to identify which traits are inherited and to explain how these traits are passed from generation to generation.
Some traits are part of an organism's physical appearance, such as eye color, height or weight. Other sorts of traits are not easily seen and include blood types or resistance to diseases. Some traits are inherited through genes, which is the reason why tall and thin people tend to have tall and thin children. Other traits come from interactions between genes and the environment, so a child who inherited the tendency of being tall will still be short if poorly nourished. The way our genes and environment interact to produce a trait can be complicated. For example, the chances of somebody dying of cancer or heart disease seems to depend on both their genes and their lifestyle.
Genes are made from a long molecule called DNA, which is copied and inherited across generations. DNA is made of simple units that line up in a particular order within it, carrying genetic information. The language used by DNA is called genetic code, which lets organisms read the information in the genes. This information is the instructions for the construction and operation of a living organism.
The information within a particular gene is not always exactly the same between one organism and another, so different copies of a gene do not always give exactly the same instructions. Each unique form of a single gene is called an allele. As an example, one allele for the gene for hair color could instruct the body to produce much pigment, producing black hair, while a different allele of the same gene might give garbled instructions that fail to produce any pigment, giving white hair. Mutations are random
Document 3:::
Principles of Genetics is a genetics textbook authored by D. Peter Snustad and Michael J. Simmons, an emeritus professor of biology, and published by John Wiley & Sons, Inc.
The 6th edition of the book was published in 2012.
Description
The book is sectioned into four parts. The first part, Genetics and the Scientific Method, briefly reviews the history of genetics and the various methods used in genetic study. The second part focuses on Mendelian inheritance, the third part deals with molecular genetics, and the last section deals with quantitative genetics and evolutionary genetics.
Review
The book has been reviewed and rated highly by several editors and geneticists.
Document 4:::
The genotype–phenotype map is a conceptual model in genetic architecture. Coined in a 1991 paper by Pere Alberch, it models the interdependency of genotype (an organism's full hereditary information) with phenotype (an organism's actual observed properties).
Application
The map visualises a relationship between genotype & phenotype which, crucially:
is of greater complexity than a straightforward one-to-one mapping of genotype to/from phenotype.
accommodates a parameter space, along which at different points a given phenotype is said to be more or less stable.
accommodates transformational boundaries in the parameter space, which divide phenotype states from one another.
accounts for different polymorphism and/or polyphenism in populations, depending on their area of parameter space they occupy.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Although individuals of a given species are genetically similar, they are not identical; every individual has a unique set of these?
A. traits
B. mutations
C. habits
D. chromosomes
Answer:
|
|
sciq-1044
|
multiple_choice
|
How does air always flow?
|
[
"in to out",
"high to low",
"left to right",
"low to high"
] |
B
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
impossible to tell / need more information
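As an editorial aside (not part of the quoted example), the physics behind this question can be sketched in two lines; the correct choice hinges on whether the gas does work during the expansion, which is exactly the kind of conceptual distinction such questions probe:

```latex
\text{Reversible adiabatic expansion } (Q = 0,\ W > 0):\quad
dU = -P\,dV,\quad dU = nC_V\,dT
\;\Longrightarrow\; TV^{\gamma-1} = \text{const},\quad \gamma = C_P/C_V > 1,
\text{ so } T \text{ falls as } V \text{ grows.}

\text{Free (Joule) expansion into vacuum } (Q = 0,\ W = 0):\quad
\Delta U = 0 \;\Longrightarrow\; \Delta T = 0 \text{ for an ideal gas.}
```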
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Airflow, or air flow, is the movement of air. Air behaves in a fluid manner, meaning particles naturally flow from areas of higher pressure to those where the pressure is lower; this pressure difference is the primary driver of airflow. Atmospheric air pressure is directly related to altitude, temperature, and composition. In engineering, airflow is a measurement of the amount of air per unit of time that flows through a particular device.
It can be described as a volumetric flow rate (volume of air per unit time) or a mass flow rate (mass of air per unit time). What relates both forms of description is the air density, which is a function of pressure and temperature through the ideal gas law. The flow of air can be induced through mechanical means (such as by operating an electric or manual fan) or can take place passively, as a function of pressure differentials present in the environment.
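A minimal sketch of that relation, assuming dry air treated as an ideal gas (the flow value, pressure, and temperature below are illustrative):

```python
# Convert a volumetric airflow to a mass flow via rho = P*M / (R*T).
R = 8.314          # universal gas constant, J/(mol*K)
M_AIR = 0.028964   # molar mass of dry air, kg/mol

def air_density(pressure_pa: float, temperature_k: float) -> float:
    """Density of dry air treated as an ideal gas, in kg/m^3."""
    return pressure_pa * M_AIR / (R * temperature_k)

def mass_flow(volumetric_flow_m3_s: float, pressure_pa: float,
              temperature_k: float) -> float:
    """Mass flow rate (kg/s) from a volumetric flow rate (m^3/s)."""
    return volumetric_flow_m3_s * air_density(pressure_pa, temperature_k)

# 0.5 m^3/s of air at 1 atm and 20 C:
print(mass_flow(0.5, 101_325, 293.15))  # ~0.60 kg/s (rho ~ 1.20 kg/m^3)
```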
Types of airflow
Like any fluid, air may exhibit both laminar and turbulent flow patterns. Laminar flow occurs when air can flow smoothly, and exhibits a parabolic velocity profile; turbulent flow occurs when there is an irregularity (such as a disruption in the surface across which the fluid is flowing), which alters the direction of movement. Turbulent flow exhibits a flat velocity profile. Velocity profiles of fluid movement describe the spatial distribution of instantaneous velocity vectors across a given cross section. The size and shape of the geometric configuration that the fluid is traveling through, the fluid properties (such as viscosity), physical disruptions to the flow, and engineered components (e.g. pumps) that add energy to the flow are factors that determine what the velocity profile looks like. Generally, in encased flows, instantaneous velocity vectors are larger in magnitude in the middle of the profile due to the effect of friction from the material of the pipe, duct, or channel walls on nearby layers of fluid. In tropospheric atmospheric flows, velocity incr
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 3:::
In physics and chemistry, effusion is the process in which a gas escapes from a container through a hole of diameter considerably smaller than the mean free path of the molecules. Such a hole is often described as a pinhole and the escape of the gas is due to the pressure difference between the container and the exterior. Under these conditions, essentially all molecules which arrive at the hole continue and pass through the hole, since collisions between molecules in the region of the hole are negligible. Conversely, when the diameter is larger than the mean free path of the gas, flow obeys the Sampson flow law.
In medical terminology, an effusion refers to accumulation of fluid in an anatomic space, usually without loculation. Specific examples include subdural, mastoid, pericardial and pleural effusions.
Etymology
The word effusion derives from the Latin word, effundo, which means "shed, pour forth, pour out, utter, lavish, waste."
Effusion into vacuum
Effusion from an equilibrated container into outside vacuum can be calculated based on kinetic theory. The number of atomic or molecular collisions with a wall of a container per unit area per unit time (impingement rate) is given by:

$\Phi = \frac{P}{\sqrt{2\pi m k_B T}}$

where $P$ is the pressure, $m$ the mass of one molecule, $k_B$ the Boltzmann constant, and $T$ the temperature, assuming the mean free path is much greater than the pinhole diameter and the gas can be treated as an ideal gas.

If a small area $A$ on the container is punched to become a small hole, the effusive flow rate will be

$Q = \frac{P A N_A}{\sqrt{2\pi M R T}}$

where $M$ is the molar mass, $N_A$ is the Avogadro constant, and $R$ is the gas constant.

The average velocity of effused particles is

$\bar{v}_{\text{effused}} = \sqrt{\frac{9\pi k_B T}{8m}}$

Combined with the effusive flow rate, the recoil/thrust force on the system itself is

$F = \frac{P A}{2}$

An example is the recoil force on a balloon with a small hole flying in vacuum.
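A short numerical sketch of the relations above (the choice of helium and the hole size are arbitrary illustrative values):

```python
# Effusive flow rate Q = P*A*N_A / sqrt(2*pi*M*R*T) and recoil force F = P*A/2.
import math

R = 8.314        # gas constant, J/(mol*K)
N_A = 6.022e23   # Avogadro constant, 1/mol

def effusion_rate(p_pa, area_m2, molar_mass_kg_mol, temp_k):
    """Molecules escaping per second through a pinhole into vacuum."""
    return (p_pa * area_m2 * N_A
            / math.sqrt(2 * math.pi * molar_mass_kg_mol * R * temp_k))

def recoil_force(p_pa, area_m2):
    """Thrust on the container from effusion through the hole, in newtons."""
    return 0.5 * p_pa * area_m2

# Helium (M = 4.0e-3 kg/mol) at 1 atm and 300 K through a 1-micron-radius hole:
A = math.pi * (1e-6) ** 2
print(effusion_rate(101_325, A, 4.0e-3, 300))  # ~2.4e16 molecules per second
print(recoil_force(101_325, A))                # ~1.6e-7 N
```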
Measures of flow rate
According to the kinetic theory of gases, the kinetic energy for a gas at a temperature $T$ is

$\frac{1}{2} m v_{\text{rms}}^2 = \frac{3}{2} k_B T$

where $m$ is the mass of one molecule, $v_{\text{rms}}$ is the root-mean-square speed of the molecules, and $k_B$ is the Boltzmann constant. The average molecular speed can be calculated from the Ma
Document 4:::
In biophysical fluid dynamics, Murray's law is a potential relationship between radii at junctions in a network of fluid-carrying tubular pipes. Its simplest version proposes that whenever a branch of radius $r_0$ splits into two branches of radii $r_1$ and $r_2$, then all three radii should obey the equation $r_0^3 = r_1^3 + r_2^3$. If network flow is smooth and leak-free, then systems that obey Murray's law minimize the resistance to flow through the network. For turbulent networks, the law takes the same form but with a different characteristic exponent $\alpha \approx 7/3$.
Murray's law is observed in the vascular and respiratory systems of animals, xylem in plants, and the respiratory system of insects. In principle, Murray's law also applies to biomimetic engineering, but human designs rarely exploit the law.
Murray's law is named after Cecil D. Murray, a physiologist at Bryn Mawr College, who first argued that efficient transport might determine the structure of the human vascular system.
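A minimal sketch of the law at a single junction (arbitrary units; the helper functions are invented for illustration). For a symmetric split under the laminar exponent of 3, each daughter radius is the parent radius divided by the cube root of 2, roughly 0.794 of the parent:

```python
# Check and apply Murray's law r_parent^a = sum(r_child^a) at one junction.

def murray_check(r_parent: float, r_children: list[float],
                 exponent: float = 3.0, tol: float = 1e-6) -> bool:
    """True if the junction satisfies Murray's law for the given exponent."""
    return abs(r_parent**exponent
               - sum(r**exponent for r in r_children)) < tol

def symmetric_daughter_radius(r_parent: float, n_children: int = 2,
                              exponent: float = 3.0) -> float:
    """Daughter radius for an even n-way split obeying Murray's law."""
    return r_parent / n_children ** (1.0 / exponent)

r0 = 1.0
r1 = symmetric_daughter_radius(r0)    # ~0.7937
print(murray_check(r0, [r1, r1]))     # True
print(murray_check(r0, [0.9, 0.5]))   # False: 0.729 + 0.125 != 1.0
```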
Assumptions
Murray's law assumes material is passively transported by the flow of fluid in a network of tubular pipes, and that said network requires energy both to maintain flow and structural integrity. Variation in the fluid viscosity across scales will affect the Murray's law exponent, but is usually too small to matter.
At least two different conditions are known in which the cube exponent is optimal.
In the first, organisms have free (variable) circulatory volume. Also, maintenance energy is not proportional to the pipe material, but instead the quantity of working fluid. The latter assumption is justified in metabolically active biological fluids, such as blood. It is also justified for metabolically inactive fluids, such as air, as long as the energetic "cost" of the infrastructure scales with the cross-sectional area of each tube; such is the case for all known biological tubules.
In the second, organisms have fixed circulatory volume and pressure, but wish to minimize the resistance to flow through the system.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How does air always flow?
A. in to out
B. high to low
C. left to right
D. low to high
Answer:
|
|
sciq-3040
|
multiple_choice
|
What two planets is the asteroid belt found between?
|
[
"saturn and uranus",
"earth and venus",
"mars and venus",
"mars and jupiter"
] |
D
|
Relavent Documents:
Document 0:::
The following list of instrument-resolved minor planets consists of minor planets whose disks have been resolved, whether by telescope, a visit by an uncrewed spacecraft, or by observing the occultation of a background star from multiple sites. Disk resolution allows the density of a body to be computed, providing useful information about its internal composition. It can also be used to determine the shape of the object, to search for albedo features, and to look for companions.
Techniques
Because of their distance from Earth and their small dimension, minor planets such as asteroids represent a challenge for astronomical instruments to resolve. Even two of the largest objects in the asteroid belt, 2 Pallas and 4 Vesta, have maximum angular diameters of less than an arcsecond. With a ground-based optical telescope, resolution of these objects through the Earth's thick atmosphere can require techniques such as speckle interferometry or adaptive optics.
Radio telescopes such as Arecibo or Goldstone have been used to observe asteroids. This technique can be used to measure the Doppler shifts and radar cross-sections of the bodies, while more detailed studies allow three-dimensional shape models to be built. The first radar detection of a minor planet was 1566 Icarus by JPL astronomer Richard M. Goldstein in June 1968. This was followed by 1685 Toro in 1972. A regular program of radar observation of main-belt asteroids was begun in 1980 at Arecibo. Goldstone joined the effort in 1990. Together, they observed 37 main-belt asteroids between 1980 and 1997.
A more direct approach to asteroid study, allowing the object to be examined greater detail, is to send a spacecraft to either make a fly-by or go into orbit. The first such asteroid to be imaged in this manner was 951 Gaspra in 1991 by the Galileo spacecraft. In 2000, the NEAR Shoemaker spacecraft went into orbit around 433 Eros after making a fly-by of 253 Mathilde in 1997.
Objects
The tables below list selec
Document 1:::
This is a list of potentially habitable exoplanets. The list is mostly based on estimates of habitability by the Habitable Exoplanets Catalog (HEC), and data from the NASA Exoplanet Archive. The HEC is maintained by the Planetary Habitability Laboratory at the University of Puerto Rico at Arecibo. There is also a speculative list being developed of superhabitable planets.
Surface planetary habitability is thought to require orbiting at the right distance from the host star for liquid surface water to be present, in addition to various geophysical and geodynamical aspects, atmospheric density, radiation type and intensity, and the host star's plasma environment.
List
This is a list of exoplanets within the circumstellar habitable zone that are under 10 Earth masses and smaller than 2.5 Earth radii, and thus have a chance of being rocky. Note that inclusion on this list does not guarantee habitability, and in particular the larger planets are unlikely to have a rocky composition. Earth is included for comparison.
Note that mass and radius values prefixed with "~" have not been measured, but are estimated from a mass-radius relationship.
Previous candidates
Some exoplanet candidates detected by radial velocity that were originally thought to be potentially habitable were later found to most likely be artifacts of stellar activity. These include Gliese 581 d & g, Gliese 667 Ce & f, Gliese 682 b & c, Kapteyn b, and Gliese 832 c.
HD 85512 b was initially estimated to be potentially habitable, but updated models for the boundaries of the habitable zone placed the planet interior to the HZ, and it is now considered non-habitable. Kepler-69c has gone through a similar process; though initially estimated to be potentially habitable, it was quickly realized that the planet is more likely to be similar to Venus, and is thus no longer considered habitable. Several other planets, such as Gliese 180 b, also appear to be examples of planets once considered potentially habit
Document 2:::
This article is a list of notable unsolved problems in astronomy. Some of these problems are theoretical, meaning that existing theories may be incapable of explaining certain observed phenomena or experimental results. Others are experimental, meaning that experiments necessary to test proposed theory or investigate a phenomenon in greater detail have not yet been performed. Some pertain to unique events or occurrences that have not repeated themselves and whose causes remain unclear.
Planetary astronomy
Our solar system
Orbiting bodies and rotation:
Are there any non-dwarf planets beyond Neptune?
Why do extreme trans-Neptunian objects have elongated orbits?
Rotation rate of Saturn:
Why does the magnetosphere of Saturn rotate at a rate close to that at which the planet's clouds rotate?
What is the rotation rate of Saturn's deep interior?
Satellite geomorphology:
What is the origin of the chain of high mountains that closely follows the equator of Saturn's moon, Iapetus?
Are the mountains the remnant of hot and fast-rotating young Iapetus?
Are the mountains the result of material (either from the rings of Saturn or its own ring) that over time collected upon the surface?
Extra-solar
How common are Solar System-like planetary systems? Some observed planetary systems contain Super-Earths and Hot Jupiters that orbit very close to their stars. Systems with Jupiter-like planets in Jupiter-like orbits appear to be rare. There are several possibilities why Jupiter-like orbits are rare, including that data is lacking or the grand tack hypothesis.
Stellar astronomy and astrophysics
Solar cycle:
How does the Sun generate its periodically reversing large-scale magnetic field?
How do other Sol-like stars generate their magnetic fields, and what are the similarities and differences between stellar activity cycles and that of the Sun?
What caused the Maunder Minimum and other grand minima, and how does the solar cycle recover from a minimum state?
Coronal heat
Document 3:::
Jean-Luc Margot (born 1969) is a Belgian-born astronomer and a UCLA professor with expertise in planetary sciences and SETI.
Career
Margot has discovered and studied several binary asteroids with radar and optical telescopes. His discoveries include (87) Sylvia I Romulus, (22) Kalliope I Linus, S/2003 (379) 1, (702) Alauda I Pichi üñëm, and the binary nature of (69230) Hermes.
In 2000, he obtained the first images of binary near-Earth asteroids and described formation of the binary by a spin-up process. Margot and his research group have studied the influence of sunlight on the orbits and spins of asteroids, the Yarkovsky and YORP effects.
In 2007, Margot and collaborators determined that Mercury has a molten core from the analysis of small variations in the rotation rate of the planet. These observations also enabled a measurement of the size of the core based on a concept proposed by Stan Peale.
In 2012, Margot and graduate student Julia Fang analyzed Kepler space telescope data to infer the architecture of planetary systems. They described planetary systems as "flatter than pancakes." They also showed that many planetary systems are dynamically packed.
Margot proposed an extension to the IAU definition of planet that applies to exoplanets.
Between 2006 and 2021, Margot and collaborators measured the spin of Venus with a radar speckle tracking technique. They measured the orientation and precession of the spin axis. They also measured the duration of the length of day and the amplitude of length-of-day variations, which they attribute to transfer of momentum between the atmosphere and the solid planet.
Since 2016, he has conducted searches for technosignatures using large radio telescopes with UCLA students. Volunteers can contribute to SETI through the "Are we alone in the universe?" citizen science collaboration.
Honors and awards
Margot was awarded the H. C. Urey Prize by the American Astronomical Society in 2004. The asteroid 9531 Jean-Luc is named in his honor.
Document 4:::
The interstellar space opera epic Star Wars uses science and technology in its settings and storylines. The series has showcased many technological concepts, both in the movies and in the expanded universe of novels, comics and other forms of media. The Star Wars movies focus primarily on drama, philosophy, and political science, and less on scientific accuracy. Many of the on-screen technologies created or borrowed for the Star Wars universe were used mainly as plot devices.
The iconic status that Star Wars has gained in popular culture and science fiction allows it to be used as an accessible introduction to real scientific concepts. Many of the features or technologies used in the Star Wars universe are not yet considered possible, though their underlying concepts remain plausible.
Tatooine's twin stars
In the past, scientists thought that planets would be unlikely to form around binary stars. However, recent simulations indicate that planets are just as likely to form around binary star systems as single-star systems. Of the 3457 exoplanets currently known, 146 actually orbit binary star systems (and 39 orbit multiple star systems with three or more stars). Specifically, they orbit what are known as "wide" binary star systems where the two stars are fairly far apart (several AU). Tatooine appears to be of the other type — a "close" binary, where the stars are very close, and the planets orbit their common center of mass.
The first observationally confirmed binary — Kepler-16b — is a close binary. Exoplanet researchers' simulations indicate that planets form frequently around close binaries, though gravitational effects from the dual star system tend to make them very difficult to find with current Doppler and transit methods of planetary searches. In studies looking for dusty disks—where planet formation is likely—around binary stars, such disks were found in wide or narrow binaries, or those whose stars are more than 50 or less than 3 AU apart, r
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What two planets is the asteroid belt found between?
A. saturn and uranus
B. earth and venus
C. mars and venus
D. mars and jupiter
Answer:
|
|
sciq-7324
|
multiple_choice
|
What is the point on the ground that is located directly above where underground rocks fracture (or the "focus" point)?
|
[
"seismic point",
"epicenter",
"fault line",
"danger zone"
] |
B
|
Relavent Documents:
Document 0:::
The epicenter, epicentre, or epicentrum in seismology is the point on the Earth's surface directly above a hypocenter or focus, the point where an earthquake or an underground explosion originates.
Determination
The primary purpose of a seismometer is to locate the initiating points of earthquake epicenters. The secondary purpose, determining the 'size' or magnitude, requires the precise location to be known first.
The earliest seismographs were designed to give a sense of the direction of the first motions from an earthquake. The Chinese frog seismograph would have dropped its ball in the general compass direction of the earthquake, assuming a strong positive pulse. We now know that first motions can be in almost any direction depending on the type of initiating rupture (focal mechanism).
The first refinement that allowed a more precise determination of the location was the use of a time scale. Instead of merely noting, or recording, the absolute motions of a pendulum, the displacements were plotted on a moving graph, driven by a clock mechanism. This was the first seismogram, which allowed precise timing of the first ground motion, and an accurate plot of subsequent motions.
From the first seismograms, as seen in the figure, it was noticed that the trace was divided into two major portions. The first seismic wave to arrive was the P-wave, followed closely by the S-wave. Knowing the relative 'velocities of propagation', it was a simple matter to calculate the distance of the earthquake.
One seismograph would give the distance, but that could be plotted as a circle, with an infinite number of possibilities. Two seismographs would give two intersecting circles, with two possible locations. Only with a third seismograph would there be a precise location.
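A toy version of this three-station procedure (assuming a flat Earth, straight ray paths, and uniform P and S velocities, all strong simplifications; the station layout and velocity values are invented): the S-P arrival-time difference at each station fixes a distance circle, and a grid search finds the point that best satisfies all three circles.

```python
# Locate a toy epicenter from S-P times at three stations.
import math

VP, VS = 6.0, 3.5   # assumed crustal P and S velocities, km/s

def sp_time_to_distance(t_sp_s: float) -> float:
    """Distance (km) implied by an S-P arrival-time difference (s)."""
    # t_sp = d/VS - d/VP  =>  d = t_sp * VP*VS / (VP - VS)
    return t_sp_s * VP * VS / (VP - VS)

def locate(stations, t_sp):
    """Grid-search the point whose distances best match all three circles."""
    dists = [sp_time_to_distance(t) for t in t_sp]
    best, best_err = None, float("inf")
    for i in range(-200, 201):
        for j in range(-200, 201):
            x, y = i * 0.5, j * 0.5   # 0.5 km grid spacing
            err = sum((math.hypot(x - sx, y - sy) - d) ** 2
                      for (sx, sy), d in zip(stations, dists))
            if err < best_err:
                best, best_err = (x, y), err
    return best

stations = [(0.0, 0.0), (60.0, 0.0), (0.0, 80.0)]   # station coordinates, km
true = (30.0, 40.0)                                  # synthetic event location
t_sp = [math.hypot(true[0] - sx, true[1] - sy) * (VP - VS) / (VP * VS)
        for sx, sy in stations]
print(locate(stations, t_sp))   # ~(30.0, 40.0)
```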
Modern earthquake location still requires a minimum of three seismometers. Most likely, there are many, forming a seismic array. The emphasis is on precision since much can be learned about the fau
Document 1:::
Rock mechanics is a theoretical and applied science of the mechanical behavior of rocks and rock masses.
Compared to geology, it is the branch of mechanics concerned with the response of rock and rock masses to the force fields of their physical environment.
Background
Rock mechanics is part of a much broader subject of geomechanics, which is concerned with the mechanical responses of all geological materials, including soils.
Rock mechanics is concerned with the application of the principles of engineering mechanics to the design of structures built in or on rock. The structure could include many objects such as a drilling well, a mine shaft, a tunnel, a reservoir dam, a repository component, or a building. Rock mechanics is used in many engineering disciplines, but is primarily used in Mining, Civil, Geotechnical, Transportation, and Petroleum Engineering.
Rock mechanics answers questions such as, "is reinforcement necessary for a rock, or will it be able to handle whatever load it is faced with?" It also includes the design of reinforcement systems, such as rock bolting patterns.
Assessing the Project Site
Before any work begins, the construction site must be investigated properly to inform of the geological conditions of the site. Field observations, deep drilling, and geophysical surveys, can all give necessary information to develop a safe construction plan and create a site geological model. The level of investigation conducted at this site depends on factors such as budget, time frame, and expected geological conditions.
The first step of the investigation is the collection of maps and aerial photos to analyze. This can provide information about potential sinkholes, landslides, erosion, etc. Maps can provide information on the rock type of the site, geological structure, and boundaries between bedrock units.
Boreholes
Creating a borehole is a technique that consists of drilling through the ground in various areas at various depths, to get a bett
Document 2:::
Seismic moment is a quantity used by seismologists to measure the size of an earthquake. The scalar seismic moment is defined by the equation

$M_0 = \mu A D$, where

$\mu$ is the shear modulus of the rocks involved in the earthquake (in pascals (Pa), i.e. newtons per square meter),
$A$ is the area of the rupture along the geologic fault where the earthquake occurred (in square meters), and
$D$ is the average slip (displacement offset between the two sides of the fault) on $A$ (in meters).

$M_0$ thus has dimensions of torque, measured in newton meters. The connection between seismic moment and a torque is natural in the body-force equivalent representation of seismic sources as a double-couple (a pair of force couples with opposite torques): the seismic moment is the torque of each of the two couples. Despite having the same dimensions as energy, seismic moment is not a measure of energy. The relations between seismic moment, potential energy drop and radiated energy are indirect and approximative.
The seismic moment of an earthquake is typically estimated using whatever information is available to constrain its factors. For modern earthquakes, moment is usually estimated from ground motion recordings of earthquakes known as seismograms. For earthquakes that occurred in times before modern instruments were available, moment may be estimated from geologic estimates of the size of the fault rupture and the slip.
Seismic moment is the basis of the moment magnitude scale introduced by Hiroo Kanamori, which is often used to compare the size of different earthquakes and is especially useful for comparing the sizes of large (great) earthquakes.
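A short sketch combining the definition above with the standard Hanks-Kanamori moment magnitude relation, Mw = (2/3)(log10 M0 - 9.1) with M0 in newton-meters (the fault parameters below are invented for illustration):

```python
# Scalar seismic moment M0 = mu * A * D and the derived moment magnitude.
import math

def seismic_moment(mu_pa: float, area_m2: float, slip_m: float) -> float:
    """Scalar seismic moment in N*m."""
    return mu_pa * area_m2 * slip_m

def moment_magnitude(m0_nm: float) -> float:
    """Moment magnitude Mw from the scalar moment (N*m)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

# A 40 km x 15 km rupture with 1.2 m average slip in rock with mu = 30 GPa:
m0 = seismic_moment(3.0e10, 40e3 * 15e3, 1.2)
print(f"M0 = {m0:.2e} N*m, Mw = {moment_magnitude(m0):.1f}")   # Mw ~ 6.8
```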
The seismic moment is not restricted to earthquakes. For a more general seismic source described by a seismic moment tensor $M_{jk}$ (a symmetric tensor, but not necessarily a double couple tensor), the seismic moment is

$M_0 = \frac{1}{\sqrt{2}} \left( \sum_{j,k} M_{jk}^2 \right)^{1/2}$
See also
Richter magnitude scale
Moment magnitude scale
Document 3:::
Seismic tomography is a technique for imaging the subsurface of the Earth with seismic waves produced by earthquakes or explosions. P-, S-, and surface waves can be used for tomographic models of different resolutions based on seismic wavelength, wave source distance, and the seismograph array coverage. The data received at seismometers are used to solve an inverse problem, wherein the locations of reflection and refraction of the wave paths are determined. This solution can be used to create 3D images of velocity anomalies which may be interpreted as structural, thermal, or compositional variations. Geoscientists use these images to better understand core, mantle, and plate tectonic processes.
Theory
Tomography is solved as an inverse problem. Seismic travel time data are compared to an initial Earth model and the model is modified until the best possible fit between the model predictions and observed data is found. Seismic waves would travel in straight lines if Earth was of uniform composition, but the compositional layering, tectonic structure, and thermal variations reflect and refract seismic waves. The location and magnitude of these variations can be calculated by the inversion process, although solutions to tomographic inversions are non-unique.
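A toy version of that inversion (straight rays through four cells, synthetic travel times, and a small damping term standing in for regularization; real tomography is vastly larger and must handle curved rays):

```python
# Travel-time tomography as a damped linear least-squares problem: t = G @ s,
# where each row of G holds the path length of one ray in each cell and s is
# the vector of cell slownesses (1/velocity) to recover. Toy data throughout.
import numpy as np

G = np.array([
    [1.0, 1.0, 0.0, 0.0],                # ray 1 crosses cells 0 and 1
    [0.0, 0.0, 1.0, 1.0],                # ray 2 crosses cells 2 and 3
    [1.0, 0.0, 1.0, 0.0],                # ray 3 crosses cells 0 and 2
    [0.0, 1.0, 0.0, 1.0],                # ray 4 crosses cells 1 and 3
    [np.sqrt(2), 0.0, 0.0, np.sqrt(2)],  # diagonal ray, cells 0 and 3
])

s_true = np.array([0.25, 0.20, 0.20, 0.30])   # slownesses, s/km (assumed)
t_obs = G @ s_true                            # synthetic travel times

# Damped least squares: minimize ||G*ds - r||^2 + eps*||ds||^2 around s0.
s0 = np.full(4, 0.24)                         # reference model
eps = 1e-3
A = np.vstack([G, np.sqrt(eps) * np.eye(4)])
b = np.concatenate([t_obs - G @ s0, np.zeros(4)])
ds = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.round(s0 + ds, 3))                   # close to s_true
```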
Seismic tomography is similar to medical x-ray computed tomography (CT scan) in that a computer processes receiver data to produce a 3D image, although CT scans use attenuation instead of traveltime difference. Seismic tomography has to deal with the analysis of curved ray paths which are reflected and refracted within the Earth, and potential uncertainty in the location of the earthquake hypocenter. CT scans use linear x-rays and a known source.
History
Seismic tomography requires large datasets of seismograms and well-located earthquake or explosion sources. These became more widely available in the 1960s with the expansion of global seismic networks, and in the 1970s when digital seismograph data archives were
Document 4:::
Seismic wide-angle reflection and refraction is a technique used in geophysical investigations of Earth's crust and upper mantle. It allows the development of a detailed model of seismic velocities beneath Earth's surface well beyond the reach of exploration boreholes. The velocities can then be used, often in combination with the interpretation of standard seismic reflection data and gravity data, to interpret the geology of the subsurface.
Theory
In comparison to the typical seismic reflection survey, which is restricted to relatively small incidence angles due to the limited offsets between source and receiver, wide-angle reflection and refraction (WARR) data are acquired with long offsets, allowing the recording of both refracted and wide-angle reflection arrivals.
Acquisition
The acquisition setup depends on the type of seismic source being used and the target of the investigation.
Source
The source of the seismic waves may be either "passive", e.g. naturally occurring sources, such as earthquakes, or anthropogenic sources, such as quarry blasts, or "active", sometimes referred to as "controlled source", e.g. explosive charges set off in shallow boreholes or seismic vibrators onshore or air guns offshore. Exceptionally, the sound waves from nuclear explosions have been used to look at the structure of the upper mantle down to the base of the transition zone at 660 km depth.
Receiver
The sound waves are normally recorded using 3-component seismometers, with ocean-bottom seismometers (OBS) used offshore. The three components allow the recording of S-waves as well as the P-waves that single component instruments can record. The offset range used depends on the depth of the target. For the top few kilometres of the crust, such as when investigating beneath a thick layer of basalt, a range of 10–20 km may be appropriate, while for the lower crust and mantle, offsets greater than 100 km are normally necessary.
Modelling
The processing approach used in standard
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the point on the ground that is located directly above where underground rocks fracture (or the "focus" point)?
A. seismic point
B. epicenter
C. fault line
D. danger zone
Answer:
|
|
sciq-7057
|
multiple_choice
|
What is the only substance on earth that is stable in all three states?
|
[
"air",
"mercury",
"water",
"carbon"
] |
C
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
impossible to tell / need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set $Q$ of concepts, skills, or topics. Each feasible state of knowledge about $Q$ is then a subset of $Q$; the set of
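A minimal sketch of these definitions in code (the domain and the family of feasible states below are invented toy data): a knowledge space is a family of subsets closed under union, and the items a learner can add next from a given state capture "what the student is ready to learn".

```python
# A toy knowledge space: feasible states are subsets of the domain Q,
# and the family must be closed under union (and contain both {} and Q).
from itertools import combinations

Q = frozenset({"counting", "addition", "subtraction", "multiplication"})
states = {
    frozenset(),
    frozenset({"counting"}),
    frozenset({"counting", "addition"}),
    frozenset({"counting", "subtraction"}),
    frozenset({"counting", "addition", "subtraction"}),
    Q,
}

def is_union_closed(family) -> bool:
    """Check the defining closure property of a knowledge space."""
    return all((a | b) in family for a, b in combinations(family, 2))

def ready_to_learn(state, family):
    """Items that can be added to this state while staying feasible."""
    return {q for q in Q - state if (state | {q}) in family}

print(is_union_closed(states))                        # True
print(ready_to_learn(frozenset({"counting"}), states))
# {'addition', 'subtraction'} -- but not 'multiplication' yet
```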
Document 2:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam with no computer-based version. ETS administered the exam three times per year: once in April, once in October, and once in November. Some graduate programs in the United States recommended taking this exam, while others required the score as part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. The test contained 180 questions.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS made the decision not to administer the test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 3:::
The SAT Subject Test in Biology was a one-hour multiple-choice test on biology given by the College Board. A student chose whether to take the test depending upon the college entrance requirements of the schools to which the student planned to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT Subject Tests, the Biology E/M test was the only one that allowed the test taker a choice between two variants, ecological (E) or molecular (M). A common set of 60 questions was taken by all test takers, with a further 20 questions drawn from either the E or M section. The test was graded on a scale between 200 and 800. The average score for Molecular was 630, while for Ecological it was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done in response to changes in college admissions caused by the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 4:::
Mathematics education in the United States varies considerably from one state to the next, and even within a single state. However, with the adoption of the Common Core Standards in most states and the District of Columbia beginning in 2010, mathematics content across the country has moved into closer agreement for each grade level. The SAT, a standardized university entrance exam, has been reformed to better reflect the contents of the Common Core. However, many students take alternatives to the traditional pathways, including accelerated tracks. As of 2023, twenty-seven states require students to pass three math courses before graduation from high school, and seventeen states and the District of Columbia require four.
Compared to other developed countries in the Organisation for Economic Co-operation and Development (OECD), the average level of mathematical literacy of American students is mediocre. As in many other countries, math scores dropped even further during the COVID-19 pandemic. Secondary-school algebra proves to be the turning point of difficulty many students struggle to surmount, and as such, many students are ill-prepared for collegiate STEM programs, or future high-skilled careers. Meanwhile, the number of eighth-graders enrolled in Algebra I has fallen between the early 2010s and early 2020s. Across the United States, there is a shortage of qualified mathematics instructors. Despite their best intentions, parents may transmit their mathematical anxiety to their children, who may also have school teachers who fear mathematics. About one in five American adults are functionally innumerate. While an overwhelming majority agree that mathematics is important, many, especially the young, are not confident of their own mathematical ability.
Curricular content and standards
Each U.S. state sets its own curricular standards, and details are usually set by each local school district. Although there are no federal standards, since 2015 most states have bas
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the only substance on earth that is stable in all three states?
A. air
B. mercury
C. water
D. carbon
Answer:
|
|
sciq-814
|
multiple_choice
|
What are considered solid lipids that animals use to store energy?
|
[
"acids",
"sugars",
"proteins",
"fats"
] |
D
|
Relavent Documents:
Document 0:::
Animal nutrition focuses on the dietary nutrient needs of animals, primarily those in agriculture and food production, but also in zoos, aquariums, and wildlife management.
Constituents of diet
Macronutrients (excluding fiber and water) provide structural material (amino acids from which proteins are built, and lipids from which cell membranes and some signaling molecules are built) and energy. Some of the structural material can be used to generate energy internally, though the net energy depends on such factors as absorption and digestive effort, which vary substantially from instance to instance. Vitamins, minerals, fiber, and water do not provide energy, but are required for other reasons. A third class of dietary material, fiber (i.e., non-digestible material such as cellulose), also seems to be required, for both mechanical and biochemical reasons, though the exact reasons remain unclear.
Molecules of carbohydrates and fats consist of carbon, hydrogen, and oxygen atoms. Carbohydrates range from simple monosaccharides (glucose, fructose, galactose) to complex polysaccharides (starch). Fats are triglycerides, made of assorted fatty acid monomers bound to glycerol backbone. Some fatty acids, but not all, are essential in the diet: they cannot be synthesized in the body. Protein molecules contain nitrogen atoms in addition to carbon, oxygen, and hydrogen. The fundamental components of protein are nitrogen-containing amino acids. Essential amino acids cannot be made by the animal. Some of the amino acids are convertible (with the expenditure of energy) to glucose and can be used for energy production just as ordinary glucose. By breaking down existing protein, some glucose can be produced internally; the remaining amino acids are discarded, primarily as urea in urine. This occurs normally only during prolonged starvation.
Other dietary substances found in plant foods (phytochemicals, polyphenols) are not identified as essential nutrients but appear to impact healt
Document 1:::
Fat globules (also known as mature lipid droplets) are individual pieces of intracellular fat in human cell biology. The lipid droplet's function is to store energy for the organism's body, and droplets are found in every type of adipocyte. They can consist of a vacuole, a droplet of triglyceride, or any other blood lipid, as opposed to fat cells in between other cells in an organ. They contain a hydrophobic core and are encased in a phospholipid monolayer membrane. Due to their hydrophobic nature, lipids and lipid digestive derivatives must be transported in the globular form within the cell, blood, and tissue spaces.
The formation of a fat globule starts within the membrane bilayer of the endoplasmic reticulum. It starts as a bud and detaches from the ER membrane to join other droplets. After the droplets fuse, a mature droplet (full-fledged globule) is formed and can then partake in neutral lipid synthesis or lipolysis.
Globules of fat are emulsified in the duodenum into smaller droplets by bile salts during food digestion, speeding up the rate of digestion by the enzyme lipase at a later point in digestion. Bile salts possess detergent properties that allow them to emulsify fat globules into smaller emulsion droplets, and then into even smaller micelles. This increases the surface area for lipid-hydrolyzing enzymes to act on the fats.
Micelles are roughly 200 times smaller than fat emulsion droplets, allowing them to facilitate the transport of monoglycerides and fatty acids across the surface of the enterocyte, where absorption occurs.
Milk fat globules (MFGs) are another form of intracellular fat found in the mammary glands of female mammals. Their function is to provide enriching glycoproteins from the female to their offspring. They are formed in the endoplasmic reticulum found in the mammary epithelial lactating cell. The globules are made up of triacylglycerols encased in cellular membranes and proteins like adipophilin and TIP 47. The proteins are spread througho
Document 2:::
A saponifiable lipid contains an ester functional group. Saponifiable lipids are made up of long-chain carboxylic (or fatty) acids connected to an alcohol functional group through the ester linkage, which can undergo a saponification reaction. The fatty acids are released upon base-catalyzed ester hydrolysis to form ionized salts. The primary saponifiable lipids are free fatty acids, neutral glycerolipids, glycerophospholipids, sphingolipids, and glycolipids.
By comparison, the non-saponifiable class of lipids is made up of terpenes, including fat-soluble A and E vitamins, and certain steroids, such as cholesterol.
Applications
Saponifiable lipids have relevant applications as a source of biofuel and can be extracted from various forms of biomass to produce biodiesel.
See also
Lipids
Simple lipid
Document 3:::
Fatty acid metabolism consists of various metabolic processes involving or closely related to fatty acids, a family of molecules classified within the lipid macronutrient category. These processes can mainly be divided into (1) catabolic processes that generate energy and (2) anabolic processes where they serve as building blocks for other compounds.
In catabolism, fatty acids are metabolized to produce energy, mainly in the form of adenosine triphosphate (ATP). When compared to other macronutrient classes (carbohydrates and protein), fatty acids yield the most ATP on an energy per gram basis, when they are completely oxidized to CO2 and water by beta oxidation and the citric acid cycle. Fatty acids (mainly in the form of triglycerides) are therefore the foremost storage form of fuel in most animals, and to a lesser extent in plants.
In anabolism, intact fatty acids are important precursors to triglycerides, phospholipids, second messengers, hormones and ketone bodies. For example, phospholipids, which are built from fatty acids, form the phospholipid bilayers out of which all the membranes of the cell are constructed. Phospholipids comprise the plasma membrane and other membranes that enclose all the organelles within the cells, such as the nucleus, the mitochondria, endoplasmic reticulum, and the Golgi apparatus. In another type of anabolism, fatty acids are modified to form other compounds such as second messengers and local hormones. The prostaglandins made from arachidonic acid stored in the cell membrane are probably the best-known of these local hormones.
Fatty acid catabolism
Fatty acids are stored as triglycerides in the fat depots of adipose tissue. Between meals they are released as follows:
Lipolysis, the removal of the fatty acid chains from the glycerol to which they are bound in their storage form as triglycerides (or fats), is carried out by lipases. These lipases are activated by high epinephrine and glucagon levels in the blood (or norepinephrine secreted by s
Document 4:::
An energy budget is a balance sheet of energy income against expenditure. It is studied in the field of energetics, which deals with the study of energy transfer and transformation from one form to another. The calorie is the basic unit of measurement. An organism in a laboratory experiment is an open thermodynamic system, exchanging energy with its surroundings in three ways: heat, work, and the potential energy of biochemical compounds.
Organisms use ingested food resources (C=consumption) as building blocks in the synthesis of tissues (P=production) and as fuel in the metabolic process that power this synthesis and other physiological processes (R=respiratory loss). Some of the resources are lost as waste products (F=faecal loss, U=urinary loss). All these aspects of metabolism can be represented in energy units. The basic model of energy budget may be shown as:
P = C - R - U - F or
P = C - (R + U + F) or
C = P + R + U + F
All the aspects of metabolism can be represented in energy units (e.g. joules (J); 1 calorie = 4.2 J, or 1 kilocalorie = 4.2 kJ).
Energy used for metabolism will be
R = C - (F + U + P)
Energy used in the maintenance will be
R + F + U = C - P
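These identities are straightforward bookkeeping; the short Python sketch below (the variable names and illustrative kilojoule values are our own, not from the source) shows the budget balancing:

# A minimal sketch of the energy budget identities above; the variable
# names and illustrative kilojoule values are our own, not from the source.
def production(C, R, U, F):
    """P = C - (R + U + F): energy routed into tissue synthesis."""
    return C - (R + U + F)

C, R, U, F = 100.0, 60.0, 5.0, 15.0  # consumption, respiration, urinary, faecal
P = production(C, R, U, F)           # 20.0 kJ available for production
assert abs(C - (P + R + U + F)) < 1e-9  # budget balances: C = P + R + U + F
print(P)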
Endothermy and ectothermy
Energy budget allocation varies for endotherms and ectotherms. Ectotherms rely on the environment as a heat source while endotherms maintain their body temperature through the regulation of metabolic processes. The heat produced in association with metabolic processes facilitates the active lifestyles of endotherms and their ability to travel far distances over a range of temperatures in the search for food. Ectotherms are limited by the ambient temperature of the environment around them but the lack of substantial metabolic heat production accounts for an energetically inexpensive metabolic rate. The energy demands for ectotherms are generally one tenth of that required for endotherms.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are considered solid lipids that animals use to store energy?
A. acids
B. sugars
C. proteins
D. fats
Answer:
|
|
sciq-5232
|
multiple_choice
|
The eardrum is part of what part of the ear?
|
[
"main",
"inner",
"thin",
"outer"
] |
D
|
Relevant Documents:
Document 0:::
Earwax, also known by the medical term cerumen, is a waxy substance secreted in the ear canal of humans and other mammals. Earwax can be many colors, including brown, orange, red, yellowish, and gray. Earwax protects the skin of the human ear canal, assists in cleaning and lubrication, and provides protection against bacteria, fungi, particulate matter, and water.
Major components of earwax include cerumen, produced by a type of modified sweat gland, and sebum, an oily substance. Both components are made by glands located in the outer ear canal. The chemical composition of earwax includes long chain fatty acids, both saturated and unsaturated, alcohols, squalene, and cholesterol. Earwax also contains dead skin cells and hair.
Excess or impacted cerumen is a buildup of earwax that blocks the ear canal; it can press against the eardrum or obstruct the outer ear canal or hearing aids, potentially causing hearing loss.
Physiology
Cerumen is produced in the cartilaginous outer third portion of the ear canal. It is a mixture of secretions from sebaceous glands and less-viscous ones from modified apocrine sweat glands. The primary components of both wet and dry earwax are shed layers of skin, with, on average, 60% of the earwax consisting of keratin, 12–20% saturated and unsaturated long-chain fatty acids, alcohols, squalene and 6–9% cholesterol.
Wet or dry
There are two genetically-determined types of earwax: the wet type, which is dominant, and the dry type, which is recessive. This distinction is caused by a single base change in the "ATP-binding cassette C11 gene". Dry-type individuals are homozygous for adenine (AA) whereas wet-type requires at least one guanine (AG or GG). Dry earwax is gray or tan and brittle, and is about 20% lipid. It has a smaller concentration of lipid and pigment granules than wet earwax. Wet earwax is light brown or dark brown and has a viscous and sticky consistency, and is about 50% lipid. Wet-type earwax is associated
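The dominance rule above is simple enough to state as code; the small helper below is our own illustration, not from the source:

# Sketch of the single-base earwax rule described above: dry type is
# recessive (AA), wet type needs at least one guanine allele (AG or GG).
def earwax_phenotype(genotype):
    alleles = set(genotype.upper())
    if not alleles <= {"A", "G"}:
        raise ValueError("expected A and/or G alleles at this locus")
    return "wet" if "G" in alleles else "dry"

for g in ("AA", "AG", "GG"):
    print(g, earwax_phenotype(g))  # AA dry, AG wet, GG wet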
Document 1:::
In the anatomy of humans and various other tetrapods, the eardrum, also called the tympanic membrane or myringa, is a thin, cone-shaped membrane that separates the external ear from the middle ear. Its function is to transmit sound from the air to the ossicles inside the middle ear, and then to the oval window in the fluid-filled cochlea. Hence, it ultimately converts and amplifies vibration in the air to vibration in cochlear fluid. The malleus bone bridges the gap between the eardrum and the other ossicles.
Rupture or perforation of the eardrum can lead to conductive hearing loss. Collapse or retraction of the eardrum can cause conductive hearing loss or cholesteatoma.
Structure
Orientation and relations
The tympanic membrane is oriented obliquely in the anteroposterior, mediolateral, and superoinferior planes. Consequently, its superoposterior end lies lateral to its anteroinferior end.
Anatomically, it relates superiorly to the middle cranial fossa, posteriorly to the ossicles and facial nerve, inferiorly to the parotid gland, and anteriorly to the temporomandibular joint.
Regions
The eardrum is divided into two general regions: the pars flaccida and the pars tensa.
The relatively fragile pars flaccida lies above the lateral process of the malleus between the notch of Rivinus and the anterior and posterior malleal folds. Consisting of two layers and appearing slightly pinkish in hue, it is associated with Eustachian tube dysfunction and cholesteatomas.
The larger pars tensa consists of three layers: skin, fibrous tissue, and mucosa. Its thick periphery forms a fibrocartilaginous ring called the annulus tympanicus or Gerlach's ligament, while the central umbo tents inward at the level of the tip of the malleus. The middle fibrous layer, containing radial, circular, and parabolic fibers, encloses the handle of the malleus. Though comparatively robust, the pars tensa is the region more commonly associated with perforations.
Umbo
The manubrium () of the malleus is f
Document 2:::
Audiology (from Latin audire, "to hear", and Greek -logia) is a branch of science that studies hearing, balance, and related disorders. Audiologists treat those with hearing loss and proactively prevent related damage. By employing various testing strategies (e.g. behavioral hearing tests, otoacoustic emission measurements, and electrophysiologic tests), audiologists aim to determine whether someone has normal sensitivity to sounds. If hearing loss is identified, audiologists determine which portions of hearing (high, middle, or low frequencies) are affected, to what degree (severity of loss), and where the lesion causing the hearing loss is found (outer ear, middle ear, inner ear, auditory nerve and/or central nervous system). If an audiologist determines that a hearing loss or vestibular abnormality is present, they will provide recommendations for interventions or rehabilitation (e.g. hearing aids, cochlear implants, appropriate medical referrals).
In addition to diagnosing audiologic and vestibular pathologies, audiologists can also specialize in rehabilitation of tinnitus, hyperacusis, misophonia, auditory processing disorders, cochlear implant use and/or hearing aid use. Audiologists can provide hearing health care from birth to end-of-life.
Audiologist
An audiologist is a health care provider specializing in identifying, diagnosing, treating, and monitoring disorders of the auditory and vestibular systems. Audiologists are trained to diagnose, manage and/or treat hearing, tinnitus, or balance problems. They dispense, manage, and rehabilitate hearing aids and assess candidacy for and map hearing implants, such as cochlear implants, middle ear implants and bone conduction implants. They counsel families through a new diagnosis of hearing loss in infants, and help teach coping and compensation skills to late-deafened adults. They also help design and implement personal and industrial hearing safety programs, newborn hearing screening programs, school hearing
Document 3:::
A middle ear implant is a hearing device that is surgically implanted into the middle ear. They help people with conductive, sensorineural or mixed hearing loss to hear.
Middle ear implants work by improving the conduction of sound vibrations from the middle ear to the inner ear. There are two types of middle ear devices: active and passive. Active middle ear implants (AMEI) consist of an external audio processor and an internal implant, which actively vibrates the structures of the middle ear. Passive middle ear implants (PMEIs) are sometimes known as ossicular replacement prostheses, TORPs or PORPs. They replace damaged or missing parts of the middle ear, creating a bridge between the outer ear and the inner ear, so that sound vibrations can be conducted through the middle ear and on to the cochlea. Unlike AMEIs, PMEIs contain no electronics and are not powered by an external source.
PMEIs are the usual first-line surgical treatment for conductive hearing loss, due to their lack of external components and cost-effectiveness. However, each patient is assessed individually as to whether an AMEI or PMEI would bring more benefit. This is especially true if the patient has already had several surgeries with PMEIs.
Active middle ear implant
Parts
An active middle ear implant (AMEI) has two parts: an internal implant and an external audio processor. The microphone of the audio processor picks up sounds from the environment. The processor then converts these acoustic signals into digital signals and sends them to the implant through the skin. The implant sends the signals to the Floating Mass Transducer (FMT): a small vibratory part that is surgically fixed either on one of the three ossicles or against the round window of the cochlea. The FMT vibrates and sends sound vibrations to the cochlea. The cochlea converts these vibrations into nerve signals and sends them to the brain, where they are interpreted as sound.
Indications
AMEIs are intended for patients wit
Document 4:::
The middle ear is the portion of the ear medial to the eardrum, and distal to the oval window of the cochlea (of the inner ear).
The mammalian middle ear contains three ossicles (malleus, incus, and stapes), which transfer the vibrations of the eardrum into waves in the fluid and membranes of the inner ear. The hollow space of the middle ear is also known as the tympanic cavity and is surrounded by the tympanic part of the temporal bone. The auditory tube (also known as the Eustachian tube or the pharyngotympanic tube) joins the tympanic cavity with the nasal cavity (nasopharynx), allowing pressure to equalize between the middle ear and throat.
The primary function of the middle ear is to efficiently transfer acoustic energy from compression waves in air to fluid–membrane waves within the cochlea.
Structure
Ossicles
The middle ear contains three tiny bones known as the ossicles: malleus, incus, and stapes. The ossicles were given their Latin names for their distinctive shapes; they are also referred to as the hammer, anvil, and stirrup, respectively. The ossicles directly couple sound energy from the eardrum to the oval window of the cochlea. While the stapes is present in all tetrapods, the malleus and incus evolved from lower and upper jaw bones present in reptiles.
The ossicles are classically supposed to mechanically convert the vibrations of the eardrum into amplified pressure waves in the fluid of the cochlea (or inner ear), with a lever arm factor of 1.3. Since the effective vibratory area of the eardrum is about 14 fold larger than that of the oval window, the sound pressure is concentrated, leading to a pressure gain of at least 18.1. The eardrum is merged to the malleus, which connects to the incus, which in turn connects to the stapes. Vibrations of the stapes footplate introduce pressure waves in the inner ear. There is a steadily increasing body of evidence that shows that the lever arm ratio is actually variable, depending on frequency. Betwe
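The quoted figure follows from multiplying the two factors in the classical account; a quick check (our own arithmetic, not from the source):

# Quick check of the classical middle-ear pressure gain quoted above:
# gain ~= (eardrum area / oval window area) * ossicular lever ratio.
area_ratio = 14.0     # eardrum effective area is ~14x the oval window's
lever_ratio = 1.3     # classical ossicular lever arm factor
pressure_gain = area_ratio * lever_ratio
print(pressure_gain)  # 18.2, consistent with the "at least 18.1" figure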
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The eardrum is part of what part of the ear?
A. main
B. inner
C. thin
D. outer
Answer:
|
|
sciq-2055
|
multiple_choice
|
There are five primary tastes in humans: sweet, sour, bitter, salty, and what?
|
[
"tangy",
"umami",
"hot",
"aroma"
] |
B
|
Relevant Documents:
Document 0:::
A taste receptor is a type of cellular receptor which facilitates the sensation of taste. When food or other substances enter the mouth, molecules interact with saliva and are bound to taste receptors in the oral cavity and other locations. Molecules which give a sensation of taste are considered "sapid".
Vertebrate taste receptors are divided into two families:
Type 1, sweet, first characterized in 2001: TAS1R1 – TAS1R3.
Type 2, bitter, first characterized in 2000: in humans there are 25 known different bitter receptors, in cats there are 12, in chickens there are three, and in mice there are 35.
Visual, olfactory, "sapictive" (the perception of tastes), trigeminal (hot, cool), and mechanical inputs all contribute to the perception of taste. Of these, transient receptor potential cation channel subfamily V member 1 (TRPV1) vanilloid receptors are responsible for the perception of heat from some molecules such as capsaicin, and a CMR1 receptor is responsible for the perception of cold from molecules such as menthol, eucalyptol, and icilin.
Tissue distribution
The gustatory system consists of taste receptor cells in taste buds. Taste buds, in turn, are contained in structures called papillae. There are three types of papillae involved in taste: fungiform papillae, foliate papillae, and circumvallate papillae. (The fourth type - filiform papillae do not contain taste buds). Beyond the papillae, taste receptors are also in the palate and early parts of the digestive system like the larynx and upper esophagus. There are three cranial nerves that innervate the tongue; the vagus nerve, glossopharyngeal nerve, and the facial nerve. The glossopharyngeal nerve and the chorda tympani branch of the facial nerve innervate the TAS1R and TAS2R taste receptors. Next to the taste receptors in on the tongue, the gut epithelium is also equipped with a subtle chemosensory system that communicates the sensory information to several effector systems involved
Document 1:::
The gustatory system or sense of taste is the sensory system that is partially responsible for the perception of taste (flavor). Taste is the perception stimulated when a substance in the mouth reacts chemically with taste receptor cells located on taste buds in the oral cavity, mostly on the tongue. Taste, along with the sense of smell and trigeminal nerve stimulation (registering texture, pain, and temperature), determines flavors of food and other substances. Humans have taste receptors on taste buds and other areas, including the upper surface of the tongue and the epiglottis. The gustatory cortex is responsible for the perception of taste.
The tongue is covered with thousands of small bumps called papillae, which are visible to the naked eye. Within each papilla are hundreds of taste buds. The exception to this is the filiform papillae that do not contain taste buds. There are between 2000 and 5000 taste buds that are located on the back and front of the tongue. Others are located on the roof, sides and back of the mouth, and in the throat. Each taste bud contains 50 to 100 taste receptor cells.
Taste receptors in the mouth sense the five basic tastes: sweetness, sourness, saltiness, bitterness, and savoriness (also known as savory or umami). Scientific experiments have demonstrated that these five tastes exist and are distinct from one another. Taste buds are able to tell different tastes apart when they interact with different molecules or ions. Sweetness, savoriness, and bitter tastes are triggered by the binding of molecules to G protein-coupled receptors on the cell membranes of taste buds. Saltiness and sourness are perceived when alkali metals or hydrogen ions meet taste buds, respectively.
The basic tastes contribute only partially to the sensation and flavor of food in the mouth—other factors include smell, detected by the olfactory epithelium of the nose; texture, detected through a variety of mechanoreceptors, muscle nerves, etc.; temperature, det
Document 2:::
The primary gustatory cortex (GC) is a brain structure responsible for the perception of taste. It consists of two substructures: the anterior insula on the insular lobe and the frontal operculum on the inferior frontal gyrus of the frontal lobe. Because of its composition the primary gustatory cortex is sometimes referred to in literature as the AI/FO(Anterior Insula/Frontal Operculum). By using extracellular unit recording techniques, scientists have elucidated that neurons in the AI/FO respond to sweetness, saltiness, bitterness, and sourness, and they code the intensity of the taste stimulus.
Role in the taste pathway
Like the olfactory system, the taste system is defined by its specialized peripheral receptors and central pathways that relay and process taste information. Peripheral taste receptors are found on the upper surface of the tongue, soft palate, pharynx, and the upper part of the esophagus. Taste cells synapse with primary sensory axons that run in the chorda tympani and greater superficial petrosal branches of the facial nerve (cranial nerve VII), the lingual branch of the glossopharyngeal nerve (cranial nerve IX), and the superior laryngeal branch of the vagus nerve (Cranial nerve X) to innervate the taste buds in the tongue, palate, epiglottis, and esophagus respectively. The central axons of these primary sensory neurons in the respective cranial nerve ganglia project to rostral and lateral regions of the nucleus of the solitary tract in the medulla, which is also known as the gustatory nucleus of the solitary tract complex. Axons from the rostral (gustatory) part of the solitary nucleus project to the ventral posterior complex of the thalamus, where they terminate in the medial half of the ventral posterior medial nucleus. This nucleus projects in turn to several regions of the neocortex which includes the gustatory cortex (the frontal operculum and the insula), which becomes activated when the subject is consuming and experiencing t
Document 3:::
Aftertaste is the taste intensity of a food or beverage that is perceived immediately after that food or beverage is removed from the mouth. The aftertastes of different foods and beverages can vary by intensity and over time, but the unifying feature of aftertaste is that it is perceived after a food or beverage is either swallowed or spat out. The neurobiological mechanisms of taste (and aftertaste) signal transduction from the taste receptors in the mouth to the brain have not yet been fully understood. However, the primary taste processing area located in the insula has been observed to be involved in aftertaste perception.
Temporal taste perception
Characteristics of a food's aftertaste are quality, intensity, and duration. Quality describes the actual taste of a food and intensity conveys the magnitude of that taste. Duration describes how long a food's aftertaste sensation lasts. Foods that have lingering aftertastes typically have long sensation durations.
Because taste perception is unique to every person, descriptors for taste quality and intensity have been standardized, particularly for use in scientific studies. For taste quality, foods can be described by the commonly used terms "sweet", "sour", "salty", "bitter", "umami", or "no taste". Description of aftertaste perception relies heavily upon the use of these words to convey the taste that is being sensed after a food has been removed from the mouth.
The description of taste intensity is also subject to variability among individuals. Variations of the Borg Category Ratio Scale or other similar metrics are often used to assess the intensities of foods. The scales typically have categories that range from either zero or one through ten (or sometimes beyond ten) that describe the taste intensity of a food. A score of zero or one would correspond to unnoticeable or weak taste intensities, while a higher score would correspond to moderate or strong taste intensities. It is the prolonged moderate or stro
Document 4:::
Beer tasting is a way to learn more about the history, ingredients and production of beer as well as different beer styles, hops, yeast and beer presentation. A common way is to analyse the appearance, smell and taste of the beer. Then a final judgement of the beer's quality is done. There are many scales for rating beer among beer journalists and beer experts. Different magazines and experts often use their own scale, for example the famous British sommelier Jancis Robinson uses a scale between 1 and 20 and the famous American sommelier Joshua M. Bernstein uses a scale between 1 and 100. However it is common for professional organisations such as the Wine & Spirit Education Trust to rate beer with verbal grades: faulty - poor - acceptable - good - very good - outstanding, corresponding to a scale from 1 to 5.
Themes
First, a selection of beers is chosen for the tasting. A theme can be for example Belgian beers or a selection of beers of varying bitterness. Beers are often tasted in an order from lightest to heaviest, driest to sweetest and cheapest to most expensive. This forms a basic structure of the tasting, but it is more important to organise the tasting according to how the human tastebuds work. As tasting progresses, the tastebuds become less sensitive and can even be anaesthesised. After the beers have been chosen, suitable snacks are provided and information about each beer producer and region is given. To make the tasting diverse, four to six different beers to taste at the same time are generally provided. Interesting tasting themes can be for example different types of beer such as stout, wheat beer or India pale ale, different countries such as Belgian beers or American pale ales. It is also common to couple the beers with various food, such as a tasting of beers and cheeses.
Glass
Choosing a glass for beer tasting is more important than one might think. An ISO standard tasting glass is often used in professional tastings, which is the standard for t
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
There are five primary tastes in humans: sweet, sour, bitter, salty, and what?
A. tangy
B. umami
C. hot
D. aroma
Answer:
|
|
sciq-7120
|
multiple_choice
|
Synostoses unite the sacral vertebrae that fuse together to form the what?
|
[
"permanent sacrum",
"young sacrum",
"adult sacrum",
"lower sacrum"
] |
C
|
Relevant Documents:
Document 0:::
The sacrum (plural: sacra or sacrums), in human anatomy, is a large, triangular bone at the base of the spine that forms by the fusing of the sacral vertebrae (S1–S5) between ages 18 and 30.
The sacrum is situated at the upper, back part of the pelvic cavity, between the two wings of the pelvis. It forms joints with four other bones. The two projections at the sides of the sacrum are called the alae (wings), and articulate with the ilium at the L-shaped sacroiliac joints. The upper part of the sacrum connects with the last lumbar vertebra (L5), and its lower part with the coccyx (tailbone) via the sacral and coccygeal cornua.
The sacrum has three different surfaces which are shaped to accommodate surrounding pelvic structures. Overall it is concave (curved upon itself). The base of the sacrum, the broadest and uppermost part, is tilted forward as the sacral promontory internally. The central part is curved outward toward the posterior, allowing greater room for the pelvic cavity.
In all other quadrupedal vertebrates, the pelvic vertebrae undergo a similar developmental process to form a sacrum in the adult, even while the bony tail (caudal) vertebrae remain unfused. The number of sacral vertebrae varies slightly. For instance, the S1–S5 vertebrae of a horse will fuse, the S1–S3 of a dog will fuse, and four pelvic vertebrae of a rat will fuse between the lumbar and the caudal vertebrae of its tail.
The Stegosaurus dinosaur had a greatly enlarged neural canal in the sacrum, characterized as a "posterior brain case".
Structure
The sacrum is a complex structure providing support for the spine and accommodation for the spinal nerves. It also articulates with the hip bones. The sacrum has a base, an apex, and three surfaces – a pelvic, dorsal and a lateral surface. The base of the sacrum, which is broad and expanded, is directed upward and forward. On either side of the base is a large projection known as an ala of sacrum and these alae (wings) articulate with the sacroiliac joi
Document 1:::
The spinalis is a portion of the erector spinae, a bundle of muscles and tendons, located nearest to the spine. It is divided into three parts: Spinalis dorsi, spinalis cervicis, and spinalis capitis.
Spinalis dorsi
Spinalis dorsi, the medial continuation of the sacrospinalis, is scarcely separable as a distinct muscle. It is situated at the medial side of the longissimus dorsi, and is intimately blended with it; it arises by three or four tendons from the spinous processes of the first two lumbar and the last two thoracic vertebrae: these, uniting, form a small muscle which is inserted by separate tendons into the spinous processes of the upper thoracic vertebrae, the number varying from four to eight.
It is intimately united with the semispinalis dorsi, situated beneath it.
Spinalis cervicis
Spinalis cervicis, or spinalis colli, is an inconstant muscle, which arises from the lower part of the nuchal ligament, the spinous process of the seventh cervical, and sometimes from the spinous processes of the first and second thoracic vertebrae, and is inserted into the spinous process of the axis, and occasionally into the spinous processes of the two cervical vertebrae below it.
Spinalis capitis
Spinalis capitis (biventer cervicis) is usually inseparably connected with the semispinalis capitis.
Spinalis capitis is not well characterized in modern anatomy textbooks and atlases, and is often omitted from anatomical illustration. However, it can be identified as fibers that extend from the spinous processes of TV1 and CV7 to the cranium, often blending with semispinalis capitis.
See also
Iliocostalis
Longissimus
Semispinalis muscle
Document 2:::
The lumbar trunks are formed by the union of the efferent vessels from the lateral aortic lymph nodes.
They receive the lymph from the lower limbs, from the walls and viscera of the pelvis, from the kidneys and suprarenal glands and the deep lymphatics of the greater part of the abdominal wall.
Ultimately, the lumbar trunks empty into the cisterna chyli, a dilatation at the beginning of the thoracic duct.
Document 3:::
Each vertebra (: vertebrae) is an irregular bone with a complex structure composed of bone and some hyaline cartilage, that make up the vertebral column or spine, of vertebrates. The proportions of the vertebrae differ according to their spinal segment and the particular species.
The basic configuration of a vertebra varies; the bone is the body, and the central part of the body is the centrum. The upper and lower surfaces of the vertebra body give attachment to the intervertebral discs. The posterior part of a vertebra forms a vertebral arch, in eleven parts, consisting of two pedicles (pedicle of vertebral arch), two laminae, and seven processes. The laminae give attachment to the ligamenta flava (ligaments of the spine). There are vertebral notches formed from the shape of the pedicles, which form the intervertebral foramina when the vertebrae articulate. These foramina are the entry and exit conduits for the spinal nerves. The body of the vertebra and the vertebral arch form the vertebral foramen, the larger, central opening that accommodates the spinal canal, which encloses and protects the spinal cord.
Vertebrae articulate with each other to give strength and flexibility to the spinal column, and the shape at their back and front aspects determines the range of movement. Structurally, vertebrae are essentially alike across the vertebrate species, with the greatest difference seen between an aquatic animal and other vertebrate animals. As such, vertebrates take their name from the vertebrae that compose the vertebral column.
Structure
General structure
In the human vertebral column the size of the vertebrae varies according to placement in the vertebral column, spinal loading, posture and pathology. Along the length of the spine the vertebrae change to accommodate different needs related to stress and mobility. Each vertebra is an irregular bone.
Every vertebra has a body (vertebral body), which consists of a large anterior middle portion called the cen
Document 4:::
The splenius capitis is a broad, straplike muscle in the back of the neck. It pulls on the base of the skull from the vertebrae in the neck and upper thorax. It is involved in movements such as shaking the head.
Structure
It arises from the lower half of the nuchal ligament, from the spinous process of the seventh cervical vertebra, and from the spinous processes of the upper three or four thoracic vertebrae.
The fibers of the muscle are directed upward and laterally and are inserted, under cover of the sternocleidomastoideus, into the mastoid process of the temporal bone, and into the rough surface on the occipital bone just below the lateral third of the superior nuchal line. The splenius capitis is deep to sternocleidomastoideus at the mastoid process, and to the trapezius for its lower portion. It is one of the muscles that forms the floor of the posterior triangle of the neck.
The splenius capitis muscle is innervated by the posterior ramus of spinal nerves C3 and C4.
Function
The splenius capitis muscle is a prime mover for head extension. The splenius capitis can also allow lateral flexion and rotation of the cervical spine.
Additional images
See also
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Synostoses unite the sacral vertebrae that fuse together to form the what?
A. permanent sacrum
B. young sacrum
C. adult sacrum
D. lower sacrum
Answer:
|
|
ai2_arc-781
|
multiple_choice
|
Which of the following changes occurs as a solid is heated?
|
[
"The kinetic energy of the solid decreases.",
"The average density of the solid increases.",
"The specific heat capacity of the solid decreases.",
"The average molecular speed in the solid increases."
] |
D
|
Relevant Documents:
Document 0:::
Thermofluids is a branch of science and engineering encompassing four intersecting fields:
Heat transfer
Thermodynamics
Fluid mechanics
Combustion
The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids".
Heat transfer
Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer.
Sections include:
Energy transfer by heat, work and mass
Laws of thermodynamics
Entropy
Refrigeration Techniques
Properties and nature of pure substances
Applications
Engineering: Predicting and analysing the performance of machines
Thermodynamics
Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems.
Fluid mechanics
Fluid mechanics is the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance.
Sections include:
Flu
Document 1:::
A phase-change material (PCM) is a substance which releases/absorbs sufficient energy at phase transition to provide useful heat or cooling. Generally the transition will be from one of the first two fundamental states of matter - solid and liquid - to the other. The phase transition may also be between non-classical states of matter, such as the conformity of crystals, where the material goes from conforming to one crystalline structure to conforming to another, which may be a higher or lower energy state.
The energy released/absorbed by phase transition from solid to liquid, or vice versa, the heat of fusion is generally much higher than the sensible heat. Ice, for example, requires 333.55 J/g to melt, but then water will rise one degree further with the addition of just 4.18 J/g. Water/ice is therefore a very useful phase change material and has been used to store winter cold to cool buildings in summer since at least the time of the Achaemenid Empire.
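The ice figures above make the latent/sensible contrast easy to quantify; the following back-of-the-envelope sketch is our own:

# Comparison of latent vs. sensible heat for water/ice, using the
# per-gram figures quoted above.
latent_fusion = 333.55  # J/g absorbed when ice melts at 0 degrees C
sensible = 4.18         # J/g per degree C for liquid water

# Melting 1 g of ice stores as much energy as heating that gram of
# water through roughly 80 degrees C:
equivalent_rise = latent_fusion / sensible
print(round(equivalent_rise, 1))  # ~79.8 degrees C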
By melting and solidifying at the phase-change temperature (PCT), a PCM is capable of storing and releasing large amounts of energy compared to sensible heat storage. Heat is absorbed or released when the material changes from solid to liquid and vice versa or when the internal structure of the material changes; PCMs are accordingly referred to as latent heat storage (LHS) materials.
There are two principal classes of phase-change material: organic (carbon-containing) materials derived either from petroleum, from plants or from animals; and salt hydrates, which generally either use natural salts from the sea or from mineral deposits or are by-products of other processes. A third class is solid to solid phase change.
PCMs are used in many different commercial applications where energy storage and/or stable temperatures are required, including, among others, heating pads, cooling for telephone switching boxes, and clothing.
By far the biggest potential market is for building heating and cooling. In this ap
Document 2:::
A cooling curve is a line graph that represents the change of phase of matter, typically from a gas to a solid or a liquid to a solid. The independent variable (X-axis) is time and the dependent variable (Y-axis) is temperature. Below is an example of a cooling curve used in castings.
The initial point of the graph is the starting temperature of the matter, here noted as the "pouring temperature". When the phase change occurs, there is a "thermal arrest"; that is, the temperature stays constant. This is because the matter has more internal energy as a liquid or gas than in the state that it is cooling to. The amount of energy required for a phase change is known as latent heat. The "cooling rate" is the slope of the cooling curve at any point.
Alloys have a melting range rather than a single melting point, and they solidify as described above: the molten alloy first cools to the liquidus temperature, where the freezing range begins, and at the solidus temperature the alloy becomes fully solid.
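Because the cooling rate is just the local slope of the temperature-time curve, it can be estimated numerically from sampled data; the sketch below uses made-up readings to show how a thermal arrest appears as a zero-slope plateau:

# Estimate the cooling rate (slope of the cooling curve) by finite
# differences; a near-zero slope marks the thermal arrest during the
# phase change. The sample readings below are made up.
times = [0, 1, 2, 3, 4, 5, 6, 7]                  # minutes
temps = [900, 800, 700, 660, 660, 660, 600, 520]  # degrees C

rates = [(temps[i + 1] - temps[i]) / (times[i + 1] - times[i])
         for i in range(len(times) - 1)]
arrest = [i for i, r in enumerate(rates) if r == 0]
print(rates)   # per-interval cooling rates; zeros flag the arrest
print(arrest)  # intervals 3 and 4: the plateau at 660 degrees C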
Document 3:::
Heatwork is the combined effect of temperature and time. It is important to several industries:
Ceramics
Glass and metal annealing
Metal heat treating
Pyrometric devices can be used to gauge heat work as they deform or contract due to heatwork to produce temperature equivalents. Within tolerances, firing can be undertaken at lower temperatures for a longer period to achieve comparable results. When the amount of heatwork of two firings is the same, the pieces may look identical, but there may be differences not visible, such as mechanical strength and microstructure. Heatwork is taught in material science courses, but is not a precise measurement or a valid scientific concept.
External links
Temperature equivalents table & description of Bullers Rings.
Temperature equivalents table & description of Nimra Cerglass pyrometric cones.
Temperature equivalents table & description of Orton pyrometric cones.
Temperature equivalents table of Seger pyrometric cones.
Temperature Equivalents, °F & °C for Bullers Ring.
Glass physics
Pottery
Metallurgy
Ceramic engineering
Document 4:::
Heat transfer physics describes the kinetics of energy storage, transport, and energy transformation by principal energy carriers: phonons (lattice vibration waves), electrons, fluid particles, and photons. Heat is energy stored in temperature-dependent motion of particles including electrons, atomic nuclei, individual atoms, and molecules. Heat is transferred to and from matter by the principal energy carriers. The state of energy stored within matter, or transported by the carriers, is described by a combination of classical and quantum statistical mechanics. Energy is converted among the various carriers.
The heat transfer processes (or kinetics) are governed by the rates at which various related physical phenomena occur, such as (for example) the rate of particle collisions in classical mechanics. These various states and kinetics determine the heat transfer, i.e., the net rate of energy storage or transport. Governing these processes from the atomic level (atom or molecule length scale) to the macroscale are the laws of thermodynamics, including conservation of energy.
Introduction
Heat is thermal energy associated with temperature-dependent motion of particles. The macroscopic energy equation for an infinitesimal volume used in heat transfer analysis is
∇ · q = −∂(ρ c_p T)/∂t + Σ_{i,j} ṡ_{i-j},
where q is the heat flux vector, −∂(ρ c_p T)/∂t is the temporal change of internal energy (ρ is density, c_p is the specific heat capacity at constant pressure, T is temperature and t is time), and ṡ_{i-j} is the energy conversion to and from thermal energy (i and j denote the principal energy carriers). The terms thus represent energy transport, storage and transformation. The heat flux vector q is composed of three macroscopic fundamental modes, which are conduction (q_k = −k ∇T, k: thermal conductivity), convection (q_u = ρ c_p u T, u: velocity), and radiation (q_r = ∫ s I_{ph,ω} sin θ dθ dω, ω: angular frequency, θ: polar angle, I_{ph,ω}: spectral, directional radiation intensity, s: unit vector), i.e., q = q_k + q_u + q_r.
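As a concrete instance of the conduction mode, a one-dimensional Fourier-law flux can be evaluated directly; the material and geometry values below are our own, chosen for illustration:

# 1-D instance of the conduction term above, Fourier's law q_k = -k dT/dx.
k = 0.6                       # thermal conductivity, W/(m*K) (roughly water)
T_hot, T_cold = 80.0, 20.0    # face temperatures, degrees C
L = 0.01                      # slab thickness, m
dT_dx = (T_cold - T_hot) / L  # temperature gradient along x, K/m (negative)
q_k = -k * dT_dx              # heat flux, W/m^2, directed from hot to cold
print(q_k)                    # 3600.0 W/m^2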
Once states and kinetics of the energy conversion and thermophysical properties are known, the fate of heat
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which of the following changes occurs as a solid is heated?
A. The kinetic energy of the solid decreases.
B. The average density of the solid increases.
C. The specific heat capacity of the solid decreases.
D. The average molecular speed in the solid increases.
Answer:
|
|
sciq-4979
|
multiple_choice
|
What is the biggest group of animals on the planet?
|
[
"mammles",
"arthropods",
"carnivores",
"herbivores"
] |
B
|
Relevant Documents:
Document 0:::
Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from 8.5 micrometres to 33.6 metres. They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology.
Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago.
Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad
Document 1:::
Animals are multicellular eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, are able to move, reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million in total. Animals range in size from 8.5 millionths of a metre to 33.6 metres long and have complex interactions with each other and their environments, forming intricate food webs. The study of animals is called zoology.
Animals may be listed or indexed by many criteria, including taxonomy, status as endangered species, their geographical location, and their portrayal and/or naming in human culture.
By common name
List of animal names (male, female, young, and group)
By aspect
List of common household pests
List of animal sounds
List of animals by number of neurons
By domestication
List of domesticated animals
By eating behaviour
List of herbivorous animals
List of omnivores
List of carnivores
By endangered status
IUCN Red List endangered species (Animalia)
United States Fish and Wildlife Service list of endangered species
By extinction
List of extinct animals
List of extinct birds
List of extinct mammals
List of extinct cetaceans
List of extinct butterflies
By region
Lists of amphibians by region
Lists of birds by region
Lists of mammals by region
Lists of reptiles by region
By individual (real or fictional)
Real
Lists of snakes
List of individual cats
List of oldest cats
List of giant squids
List of individual elephants
List of historical horses
List of leading Thoroughbred racehorses
List of individual apes
List of individual bears
List of giant pandas
List of individual birds
List of individual bovines
List of individual cetaceans
List of individual dogs
List of oldest dogs
List of individual monkeys
List of individual pigs
List of w
Document 2:::
In zoology, megafauna (from Greek μέγας megas "large" and Neo-Latin fauna "animal life") are large animals. The most common thresholds for megafauna are weighing over 45 kilograms (i.e., having a mass comparable to or larger than a human) or weighing over a tonne (i.e., having a mass comparable to or larger than an ox). The first of these includes many species not popularly thought of as overly large, often among the only large animals left in a given range or area, such as white-tailed deer, Thomson's gazelle, and red kangaroo.
In practice, the most common usage encountered in academic and popular writing describes land mammals roughly larger than a human that are not (solely) domesticated. The term is especially associated with the Pleistocene megafauna – the land animals that are considered archetypical of the last ice age, such as mammoths, the majority of which in northern Eurasia, Australia-New Guinea and the Americas became extinct within the last forty thousand years.
Among living animals, the term megafauna is most commonly used for the largest extant terrestrial mammals, which includes (but is not limited to) elephants, giraffes, hippopotamuses, rhinoceroses, and large bovines. Of these five categories of large herbivores, only bovines are presently found outside of Africa and southern Asia, but all the others were formerly more wide-ranging, with their ranges and populations continually shrinking and decreasing over time. Wild equines are another example of megafauna, but their current ranges are largely restricted to the Old World, specifically Africa and Asia. Megafaunal species may be categorized according to their dietary type: megaherbivores (e.g., elephants), megacarnivores (e.g., lions), and, more rarely, megaomnivores (e.g., bears). The megafauna is also categorized by the class of animals that it belongs to, which are mammals, birds, reptiles, amphibians, fish, and invertebrates.
Other common uses are for giant aquatic species, especially whales, as
Document 3:::
In zoology, mammalogy is the study of mammals – a class of vertebrates with characteristics such as homeothermic metabolism, fur, four-chambered hearts, and complex nervous systems. Mammalogy has also been known as "mastology," "theriology," and "therology." The catalogue of known mammal species is constantly growing and currently stands at 6,495, including recently extinct species. There are 5,416 living mammal species identified on earth, and roughly 1,251 have been newly discovered since 2006. The major branches of mammalogy include natural history, taxonomy and systematics, anatomy and physiology, ethology, ecology, and management and control. The approximate salary of a mammalogist varies from $20,000 to $60,000 a year, depending on their experience. Mammalogists are typically involved in activities such as conducting research, managing personnel, and writing proposals.
Mammalogy branches off into other taxonomically-oriented disciplines such as primatology (study of primates), and cetology (study of cetaceans). Like other studies, mammalogy is also a part of zoology which is also a part of biology, the study of all living things.
Research purposes
Mammalogists have stated that there are multiple reasons for the study and observation of mammals. Knowing how mammals contribute to or thrive in their ecosystems sheds light on the ecology behind it. Mammals are often used in business industries and agriculture, and are kept as pets. Studying mammals' habitats and sources of energy has aided their survival. The domestication of some small mammals has also helped in the discovery of several different diseases, viruses, and cures.
Mammalogist
A mammalogist studies and observes mammals. In studying mammals, they can observe their habitats, contributions to the ecosystem, their interactions, and the anatomy and physiology. A mammalogist can do a broad variety of things within the realm of mammals. A mammalogist on average can make roughly $58,000 a year. This dep
Document 4:::
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals.
Education
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
Bachelor degree
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
Pre-veterinary emphasis
Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the biggest group of animals on the planet?
A. mammles
B. arthropods
C. carnivores
D. herbivores
Answer:
|
|
sciq-11279
|
multiple_choice
|
What happens to structural genes in the presence of tryptophan?
|
[
"they are exterminated",
"they are transcribed",
"they are not transcribed",
"they are not oxidised"
] |
C
|
Relevant Documents:
Document 0:::
The Biffen Lecture is a lectureship organised by the John Innes Centre, named after Rowland Biffen.
Lecturers
Source: John Innes Centre
2001 John Doebley
2002 Francesco Salamini
2003 Steven D. Tanksley
2004 Michael Freeling
2006 Dick Flavell
2008 Rob Martienssen – 'Propagating silent heterochromatin with RNA interference in plants and fission yeast'
2009 Susan McCouch, Department of Plant Breeding & Genetics, Cornell University – 'Gene flow and genetic isolation during crop evolution'
2010 Peter Langridge, University of Adelaide, Australia – 'Miserable but worth the trouble: Genomics, wheat and difficult environments'
2012 Sarah Hake, Plant Gene Expression Center, USDA-ARS – 'Patterning the maize leaf'
2014 Professor Pamela Ronald, Department of Plant Pathology & The Genome Center, University of California Davis – ‘Engineering crops for resistance to disease and tolerance of stress’
2015 Professor Lord May, Department of Zoology, University of Oxford – ‘Unanswered questions in ecology, and why they matter’
2016 Edward Buckler, US Department of Agriculture – ‘Breeding 4.0? Sorting through the adaptive and deleterious variants in maize and beyond’
See also
Bateson Lecture
Chatt Lecture
Darlington Lecture
Haldane Lecture
List of genetics awards
Document 1:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam; there were no computer-based versions of it. ETS administered this exam three times per year: once in April, once in October, and once in November. Some graduate programs in the United States recommended taking this exam, while others required this exam score as part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
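As an illustration of how the reported statistics can be used, the sketch below estimates a percentile rank from a scaled score, under the simplifying (and unverified) assumption that scores are normally distributed with the quoted mean of 526 and standard deviation of 95; the actual ETS scaling procedure is not published.

```python
from statistics import NormalDist

MEAN, SD = 526, 95  # figures quoted for July 2009 - July 2012 test takers

def approx_percentile(score: float) -> float:
    """Approximate percentile rank, assuming normality (an assumption)."""
    return 100 * NormalDist(mu=MEAN, sigma=SD).cdf(score)

print(f"{approx_percentile(620):.0f}th percentile")  # ~84th under this model
```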
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 2:::
This is a list of topics in molecular biology. See also index of biochemistry articles.
Document 3:::
Function: Maize gene for the first step in the biosynthesis of benzoxazin, which aids in resistance to insect pests, pathogenic fungi and bacteria.
First report: Hamilton 1964, as a mutant sensitive to the herbicide atrazine, and lacking benzoxazinoids (less than 1% of non-mutant plants).
Molecular characterization reveals that the BX1 protein is a homologue of the alpha-subunit of tryptophan synthase. The reference mutant allele has a deletion of about 900 bp, located at the 5'-terminus and comprising sequence upstream of the transcription start site and the first exon. Additional alleles are given by a Mu transposon insertion in the fourth exon (Frey et al. 1997) and a Ds transposon insertion in the maize inbred line W22 genetic background (Betsiashvili et al. 2014). Gene sequence diversity analysis has been performed for 281 inbred lines of maize, and the results suggest that bx1 is responsible for much of the natural variation in DIMBOA (a benzoxazinoid compound) synthesis (Butron et al. 2010). Genetic variation in benzoxazinoid content influences maize resistance to several insect pests (Meihls et al. 2013; McMullen et al. 2009).
Map location
AB chromosome translocation analyses place bx1 on the short arm of chromosome 4 (4S; Simcox and Weber 1985). There is close linkage to other genes in the benzoxazinoid synthesis pathway (bx2, bx3, bx4, bx5; Frey et al. 1995, 1997). Gene bx1 is 2490 bp from bx2 (Frey et al. 1997); between umc123 and agrc94 on 4S (Melanson et al. 1997). Mapping probes: SSR p-umc1022 (Sharopova et al. 2002); Overgo (physical map probe) PCO06449 (Gardiner et al. 2004).
Document 4:::
The following outline is provided as an overview of and topical guide to biochemistry:
Biochemistry – study of chemical processes in living organisms, including living matter. Biochemistry governs all living organisms and living processes.
Applications of biochemistry
Testing
Ames test – Salmonella bacteria are exposed to a chemical under question (a food additive, for example), and changes in the way the bacteria grow are measured. This test is useful for screening chemicals to see if they mutate the structure of DNA and by extension identifying their potential to cause cancer in humans.
Pregnancy test – one uses a urine sample and the other a blood sample. Both detect the presence of the hormone human chorionic gonadotropin (hCG). This hormone is produced by the placenta shortly after implantation of the embryo into the uterine walls and accumulates.
Breast cancer screening – identification of risk by testing for mutations in two genes—Breast Cancer-1 gene (BRCA1) and the Breast Cancer-2 gene (BRCA2)—allow a woman to schedule increased screening tests at a more frequent rate than the general population.
Prenatal genetic testing – testing the fetus for potential genetic defects, to detect chromosomal abnormalities such as Down syndrome or birth defects such as spina bifida.
PKU test – Phenylketonuria (PKU) is a metabolic disorder in which the individual is missing an enzyme called phenylalanine hydroxylase. Absence of this enzyme allows the buildup of phenylalanine, which can lead to mental retardation.
Genetic engineering – taking a gene from one organism and placing it into another. Biochemists inserted the gene for human insulin into bacteria. The bacteria, through the process of translation, create human insulin.
Cloning – Dolly the sheep was the first mammal ever cloned from adult animal cells. The cloned sheep was, of course, genetically identical to the original adult sheep. This clone was created by taking cells from the udder of a six-year-old
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What happens to structural genes in the presence of tryptophan?
A. they are exterminated
B. they are transcribed
C. they are not transcribed
D. they are not oxidised
Answer:
|
|
sciq-856
|
multiple_choice
|
The milky way galaxy is which shape type of galaxy?
|
[
"helical",
"cylindrical",
"spherical",
"spiral"
] |
D
|
Relevant Documents:
Document 0:::
Types
Quasar
Supermassive black hole
Hypercompact stellar system (hypothetical object organized around a supermassive black hole)
Intermediate-mass black holes and candidates
Cigar Galaxy (Messier 82, NGC 3034)
GCIRS 13E
HLX-1
M82 X-1
Messier 15 (NGC 7078)
Messier 110 (NGC 205)
Sculptor Galaxy (NGC 253)
Triangulum Galaxy (Messier 33, NGC 598)
Document 1:::
The Morphs collaboration was a coordinated study to determine the morphologies of galaxies in distant clusters and to investigate the evolution of galaxies as a function of environment and epoch. Eleven clusters were examined and a detailed ground-based and space-based study was carried out.
The project was begun in 1997 based upon the earlier observations by two groups using data from images derived from the pre-refurbished Hubble Space Telescope. It was a collaboration of Alan Dressler and Augustus Oemler, Jr., at the Observatories of the Carnegie Institution of Washington, Warrick J. Couch at the University of New South Wales, Richard Ellis at Caltech, Bianca Poggianti at the University of Padua, Amy Barger at the University of Hawaii's Institute for Astronomy, Harvey Butcher at ASTRON, and Ray M. Sharples and Ian Smail at Durham University. Results were published through 2000.
The collaboration sought answers to the differences in the origins of the various galaxy types — elliptical, lenticular, and spiral. The studies found that elliptical galaxies were the oldest and formed from the violent merger of other galaxies about two to three billion years after the Big Bang. Star formation in elliptical galaxies ceased about that time. On the other hand, new stars are still forming in the spiral arms of spiral galaxies. Lenticular galaxies (S0) are intermediate between the first two. They contain structures similar to spiral arms, but devoid of the gas and new stars of the spiral galaxies. Lenticular galaxies are the prevalent form in rich galaxy clusters, which suggests that spirals may be transformed into lenticular galaxies as time progresses. The exact process may be related to high galactic density, or to the total mass in a rich cluster's central core. The Morphs collaboration found that one of the principal mechanisms of this transformation involves the interaction among spiral galaxies, as they fall toward the core of the cluster.
The Inamori Magellan Areal Camer
Document 2:::
Galactic clusters are gravitationally bound large-scale structures of multiple galaxies. The evolution of these aggregates is determined by the time and manner of their formation and by how their structures and constituents have been changing with time. Gamow (1952) and Weizsäcker (1951) showed that the observed rotations of galaxies are important for cosmology. They postulated that the rotation of galaxies might be a clue to the physical conditions under which these systems formed. Thus, understanding the distribution of spatial orientations of the spin vectors of galaxies is critical to understanding the origin of the angular momenta of galaxies.
There are mainly three scenarios for the origin of galaxy clusters and superclusters. These models are based on different assumptions of the primordial conditions, so they predict different spin vector alignments of the galaxies. The three hypotheses are the pancake model, the hierarchy model, and the primordial vorticity theory. The three are mutually exclusive as they produce contradictory predictions. However, the predictions made by all three theories are based on the precepts of cosmology. Thus, these models can be tested using a database with appropriate methods of analysis.
Galaxies
A galaxy is a large gravitational aggregation of stars, dust, gas, and an unknown component termed dark matter. The Milky Way Galaxy is only one of the billions of galaxies in the known universe. Galaxies are classified into spirals, ellipticals, irregulars, and peculiars. Sizes can range from only a few thousand stars (dwarf irregulars) to 10^13 stars in giant ellipticals. Elliptical galaxies are spherical or elliptical in appearance. Spiral galaxies range from S0, the lenticular galaxies, to SBb, which have a bar across the nucleus, to Sc galaxies which have strong spiral arms. In total count, ellipticals amount to 13%, S0 to 22%, Sa, b, c galaxies to 61%, irregulars to 3.5%, and peculiars to 0.9%.
At the center of most galaxies is a
Document 3:::
The rotation curve of a disc galaxy (also called a velocity curve) is a plot of the orbital speeds of visible stars or gas in that galaxy versus their radial distance from that galaxy's centre. It is typically rendered graphically as a plot, and the data observed from each side of a spiral galaxy are generally asymmetric, so that data from each side are averaged to create the curve. A significant discrepancy exists between the experimental curves observed, and a curve derived by applying gravity theory to the matter observed in a galaxy. Theories involving dark matter are the main postulated solutions to account for the variance.
The rotational/orbital speeds of galaxies/stars do not follow the rules found in other orbital systems such as stars/planets and planets/moons that have most of their mass at the centre. Stars revolve around their galaxy's centre at equal or increasing speed over a large range of distances. In contrast, the orbital velocities of planets in planetary systems and moons orbiting planets decline with distance according to Kepler’s third law. This reflects the mass distributions within those systems. The mass estimations for galaxies based on the light they emit are far too low to explain the velocity observations.
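A minimal sketch of this contrast: the Keplerian speed v = sqrt(GM/r) expected for a centrally concentrated mass falls with radius, while observed spiral-galaxy curves stay roughly flat. The central mass and radii below are illustrative assumptions, not measured values.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_CENTRAL = 1.5e41   # assumed central mass, kg (~7.5e10 solar masses)
KPC = 3.086e19       # metres per kiloparsec

def keplerian_speed(r_m: float) -> float:
    """Orbital speed v = sqrt(G*M/r) for a centrally concentrated mass."""
    return math.sqrt(G * M_CENTRAL / r_m)

for r_kpc in (2, 5, 10, 20, 40):
    v_kms = keplerian_speed(r_kpc * KPC) / 1000.0
    print(f"r = {r_kpc:>2} kpc: Keplerian v ~ {v_kms:5.0f} km/s "
          f"(observed curves stay roughly flat at ~200 km/s)")
```

The printed Keplerian speeds fall from roughly 400 km/s at 2 kpc to about 90 km/s at 40 kpc, whereas measured curves do not decline — which is the discrepancy the next paragraph names.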
The galaxy rotation problem is the discrepancy between observed galaxy rotation curves and the theoretical prediction, assuming a centrally dominated mass associated with the observed luminous material. When mass profiles of galaxies are calculated from the distribution of stars in spirals and mass-to-light ratios in the stellar disks, they do not match with the masses derived from the observed rotation curves and the law of gravity. A solution to this conundrum is to hypothesize the existence of dark matter and to assume its distribution from the galaxy's center out to its halo.
Though dark matter is by far the most accepted explanation of the rotation problem, other proposals have been offered with varying degrees of success.
Document 4:::
Eris is a computer simulation of the Milky Way galaxy's physics. It was done by astrophysicists from the Institute for Theoretical Physics at the University of Zurich, Switzerland and University of California, Santa Cruz. The simulation project was undertaken at the NASA Advanced Supercomputer Division's Pleiades and the Swiss National Supercomputing Centre for nearly eight months, which would have otherwise taken 570 years in a personal computer. The Eris simulation is the first successful detailed simulation of a Milky Way like galaxy. The results of the simulation were announced in August 2011.
Background
Simulation projects intending to simulate spiral galaxies have been undertaken for the past 20 years. All of these projects failed, as the simulation results showed central bulges that were huge compared to the disk size.
Simulation
The simulation was undertaken using supercomputers which include the Pleiades supercomputer, the Swiss National Supercomputing Centre and the supercomputers at the University of California, Santa Cruz. The simulation used 1.4 million processor-hours of the Pleiades supercomputer.
It is based on the theory that in the early universe, cold and slow moving dark matter particles clumped together. These dark matter clumps then formed the "scaffolding" around galaxies and galactic clusters. The motions of more than 60 million particles which represented dark matter and galactic gas were simulated for a period of 13 billion years. The software platform Gasoline was used for the simulation.
Simulation results
The Eris simulation is the first successful simulation to have resolved the high-density gas clouds where stars formed. The simulation result consisted of a galaxy which is very similar to the Milky Way galaxy. Some of the parameters which were similar to Milky Way are stellar content, gas content, kinematic decomposition, brightness profile and the bulge-to-disk ratio.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The milky way galaxy is which shape type of galaxy?
A. helical
B. cylindrical
C. spherical
D. spiral
Answer:
|
|
sciq-5000
|
multiple_choice
|
Like other amphibians, frogs generally lay their eggs in moist environments, which are required since the eggs lack what feature?
|
[
"tubes",
"membrane",
"shells",
"nucleus"
] |
C
|
Relevant Documents:
Document 0:::
The common frog or grass frog (Rana temporaria), also known as the European common frog, European common brown frog, European grass frog, European Holarctic true frog, European pond frog or European brown frog, is a semi-aquatic amphibian of the family Ranidae, found throughout much of Europe as far north as Scandinavia and as far east as the Urals, except for most of the Iberian Peninsula, southern Italy, and the southern Balkans. The farthest west it can be found is Ireland. It is also found in Asia, and eastward to Japan. The nominate, and most common, subspecies Rana temporaria temporaria is a largely terrestrial frog native to Europe. It is distributed throughout northern Europe and can be found in Ireland, the Isle of Lewis and as far east as Japan.
Common frogs metamorphose through three distinct developmental life stages — aquatic larva, terrestrial juvenile, and adult. They have corpulent bodies with a rounded snout, webbed feet and long hind legs adapted for swimming in water and hopping on land. Common frogs are often confused with the common toad (Bufo bufo), but frogs can easily be distinguished as they have longer legs, hop, and have a moist skin, whereas toads crawl and have a dry 'warty' skin. The spawn of the two species also differs, in that frog spawn is laid in clumps and toad spawn is laid in long strings.
There are three subspecies of the common frog: R. t. temporaria, R. t. honnorati and R. t. parvipalmata. R. t. temporaria is the most common subspecies of this frog.
Description
The adult common frog has a body length of . In addition, its back and flanks vary in colour from olive green to grey-brown, brown, olive brown, grey, yellowish and rufous. However, it can lighten and darken its skin to match its surroundings. Some individuals have more unusual colouration—both black and red individuals have been found in Scotland, and albino frogs have been found with yellow skin and red eyes. During the mating season the male common frog tends to tu
Document 1:::
Early stages of embryogenesis of tailless amphibians
Embryogenesis in living creatures occurs in different ways depending on class and species. One of the most basic criteria of such development is independence from a water habitat.
Amphibians were the earliest animals to adapt themselves to a mixed environment containing both water and dry land.
The embryonic development of tailless amphibians is presented below using the African clawed frog (Xenopus laevis) and the northern leopard frog (Rana pipiens) as examples.
The oocyte in these frog species is a polarized cell - it has specified axes and poles. The animal pole of the cell contains pigment cells, whereas the vegetal pole (the yolk) contains most of the nutritive material. The pigment is composed of light-absorbing melanin.
The sperm cell enters the oocyte in the region of the animal pole. Two blocks - defensive mechanisms meant to prevent polyspermy - occur: the fast block and the slow block. A relatively short time after fertilization, the cortical cytoplasm (located just beneath the cell membrane) rotates by 30 degrees. This results in the creation of the gray crescent. Its establishment determines the location of the dorsal and ventral (up-down) axis, as well as of the anterior and posterior (front-back) axis and the dextro-sinistral (left-right) axis of the embryo.
Embryo cleavage
The cleavage (cell division) of a frog’s embryo is complete and uneven, because most of the yolk is gathered in the vegetal region. The first cleavage runs across the animal-vegetal axis, dividing the gray crescent into two parts. The second cleavage also cuts through the gray crescent, although always running perpendicularly to the first one. This results in the creation of four identical blastomeres - separate cells now forming the embryo. The third cleavage runs equatorially and closer to the animal pole, thus creating blastomeres of unequal size (micromeres in the animal region and macromeres in the vegetal region).
Document 2:::
A trophic egg is an egg whose function is not reproduction but nutrition; in essence, the trophic egg serves as food for offspring hatched from viable eggs. In most species that produce them, a trophic egg is usually an unfertilised egg. The production of trophic eggs has been observed in a highly diverse range of species, including fish, amphibians, spiders and insects. The function is not limited to any particular level of parental care, but occurs in some sub-social species of insects, the spider A. ferox, and a few other species like the frogs Leptodactylus fallax and Oophaga, and the catfish Bagrus meridionalis.
Parents of some species deliver trophic eggs directly to their offspring, whereas some other species simply produce the trophic eggs after laying the viable eggs; they then leave the trophic eggs where the viable offspring are likely to find them.
The mackerel sharks present the most extreme example of proximity between reproductive eggs and trophic eggs; their viable offspring feed on trophic eggs in utero.
Despite the diversity of species and life strategies in which trophic eggs occur, all trophic egg functions are similarly derived from similar ancestral functions, which once amounted to the sacrifice of potential future offspring in order to provide food for the survival of rival (usually earlier) offspring. In more derived examples the trophic eggs are not viable, being neither fertilised, nor even fully formed in some cases, so they do not represent actually potential offspring, although they still represent parental investment corresponding to the amount of food it took to produce them.
Morphology
Trophic eggs are not always morphologically distinct from normal reproductive eggs; however if there is no physical distinction there tends to be some kind of specialised behaviour in the way that trophic eggs are delivered by the parents.
In some beetles, trophic eggs are paler in colour and softer in texture than reproductive eggs, with a smooth
Document 3:::
The western clawed frog (Xenopus tropicalis) is a species of frog in the family Pipidae, also known as tropical clawed frog. It is the only species in the genus Xenopus to have a diploid genome. Its genome has been sequenced, making it a significant model organism for genetics that complements the related species Xenopus laevis (the African clawed frog), a widely used vertebrate model for developmental biology. X. tropicalis also has a number of advantages over X. laevis in research, such as a much shorter generation time (<5 months), smaller size ( body length), and a larger number of eggs per spawn.
It is found in Benin, Burkina Faso, Cameroon, Ivory Coast, Equatorial Guinea, Gambia, Ghana, Guinea, Guinea-Bissau, Liberia, Nigeria, Senegal, Sierra Leone, Togo, and possibly Mali. Its natural habitats are subtropical or tropical moist lowland forests, moist savanna, rivers, intermittent rivers, swamps, freshwater lakes, intermittent freshwater lakes, freshwater marshes, intermittent freshwater marshes, rural gardens, heavily degraded former forests, water storage areas, ponds, aquaculture ponds, and canals and ditches.
Description
The western clawed frog is a medium-sized species with a somewhat flattened body and a snout-vent length of , females being larger than males. The eyes are bulging and situated high on the head and there is a short tentacle just below each eye. A row of unpigmented dermal tubercles runs along the flank from just behind the eye, and are thought to represent a lateral line organ. The limbs are short and plump, and the fully webbed feet have horny claws. The skin is finely granular. The dorsal surface varies from pale to dark brown and has small grey and black spots. The ventral surface is dull white or yellowish with some dark mottling.
Distribution and habitat
The western clawed frog is an aquatic species and is found in the West African rainforest belt with a range stretching from Senegal to Cameroon and eastern Zaire. It is generally co
Document 4:::
An eggshell is the outer covering of a hard-shelled egg and of some forms of eggs with soft outer coats.
Worm eggs
Nematode eggs present a two-layered structure: an external vitelline layer made of chitin that confers mechanical resistance and an internal lipid-rich layer that makes the egg chamber impermeable.
Insect eggs
Insects and other arthropods lay a large variety of styles and shapes of eggs. Some of them have gelatinous or skin-like coverings, others have hard eggshells. Softer shells are mostly protein. It may be fibrous or quite liquid. Some arthropod eggs do not actually have shells; rather, their outer covering is the outermost embryonic membrane, the chorion, which protects inner layers. This can be a complex structure, and it may have different layers, including an outermost layer called an exochorion. Eggs which must survive in dry conditions usually have hard eggshells, made mostly of dehydrated or mineralized proteins with pore systems to allow respiration. Arthropod eggs can have extensive ornamentation on their outer surfaces.
Fish, amphibian and reptile eggs
Fish and amphibians generally lay eggs which are surrounded by the extraembryonic membranes but do not develop a shell, hard or soft, around these membranes. Some fish and amphibian eggs have thick, leathery coats, especially if they must withstand physical force or desiccation. These types of eggs can also be very small and fragile.
While many reptiles lay eggs with flexible, calcified eggshells, there are some that lay hard eggs. Eggs laid by snakes generally have leathery shells which often adhere to one another. Depending on the species, turtles and tortoises lay hard or soft eggs. Several species lay eggs which are nearly indistinguishable from bird eggs.
Bird eggs
The bird egg is a fertilized gamete (or, in the case of some birds, such as chickens, possibly unfertilized) located on the yolk surface and surrounded by albumen, or egg white. The albumen in turn is surro
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Like other amphibians, frogs generally lay their eggs in moist environments, which are required since the eggs lack what feature?
A. tubes
B. membrane
C. shells
D. nucleus
Answer:
|
|
sciq-9531
|
multiple_choice
|
A wildfire clears a forest of vegetation and animal life, returning their nutrients to the ground leaving a foundation for rapid recolonization. what is this a classic example of?
|
[
"typical succession",
"primary succession",
"cause succession",
"secondary succession"
] |
D
|
Relevant Documents:
Document 0:::
Primary succession is the beginning step of ecological succession after an extreme disturbance, which usually occurs in an environment devoid of vegetation and other organisms. These environments are typically lacking in soil, as disturbances like lava flow or retreating glaciers scour the environment clear of nutrients.
In contrast, secondary succession occurs on substrates that previously supported vegetation before an ecological disturbance. This occurs when smaller disturbances like floods, hurricanes, tornadoes, and fires destroy only the local plant life and leave soil nutrients for immediate establishment by intermediate community species.
Occurrence
In primary succession pioneer species like lichen, algae and fungi as well as abiotic factors like wind and water start to "normalise" the habitat or in other words start to develop soil and other important mechanisms for greater diversity to flourish. Primary succession begins on rock formations, such as volcanoes or mountains, or in a place with no organisms or soil. Primary succession leads to conditions nearer optimum for vascular plant growth; pedogenesis or the formation of soil, and the increased amount of shade are the most important processes.
These pioneer lichen, algae, and fungi are then dominated and often replaced by plants that are better adapted to less harsh conditions, these plants include vascular plants like grasses and some shrubs that are able to live in thin soils that are often mineral-based. Water and nutrient levels increase with the amount of succession exhibited.
The early stages of primary succession are dominated by species with small propagules (seed and spores) which can be dispersed long distances. The early colonizers—often algae, fungi, and lichens—stabilize the substrate. Nitrogen supplies are limited in new soils, and nitrogen-fixing species tend to play an important role early in primary succession. Unlike in primary succession, the species that dominate secondary success
Document 1:::
Secondary succession is the secondary ecological succession of a plant's life. As opposed to the first, primary succession, secondary succession is a process started by an event (e.g. forest fire, harvesting, hurricane, etc.) that reduces an already established ecosystem (e.g. a forest or a wheat field) to a smaller population of species, and as such secondary succession occurs on preexisting soil whereas primary succession usually occurs in a place lacking soil. Many factors can affect secondary succession, such as trophic interaction, initial composition, and competition-colonization trade-offs. The factors that control the increase in abundance of a species during succession may be determined mainly by seed production and dispersal, micro climate; landscape structure (habitat patch size and distance to outside seed sources); bulk density, pH, and soil texture (sand and clay).
Secondary succession is the ecological succession that occurs after the initial succession has been disrupted and some plants and animals still exist. It is usually faster than primary succession as soil is already present, and seeds, roots, and the underground vegetative organs of plants may still survive in the soil.
Examples
Imperata
Imperata grasslands are caused by human activities such as logging, forest clearing for shifting cultivation, agriculture and grazing, and also by frequent fires. The latter is a frequent result of human interference. However, when not maintained by frequent fires and human disturbances, they regenerate naturally and speedily to secondary young forest. During succession in Imperata grassland (for example in the Samboja Lestari area), Imperata cylindrica initially has the highest coverage but becomes less dominant from the fourth year onwards. While Imperata decreases, the percentage of shrubs and young trees clearly increases with time. In the burned plots, Melastoma malabathricum, Eupatorium inulaefolium, Ficus sp., and Vitex pinnata strongly increase with
Document 2:::
Fire ecology is a scientific discipline concerned with the effects of fire on natural ecosystems. Many ecosystems, particularly prairie, savanna, chaparral and coniferous forests, have evolved with fire as an essential contributor to habitat vitality and renewal. Many plant species in fire-affected environments use fire to germinate, establish, or to reproduce. Wildfire suppression not only endangers these species, but also the animals that depend upon them.
Wildfire suppression campaigns in the United States have historically molded public opinion to believe that wildfires are harmful to nature. Ecological research has shown, however, that fire is an integral component in the function and biodiversity of many natural habitats, and that the organisms within these communities have adapted to withstand, and even to exploit, natural wildfire. More generally, fire is now regarded as a 'natural disturbance', similar to flooding, windstorms, and landslides, that has driven the evolution of species and controls the characteristics of ecosystems.
Fire suppression, in combination with other human-caused environmental changes, may have resulted in unforeseen consequences for natural ecosystems. Some large wildfires in the United States have been blamed on years of fire suppression and the continuing expansion of people into fire-adapted ecosystems as well as climate change. Land managers are faced with tough questions regarding how to restore a natural fire regime, but allowing wildfires to burn is likely the least expensive and most effective method in many situations.
History
Fire has played a major role in shaping the world's vegetation. The biological process of photosynthesis began to concentrate the atmospheric oxygen needed for combustion during the Devonian approximately 350 million years ago. Then, approximately 125 million years ago, fire began to influence the habitat of land plants.
In the 20th century ecologist Charles Cooper made a plea for fire as an eco
Document 3:::
The relationships between fire, vegetation, and climate create what is known as a fire regime. Within a fire regime, fire ecologists study the relationship between diverse ecosystems and fire; not only how fire affects vegetation, but also how vegetation affects the behavior of fire. The study of neighboring vegetation types that may be highly flammable and less flammable has provided insight into how these vegetation types can exist side by side, and are maintained by the presence or absence of fire events. Ecologists have studied these boundaries between different vegetation types, such as a closed canopy forest and a grassland, and hypothesized about how climate and soil fertility create these boundaries in vegetation types. Research in the field of pyrogeography shows how fire also plays an important role in the maintenance of dominant vegetation types, and how different vegetation types with distinct relationships to fire can exist side by side in the same climate conditions. These relationships can be described in conceptual models called fire–vegetation feedbacks, and alternative stable states.
Fire–vegetation feedbacks
Vegetation can be understood as highly flammable (pyrophilic) and less flammable (pyrophobic). A fire–vegetation feedback describes the relationship between fire and the dominant vegetation type. An example of a highly flammable vegetation type is a grassland. Frequent fire will maintain grassland as the dominant vegetation in a positive feedback loop. This happens because frequent fire will kill trees trying to establish in the area, yet the intervals between each fire will allow for new grasses to establish, grow into fuel, and burn again. Therefore, frequent fire on a grassland area will maintain grass as the dominant vegetation and not permit the encroachment of trees. In contrast, fire will occur less frequently and less severely in closed canopy forests because the fuels are more dense, shaded, and therefore more humid thereby not ign
Document 4:::
"Auto-" meaning self or same, and "-genic" meaning producing or causing. Autogenic succession refers to ecological succession driven by biotic factors within an ecosystem and although the mechanisms of autogenic succession have long been debated, the role of living things in shaping the progression of succession was realized early on. Presently, there is more of a consensus that the mechanisms of facilitation, tolerance, and inhibition all contribute to autogenic succession. The concept of succession is most often associated with communities of vegetation and forests, though it is applicable to a broader range of ecosystems. In contrast, allogenic succession is driven by the abiotic components of the ecosystem.
How it occurs
The plants themselves (biotic components) cause succession to occur.
Light captured by leaves
Production of detritus
Water and nutrient uptake
Nitrogen fixation
Anthropogenic climate change
These aspects lead to a gradual ecological change in a particular spot of land, known as a progression of inhabiting species. Autogenic succession can be viewed as a secondary succession because of pre-existing plant life. A 2000 case study in the journal Oecologia tested the hypothesis that areas with high plant diversity could suppress weed growth more effectively than those with lower plant diversity.
Facilitation
Improvement of site factors like increased organic matter
Inhibition
Hinders species or growth
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A wildfire clears a forest of vegetation and animal life, returning their nutrients to the ground leaving a foundation for rapid recolonization. what is this a classic example of?
A. typical succession
B. primary succession
C. cause succession
D. secondary succession
Answer:
|
|
sciq-6349
|
multiple_choice
|
The only obvious difference between boys and girls at birth is what type of organs?
|
[
"reproductive",
"digestive",
"nervous",
"respiratory"
] |
A
|
Relevant Documents:
Document 0:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 1:::
Reproductive biology includes both sexual and asexual reproduction.
Reproductive biology includes a wide range of fields:
Reproductive systems
Endocrinology
Sexual development (Puberty)
Sexual maturity
Reproduction
Fertility
Human reproductive biology
Endocrinology
Human reproductive biology is primarily controlled through hormones, which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands, and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands.
Reproductive systems
Internal and external organs are included in the reproductive system. There are two reproductive systems, the male and the female, which contain different organs from one another. These systems work together in order to produce offspring.
Female reproductive system
The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth.
These structures include:
Ovaries
Oviducts
Uterus
Vagina
Mammary Glands
Estrogen is one of the sexual reproductive hormones that aid the female reproductive system.
Male reproductive system
The male reproductive system includes testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia.
Testosterone, an androgen, although present in both males and females, is relatively more abundant in males. Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system. However, the enzyme aromatase is present in the testes and capable of synthesizing estrogens from androgens. Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract.
Animal Reproductive Biology
Animal reproduction oc
Document 2:::
The Terminologia Embryologica (TE) is a standardized list of words used in the description of human embryologic and fetal structures. It was produced by the Federative International Committee on Anatomical Terminology on behalf of the International Federation of Associations of Anatomists and has been posted on the Internet since 2010. It was approved by the General Assembly of the IFAA during the seventeenth International Congress of Anatomy in Cape Town (August 2009).
It is analogous to the Terminologia Anatomica (TA), which standardizes terminology for adult human anatomy and which deals primarily with naked-eye adult anatomy. It succeeds the Nomina Embryologica, which was included as a component of the Nomina Anatomica.
It was not included in the original version of the TA.
Codes
e1.0: General terms
e2.0: Ontogeny
e3.0: Embryogeny
e4.0: General histology
e5.0: Bones; Skeletal system
e5.1: Joints; Articular system
e5.2: Muscles; Muscular system
e5.3: Face
e5.4: Alimentary system
e5.5: Respiratory system
e5.6: Urinary system
e5.7: Genital systems
e5.8: Coelom and septa
e5.9: Mesenchymal mesenteric masses
e5.10: Endocrine glands
e5.11: Cardiovascular system
e5.12: Lymphoid system
e5.13: Nervous system
e5.14: Central nervous system
e5.15: Peripheral nervous system
e5.16: Sense organs
e5.17: The integument
e6.0: Extraembryonic and fetal membranes
e7.0: Embryogenesis (-> 13 st)
e7.0: Embryogenesis (14 st ->)
e7.1: Fetogenesis
e7.2: Features of mature neonate
e8.0: Dysmorphia terms
See also
Terminologia Anatomica
Terminologia Histologica
International Morphological Terminology
Federative International Committee on Anatomical Terminology
Document 3:::
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research.
Americas
Human Biology major at Stanford University, Palo Alto (since 1970)
Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone on to post-graduate education, medical school, law, business and government.
Human and Social Biology (Caribbean)
Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certificate (CSEC), which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on the structure and functioning (anatomy, physiology, biochemistry) of the human body and its relevance to human health, with Caribbean-specific experience. The syllabus is organized under five main sections: living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, and the impact of human activities on the environment.
Human Biology Program at University of Toronto
The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia
BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002)
BSc (honours) Human Biology at AIIMS (New
Document 4:::
Fetal pigs are unborn pigs used in elementary as well as advanced biology classes as objects for dissection. Pigs, as a mammalian species, provide a good specimen for the study of physiological systems and processes due to the similarities between many pig and human organs.
Use in biology labs
Along with frogs and earthworms, fetal pigs are among the most common animals used in classroom dissection. There are several reasons for this, the main reason being that pigs, like humans, are mammals. Shared traits include common hair, mammary glands, live birth, similar organ systems, metabolic levels, and basic body form. They also allow for the study of fetal circulation, which differs from that of an adult. Secondly, fetal pigs are easy to obtain because they are by-products of the pork industry. Fetal pigs are the unborn piglets of sows that were killed by the meat-packing industry. These pigs are not bred and killed for this purpose, but are extracted from the deceased sow’s uterus. Fetal pigs not used in classroom dissections are often used in fertilizer or simply discarded. Thirdly, fetal pigs are cheap, which is an essential component for dissection use by schools. They can be ordered for about $30 at biological product companies. Fourthly, fetal pigs are easy to dissect because of their soft tissue and incompletely developed bones that are still made of cartilage. In addition, they are relatively large with well-developed organs that are easily visible. As long as the pork industry exists, fetal pigs will be relatively abundant, making them the prime choice for classroom dissections.
Alternatives
Several peer-reviewed comparative studies have concluded that the educational outcomes of students who are taught basic and advanced biomedical concepts and skills using non-animal methods are equivalent or superior to those of their peers who use animal-based laboratories such as animal dissection.
A systematic review concluded that students taught using non-animal m
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The only obvious difference between boys and girls at birth is what type of organs?
A. reproductive
B. digestive
C. nervous
D. respiratory
Answer:
|
|
sciq-6520
|
multiple_choice
|
What do atoms make by rearranging their chemical bonds in a reactant?
|
[
"minerals",
"products",
"solutions",
"compounds"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
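For a reversible adiabatic expansion of an ideal gas, T·V^(γ−1) is constant, so the usual intended answer is "decreases". A quick numerical check, using an illustrative monatomic gas (γ = 5/3) and invented initial conditions:

```python
gamma = 5 / 3                  # heat-capacity ratio assumed: monatomic ideal gas
T1, V1, V2 = 300.0, 1.0, 2.0   # invented initial state; gas doubles in volume

T2 = T1 * (V1 / V2) ** (gamma - 1)   # T * V**(gamma - 1) = constant
print(f"T2 = {T2:.0f} K")            # ~189 K: the temperature decreases
```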
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Molecular binding is an attractive interaction between two molecules that results in a stable association in which the molecules are in close proximity to each other. The association often, but not always, involves some form of chemical bonding, such as the sharing of electrons.
In some cases, the associations can be quite strong—for example, the protein streptavidin and the vitamin biotin have a dissociation constant (reflecting the ratio between bound and free biotin) on the order of 10^-14—and so the reactions are effectively irreversible. The result of molecular binding is sometimes the formation of a molecular complex in which the attractive forces holding the components together are generally non-covalent, and thus are normally energetically weaker than covalent bonds.
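A rough sketch of what a dissociation constant implies for occupancy: for A + B ⇌ AB at equilibrium, Kd = [A][B]/[AB], so the fraction of B bound at free-ligand concentration [A] is [A]/([A] + Kd). The concentrations below are illustrative assumptions, not measured values.

```python
def bound_fraction(free_ligand_M: float, kd_M: float) -> float:
    """Fraction of the partner bound at equilibrium: [A] / ([A] + Kd)."""
    return free_ligand_M / (free_ligand_M + kd_M)

# Streptavidin-biotin (~1e-14 M) versus two more typical complexes,
# all evaluated at an illustrative 1 nM free-ligand concentration.
for kd in (1e-14, 1e-9, 1e-6):
    print(f"Kd = {kd:.0e} M -> bound fraction {bound_fraction(1e-9, kd):.6f}")
```

At 1 nM ligand, a Kd of 10^-14 M leaves essentially everything bound, which is why the streptavidin-biotin reaction is described above as effectively irreversible.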
Molecular binding occurs in biological complexes (e.g., between pairs or sets of proteins, or between a protein and a small molecule ligand it binds) and also in abiologic chemical systems, e.g. as in cases of coordination polymers and coordination networks such as metal-organic frameworks.
Types
Molecular binding can be classified into the following types:
Non-covalent – no chemical bonds are formed between the two interacting molecules hence the association is fully reversible
Reversible covalent – a chemical bond is formed; however, the free energy difference separating the noncovalently-bonded reactants from the bonded product is near equilibrium and the activation barrier is relatively low, such that the reverse reaction which cleaves the chemical bond easily occurs
Irreversible covalent – a chemical bond is formed in which the product is thermodynamically much more stable than the reactants such that the reverse reaction does not take place.
Bound molecules are sometimes called a "molecular complex"—the term generally refers to non-covalent associations. Non-covalent interactions can effectively become irreversible; for example, tight binding inhibitors of enzymes
Document 2:::
Gas phase ion chemistry is a field of science encompassed within both chemistry and physics. It is the science that studies ions and molecules in the gas phase, most often enabled by some form of mass spectrometry. By far the most important application of this science is in studying the thermodynamics and kinetics of reactions. For example, one application is in studying the thermodynamics of the solvation of ions. Ions with small solvation spheres of 1, 2, 3... solvent molecules can be studied in the gas phase and then extrapolated to bulk solution.
Theory
Transition state theory
Transition state theory is the theory of the rates of elementary reactions which assumes a special type of chemical equilibrium (quasi-equilibrium) between reactants and activated complexes.
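A minimal sketch of the rate expression this quasi-equilibrium assumption yields, the Eyring equation k = (kB·T/h)·exp(−ΔG‡/RT); the 80 kJ/mol barrier used below is an illustrative assumption, not a value from the text.

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def eyring_rate(dg_activation: float, temperature: float) -> float:
    """First-order rate constant (s^-1) from the Eyring equation."""
    return (KB * temperature / H) * math.exp(-dg_activation / (R * temperature))

# Illustrative barrier of 80 kJ/mol at 298.15 K -> roughly 6e-2 s^-1.
print(f"k = {eyring_rate(80e3, 298.15):.2e} s^-1")
```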
RRKM theory
RRKM theory is used to compute simple estimates of the unimolecular ion decomposition reaction rates from a few characteristics of the potential energy surface.
Gas phase ion formation
The process of converting an atom or molecule into an ion by adding or removing charged particles such as electrons or other ions can occur in the gas phase. These processes are an important component of gas phase ion chemistry.
Associative ionization
Associative ionization is a gas phase reaction in which two atoms or molecules interact to form a single product ion:

A* + B → AB+ + e−

where species A with excess internal energy (indicated by the asterisk) interacts with B to form the ion AB+.
One or both of the interacting species may have excess internal energy.
Charge-exchange ionization
Charge-exchange ionization (also called charge-transfer ionization) is a gas phase reaction between an ion and a neutral species:

A+ + B → A + B+

in which the charge of the ion is transferred to the neutral.
Chemical ionization
In chemical ionization, ions are produced through the reaction of ions of a reagent gas with other species. Some common reagent gases include: methane, ammonia, and isobutane.
Chemi-ionization
Chemi-ionization can
Document 3:::
Reaction dynamics is a field within physical chemistry, studying why chemical reactions occur, how to predict their behavior, and how to control them. It is closely related to chemical kinetics, but is concerned with individual chemical events on atomic length scales and over very brief time periods. It considers state-to-state kinetics between reactant and product molecules in specific quantum states, and how energy is distributed between translational, vibrational, rotational, and electronic modes.
Experimental methods of reaction dynamics probe the chemical physics associated with molecular collisions. They include crossed molecular beam and infrared chemiluminescence experiments, both recognized by the 1986 Nobel Prize in Chemistry awarded to Dudley Herschbach, Yuan T. Lee, and John C. Polanyi "for their contributions concerning the dynamics of chemical elementary processes". In the crossed beam method used by Herschbach and Lee, narrow beams of reactant molecules in selected quantum states are allowed to react in order to determine the reaction probability as a function of such variables as the translational, vibrational and rotational energy of the reactant molecules and their angle of approach. In contrast, the method of Polanyi measures the vibrational energy of the products by detecting the infrared chemiluminescence emitted by vibrationally excited molecules, in some cases for reactants in defined energy states.
Spectroscopic observation of reaction dynamics on the shortest time scales is known as femtochemistry, since the typical times studied are of the order of 1 femtosecond = 10^-15 s. This subject has been recognized by the award of the 1999 Nobel Prize in Chemistry to Ahmed Zewail.
In addition, theoretical studies of reaction dynamics involve calculating the potential energy surface for a reaction as a function of nuclear positions, and then calculating the trajectory of a point on this surface representing the state of the system. A correction can be
Document 4:::
In chemistry, the carbon-hydrogen bond ( bond) is a chemical bond between carbon and hydrogen atoms that can be found in many organic compounds. This bond is a covalent, single bond, meaning that carbon shares its outer valence electrons with up to four hydrogens. This completes both of their outer shells, making them stable.
Carbon–hydrogen bonds have a bond length of about 1.09 Å (1.09 × 10^-10 m) and a bond energy of about 413 kJ/mol (see table below). Using Pauling's scale—C (2.55) and H (2.2)—the electronegativity difference between these two atoms is 0.35. Because of this small difference in electronegativities, the bond is generally regarded as being non-polar. In structural formulas of molecules, the hydrogen atoms are often omitted. Compound classes consisting solely of C–H and C–C bonds are alkanes, alkenes, alkynes, and aromatic hydrocarbons. Collectively they are known as hydrocarbons.
In October 2016, astronomers reported that the very basic chemical ingredients of life—the carbon-hydrogen molecule (CH, or methylidyne radical), the carbon-hydrogen positive ion (CH+) and the carbon ion (C+)—are the result, in large part, of ultraviolet light from stars, rather than in other ways, such as the result of turbulent events related to supernovae and young stars, as thought earlier.
Bond length
The length of the carbon-hydrogen bond varies slightly with the hybridisation of the carbon atom. A bond between a hydrogen atom and an sp2 hybridised carbon atom is about 0.6% shorter than between hydrogen and sp3 hybridised carbon. A bond between hydrogen and sp hybridised carbon is shorter still, about 3% shorter than sp3 C-H. This trend is illustrated by the molecular geometry of ethane, ethylene and acetylene.
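Illustrative arithmetic from the percentages just quoted, taking the sp3 C−H length of 1.09 Å as the baseline; exact values vary by molecule, so this is a rough sketch rather than reference data.

```python
SP3_CH = 1.09  # baseline sp3 C-H bond length, angstroms

# sp2 C-H is ~0.6% shorter and sp C-H ~3% shorter than sp3 C-H.
for label, shorter_by in (("sp3", 0.0), ("sp2", 0.006), ("sp", 0.03)):
    print(f"{label:>3} C-H: {SP3_CH * (1 - shorter_by):.3f} A")
```

This prints roughly 1.090 Å, 1.083 Å and 1.057 Å, matching the ethane/ethylene/acetylene trend mentioned above.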
Reactions
The C−H bond in general is very strong, so it is relatively unreactive. In several compound classes, collectively called carbon acids, the C−H bond can be sufficiently acidic for proton removal. Unactivated C−H bonds are found in alkanes and are no
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do atoms make by rearranging their chemical bonds in a reactant?
A. minerals
B. products
C. solutions
D. compounds
Answer:
|
|
sciq-6214
|
multiple_choice
|
In eukaryotes, the new mrna is not yet ready for translation. it must go through more processing before it leaves where?
|
[
"protons",
"molecules",
"nucleus",
"Electrons"
] |
C
|
Relevant Documents:
Document 0:::
Genomic deoxyribonucleic acid (abbreviated as gDNA) is chromosomal DNA, in contrast to extra-chromosomal DNAs like plasmids. Most organisms have the same genomic DNA in every cell; however, only certain genes are active in each cell to allow for cell function and differentiation within the body.
The genome of an organism (encoded by the genomic DNA) is the (biological) information of heredity which is passed from one generation of organism to the next. That genome is transcribed to produce various RNAs, which are necessary for the function of the organism. Precursor mRNA (pre-mRNA) is transcribed by RNA polymerase II in the nucleus. pre-mRNA is then processed by splicing to remove introns, leaving the exons in the mature messenger RNA (mRNA). Additional processing includes the addition of a 5' cap and a poly(A) tail to the pre-mRNA. The mature mRNA may then be transported to the cytosol and translated by the ribosome into a protein. Other types of RNA include ribosomal RNA (rRNA) and transfer RNA (tRNA). These types are transcribed by RNA polymerase I and RNA polymerase III, respectively, and are essential for protein synthesis. However, 5S rRNA is the only rRNA which is transcribed by RNA polymerase III.
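A toy sketch (not a real bioinformatics pipeline) of the processing steps described above: splicing introns out of a pre-mRNA, then marking the 5' cap and poly(A) tail. The sequence and intron coordinates are invented for illustration.

```python
def splice(pre_mrna: str, introns: list) -> str:
    """Remove intron spans (start, end) and join the remaining exons."""
    exons, prev_end = [], 0
    for start, end in sorted(introns):
        exons.append(pre_mrna[prev_end:start])
        prev_end = end
    exons.append(pre_mrna[prev_end:])
    return "".join(exons)

pre_mrna = "AUGGCUGUAAGUCCAUAGGUACGAUCC"        # invented transcript
exonic = splice(pre_mrna, introns=[(6, 14)])    # one invented intron span
mature_mrna = "m7G-" + exonic + "-" + "A" * 20  # mark 5' cap and poly(A) tail
print(mature_mrna)
```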
Document 1:::
The RNP world is a hypothesized intermediate period in the origin of life characterized by the existence of ribonucleoproteins. The period followed the hypothesized RNA world and ended with the formation of DNA and contemporary proteins. In the RNP world, RNA molecules began to synthesize peptides. These would eventually become proteins which have since assumed most of the diverse functions RNA performed previously. This transition paved the way for DNA to replace RNA as the primary store of genetic information, leading to life as we know it.
Principle of concept
Thomas Cech, in 2009, proposed the existence of the RNP world after his observation of apparent differences in the composition of catalysts in the two most fundamental processes that maintain and express genetic systems. The maintenance processes, DNA replication and transcription, are accomplished purely by protein polymerases. The gene expression processes, mRNA splicing and protein synthesis, are catalyzed by RNP complexes (the spliceosome and ribosome).
The difference in how these processes are catalyzed can be reconciled with the RNA world theory. As an older molecule than DNA, RNA had a hybrid RNA-protein-based maintenance system. Our current DNA world could have resulted from the gradual replacement of RNA catalysis machines with proteins. In this view, ribonucleoproteins and nucleotide-based cofactors are relics of an intermediary era, the RNP world.
Document 2:::
In biology, translation is the process in living cells in which proteins are produced using RNA molecules as templates. The generated protein is a sequence of amino acids. This sequence is determined by the sequence of nucleotides in the RNA. The nucleotides are considered three at a time. Each such triple results in addition of one specific amino acid to the protein being generated. The matching from nucleotide triple to amino acid is called the genetic code. The translation is performed by a large complex of functional RNA and proteins called ribosomes. The entire process is called gene expression.
In translation, messenger RNA (mRNA) is decoded in a ribosome, outside the nucleus, to produce a specific amino acid chain, or polypeptide. The polypeptide later folds into an active protein and performs its functions in the cell. The ribosome facilitates decoding by inducing the binding of complementary tRNA anticodon sequences to mRNA codons. The tRNAs carry specific amino acids that are chained together into a polypeptide as the mRNA passes through and is "read" by the ribosome.
Translation proceeds in three phases:
Initiation: The ribosome assembles around the target mRNA. The first tRNA is attached at the start codon.
Elongation: The last tRNA validated by the small ribosomal subunit (accommodation) transfers the amino acid it carries to the large ribosomal subunit, which binds it to that of the previously admitted tRNA (transpeptidation). The ribosome then moves to the next mRNA codon to continue the process (translocation), creating an amino acid chain.
Termination: When a stop codon is reached, the ribosome releases the polypeptide. The ribosomal complex remains intact and moves on to the next mRNA to be translated.
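To make the codon-by-codon decoding concrete, here is a minimal Python sketch; the genetic-code table is a tiny illustrative subset of the real 64-codon code, and the input sequence is invented for the example.

# Minimal sketch of ribosomal decoding: read the mRNA three nucleotides at a
# time and map each codon to an amino acid until a stop codon is reached.
GENETIC_CODE = {   # illustrative subset of the 64-codon table
    "AUG": "Met",  # start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "UAA": "STOP",
}

def translate(mrna):
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = GENETIC_CODE.get(mrna[i:i + 3], "???")
        if residue == "STOP":
            break                      # termination: release the polypeptide
        peptide.append(residue)        # elongation: extend the chain
    return peptide

print(translate("AUGUUUGGCUAA"))       # -> ['Met', 'Phe', 'Gly']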
In prokaryotes (bacteria and archaea), translation occurs in the cytosol, where the large and small subunits of the ribosome bind to the mRNA. In eukaryotes, translation occurs in the cytoplasm or across the membrane of the endoplasmic ret
Document 3:::
This is a list of topics in molecular biology. See also index of biochemistry articles.
Document 4:::
A gene product is the biochemical material, either RNA or protein, resulting from expression of a gene. A measurement of the amount of gene product is sometimes used to infer how active a gene is. Abnormal amounts of gene product can be correlated with disease-causing alleles, such as the overactivity of oncogenes which can cause cancer.
A gene is defined as "a hereditary unit of DNA that is required to produce a functional product". Regulatory elements include:
Promoter region
TATA box
Polyadenylation sequences
Enhancers
These elements work in combination with the open reading frame to create a functional product. This product may be transcribed and function as RNA, or be translated from mRNA into a protein that functions in the cell.
RNA products
RNA molecules that do not code for any proteins still maintain a function in the cell. The function of the RNA depends on its classification. These roles include:
aiding protein synthesis
catalyzing reactions
regulating various processes.
Protein synthesis is aided by functional RNA molecules such as tRNA, which helps add the correct amino acid to a polypeptide chain during translation, rRNA, a major component of ribosomes (which guide protein synthesis), as well as mRNA, which carries the instructions for creating the protein product.
One type of functional RNA involved in regulation is microRNA (miRNA), which works by repressing translation. These miRNAs act by binding to a complementary target mRNA sequence to prevent translation from occurring. Short-interfering RNA (siRNA) also works by negative regulation of gene expression. These siRNA molecules act within the RNA-induced silencing complex (RISC) during RNA interference by binding to a complementary target mRNA sequence, leading to its cleavage and preventing expression of that specific mRNA.
Protein products
Proteins are gene products formed by translation of a mature mRNA molecule. Proteins have four levels of structure: primary, secondary, tertiary, and quaternary.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In eukaryotes, the new mRNA is not yet ready for translation. It must go through more processing before it leaves where?
A. protons
B. molecules
C. nucleus
D. electrons
Answer:
|
|
ai2_arc-1067
|
multiple_choice
|
The Earth and the Moon have all of these common features except
|
[
"core.",
"crust.",
"water.",
"mantle."
] |
C
|
Relevant Documents:
Document 0:::
An Earth analog, also called an Earth analogue, Earth twin, or second Earth, is a planet or moon with environmental conditions similar to those found on Earth. The term Earth-like planet is also used, but this term may refer to any terrestrial planet.
The possibility is of particular interest to astrobiologists and astronomers under the reasoning that the more similar a planet is to Earth, the more likely it is to be capable of sustaining complex extraterrestrial life. As such, it has long been speculated about and expressed in science, philosophy, science fiction and popular culture. Advocates of space colonization and survival have long sought an Earth analog for settlement. In the far future, humans might artificially produce an Earth analog by terraforming.
Before the scientific search for and study of extrasolar planets, the possibility was argued through philosophy and science fiction. Philosophers have suggested that the size of the universe is such that a near-identical planet must exist somewhere. The mediocrity principle suggests that planets like Earth should be common in the Universe, while the Rare Earth hypothesis suggests that they are extremely rare. The thousands of exoplanetary star systems discovered so far are profoundly different from the Solar System, supporting the Rare Earth Hypothesis.
On 4 November 2013, astronomers reported, based on Kepler space mission data, that there could be as many as 40 billion Earth-sized planets orbiting in the habitable zones of Sun-like stars and red dwarf stars within the Milky Way Galaxy. The nearest such planet could be expected to be within 12 light-years of the Earth, statistically. In September 2020, astronomers identified 24 contenders for superhabitable planets (planets better than Earth) from among more than 4,000 confirmed exoplanets, based on astrophysical parameters, as well as the natural history of known life forms on the Earth.
On 11 January 2023, NASA scientists reported the
Document 1:::
The habitability of natural satellites describes the study of a moon's potential to provide habitats for life, though it is not an indicator that it harbors any. Natural satellites are expected to outnumber planets by a large margin and the study is therefore important to astrobiology and the search for extraterrestrial life. There are, nevertheless, significant environmental variables specific to moons.
It is projected that parameters for surface habitats will be comparable to those of planets like Earth: stellar properties, orbit, planetary mass, atmosphere and geology. Of the natural satellites in the Solar System's habitable zone (the Moon, the two Martian satellites, though some estimates put those outside it, and numerous minor-planet moons), all lack the conditions for surface water. Unlike the Earth, all planetary-mass moons of the Solar System are tidally locked, and it is not yet known to what extent this and tidal forces influence habitability.
Research suggests that deep biospheres like that of Earth are possible. The strongest candidates are therefore currently icy satellites such as those of Jupiter and Saturn—Europa and Enceladus respectively—in which subsurface liquid water is thought to exist. While the lunar surface is hostile to life as we know it, a deep lunar biosphere (or that of similar bodies) cannot yet be ruled out; deep exploration would be required for confirmation.
Exomoons are not yet confirmed to exist and their detection may be limited to transit-timing variation which is not currently sufficiently sensitive. It is possible that some of their attributes could be found through study of their transits. Despite this, some scientists estimate that there are as many habitable exomoons as habitable exoplanets. Given the general planet-to-satellite(s) mass ratio of 10,000, gas giants in the habitable zone are thought to be the best candidates to harbour Earth-like moons.
Tidal forces are likely to play as significant a role providing heat as st
Document 2:::
This article is a list of notable unsolved problems in astronomy. Some of these problems are theoretical, meaning that existing theories may be incapable of explaining certain observed phenomena or experimental results. Others are experimental, meaning that experiments necessary to test proposed theory or investigate a phenomenon in greater detail have not yet been performed. Some pertain to unique events or occurrences that have not repeated themselves and whose causes remain unclear.
Planetary astronomy
Our solar system
Orbiting bodies and rotation:
Are there any non-dwarf planets beyond Neptune?
Why do extreme trans-Neptunian objects have elongated orbits?
Rotation rate of Saturn:
Why does the magnetosphere of Saturn rotate at a rate close to that at which the planet's clouds rotate?
What is the rotation rate of Saturn's deep interior?
Satellite geomorphology:
What is the origin of the chain of high mountains that closely follows the equator of Saturn's moon, Iapetus?
Are the mountains the remnant of hot and fast-rotating young Iapetus?
Are the mountains the result of material (either from the rings of Saturn or its own ring) that over time collected upon the surface?
Extra-solar
How common are Solar System-like planetary systems? Some observed planetary systems contain Super-Earths and Hot Jupiters that orbit very close to their stars. Systems with Jupiter-like planets in Jupiter-like orbits appear to be rare. There are several possible explanations for why Jupiter-like orbits are rare, including a lack of data and the grand tack hypothesis.
Stellar astronomy and astrophysics
Solar cycle:
How does the Sun generate its periodically reversing large-scale magnetic field?
How do other Sol-like stars generate their magnetic fields, and what are the similarities and differences between stellar activity cycles and that of the Sun?
What caused the Maunder Minimum and other grand minima, and how does the solar cycle recover from a minimum state?
Coronal heat
Document 3:::
Selenography is the study of the surface and physical features of the Moon (also known as geography of the Moon, or selenodesy). Like geography and areography, selenography is a subdiscipline within the field of planetary science. Historically, the principal concern of selenographists was the mapping and naming of lunar terrain, identifying maria, craters, mountain ranges, and other various features. This task was largely finished when high resolution images of the near and far sides of the Moon were obtained by orbiting spacecraft during the early space era. Nevertheless, some regions of the Moon remain poorly imaged (especially near the poles) and the exact locations of many features (like crater depths) are uncertain by several kilometers. Today, selenography is considered to be a subdiscipline of selenology, which itself is most often referred to as simply "lunar science." The word selenography is derived from the Greek word Σελήνη (Selene, meaning Moon) and γράφω graphō, meaning to write.
History
The idea that the Moon is not perfectly smooth originates at least as early as Democritus, who asserted that the Moon's "lofty mountains and hollow valleys" were the cause of its markings. However, not until the end of the 15th century AD did serious study of selenography begin. Around AD 1603, William Gilbert made the first lunar drawing based on naked-eye observation. Others soon followed, and when the telescope was invented, initial drawings of poor accuracy were made, but soon thereafter improved in tandem with optics. In the early 18th century, the librations of the Moon were measured, which revealed that more than half of the lunar surface was visible to observers on Earth. In 1750, Johann Meyer produced the first reliable set of lunar coordinates that permitted astronomers to locate lunar features.
Lunar mapping became systematic in 1779 when Johann Schröter began meticulous observation and measurement of lunar topography. In 1834 Johann Heinrich von Mädler pub
Document 4:::
The Hollow Moon and the closely related Spaceship Moon are pseudoscientific hypotheses that propose that Earth's Moon is either wholly hollow or otherwise contains a substantial interior space. No scientific evidence exists to support the idea; seismic observations and other data collected since spacecraft began to orbit or land on the Moon indicate that it has a thin crust, extensive mantle and small, dense core, although overall it is much less dense than Earth.
The first publication to mention a hollow Moon was H. G. Wells' 1901 novel The First Men in the Moon. The concept of a (partially) hollow Moon has been employed in science fiction multiple times. In 1970, two Soviet authors published a short piece in the popular press speculating that the Moon might be "the Creation of Alien Intelligence". Since the late 1970s, the hypothesis has been endorsed by conspiracy theorists like Jim Marrs and David Icke.
Introduction
The Hollow Moon hypothesis is the suggestion that the Moon is hollow, usually as a product of an alien civilization. It is often called the Spaceship Moon hypothesis and often corresponds with beliefs in UFOs or ancient astronauts.
The suggestion of a hollow moon first appeared in science fiction, when H. G. Wells wrote about a hollow Moon in his 1901 book The First Men in the Moon. The concept of hollow planets was not new; the first discussion of a Hollow Earth was by scientist Edmond Halley in 1692. Wells borrowed from earlier fictional works that described a hollow Earth, such as the 1741 novel Niels Klim's Underground Travels by Ludvig Holberg.
Both Hollow Moon and Hollow Earth are now considered to be fringe theories or conspiracy theories. The concept of the Moon as a spaceship is often mentioned as one of David Icke's beliefs.
Claims and rebuttals
Density
The fact that the Moon is less dense than the Earth is advanced by conspiracy theorists as support for claims of a hollow Moon. The Moon's mean density is 3.3 g/cm3, whereas th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The Earth and the Moon have all of these common features except
A. core.
B. crust.
C. water.
D. mantle.
Answer:
|
|
sciq-9166
|
multiple_choice
|
What is the addition of nucleotides to the mRNA strand?
|
[
"elongation",
"insertion",
"elevation",
"axons"
] |
A
|
Relevant Documents:
Document 0:::
Genomic deoxyribonucleic acid (abbreviated as gDNA) is chromosomal DNA, in contrast to extra-chromosomal DNAs like plasmids. Most organisms have the same genomic DNA in every cell; however, only certain genes are active in each cell to allow for cell function and differentiation within the body.
The genome of an organism (encoded by the genomic DNA) is the (biological) information of heredity which is passed from one generation of organism to the next. That genome is transcribed to produce various RNAs, which are necessary for the function of the organism. Precursor mRNA (pre-mRNA) is transcribed by RNA polymerase II in the nucleus. pre-mRNA is then processed by splicing to remove introns, leaving the exons in the mature messenger RNA (mRNA). Additional processing includes the addition of a 5' cap and a poly(A) tail to the pre-mRNA. The mature mRNA may then be transported to the cytosol and translated by the ribosome into a protein. Other types of RNA include ribosomal RNA (rRNA) and transfer RNA (tRNA). These types are transcribed by RNA polymerase I and RNA polymerase III, respectively, and are essential for protein synthesis. However, 5S rRNA is the only rRNA which is transcribed by RNA polymerase III.
Document 1:::
Transcription is the process of copying a segment of DNA into RNA. The segments of DNA transcribed into RNA molecules that can encode proteins are said to produce messenger RNA (mRNA). Other segments of DNA are copied into RNA molecules called non-coding RNAs (ncRNAs). mRNA comprises only 1–3% of total RNA samples. Less than 2% of the human genome can be transcribed into mRNA, while at least 80% of mammalian genomic DNA can be actively transcribed (in one or more types of cells), with the majority of this 80% considered to be ncRNA.
Both DNA and RNA are nucleic acids, which use base pairs of nucleotides as a complementary language. During transcription, a DNA sequence is read by an RNA polymerase, which produces a complementary, antiparallel RNA strand called a primary transcript.
Transcription proceeds in the following general steps:
RNA polymerase, together with one or more general transcription factors, binds to promoter DNA.
RNA polymerase generates a transcription bubble, which separates the two strands of the DNA helix. This is done by breaking the hydrogen bonds between complementary DNA nucleotides.
RNA polymerase adds RNA nucleotides (which are complementary to the nucleotides of one DNA strand).
The RNA sugar-phosphate backbone forms with assistance from RNA polymerase, yielding an RNA strand.
Hydrogen bonds of the RNA–DNA helix break, freeing the newly synthesized RNA strand.
If the cell has a nucleus, the RNA may be further processed. This may include polyadenylation, capping, and splicing.
The RNA may remain in the nucleus or exit to the cytoplasm through the nuclear pore complex.
If the stretch of DNA is transcribed into an RNA molecule that encodes a protein, the RNA is termed messenger RNA (mRNA); the mRNA, in turn, serves as a template for the protein's synthesis through translation. Other stretches of DNA may be transcribed into small non-coding RNAs such as microRNA, transfer RNA (tRNA), small nucleolar
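A minimal Python sketch of the complementary base-pairing in step 3 above; it is illustrative only and ignores the polymerase machinery, promoters, and processing described in the text.

# Transcribe a DNA template strand into a complementary, antiparallel RNA:
# A pairs with U, T pairs with A, C pairs with G, and G pairs with C.
PAIRING = {"A": "U", "T": "A", "C": "G", "G": "C"}

def transcribe(template):
    # Return the primary transcript for a DNA template strand.
    return "".join(PAIRING[base] for base in template)

print(transcribe("TACGGT"))  # -> 'AUGCCA'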
Document 2:::
In molecular biology, a library is a collection of DNA fragments that is stored and propagated in a population of micro-organisms through the process of molecular cloning. There are different types of DNA libraries, including cDNA libraries (formed from reverse-transcribed RNA), genomic libraries (formed from genomic DNA) and randomized mutant libraries (formed by de novo gene synthesis where alternative nucleotides or codons are incorporated). DNA library technology is a mainstay of current molecular biology, genetic engineering, and protein engineering, and the applications of these libraries depend on the source of the original DNA fragments. There are differences in the cloning vectors and techniques used in library preparation, but in general each DNA fragment is uniquely inserted into a cloning vector and the pool of recombinant DNA molecules is then transferred into a population of bacteria (a Bacterial Artificial Chromosome or BAC library) or yeast such that each organism contains on average one construct (vector + insert). As the population of organisms is grown in culture, the DNA molecules contained within them are copied and propagated (thus, "cloned").
Terminology
The term "library" can refer to a population of organisms, each of which carries a DNA molecule inserted into a cloning vector, or alternatively to the collection of all of the cloned vector molecules.
cDNA libraries
A cDNA library represents a sample of the mRNA purified from a particular source (either a collection of cells, a particular tissue, or an entire organism), which has been converted back to a DNA template by the use of the enzyme reverse transcriptase. It thus represents the genes that were being actively transcribed in that particular source under the physiological, developmental, or environmental conditions that existed when the mRNA was purified. cDNA libraries can be generated using techniques that promote "full-length" clones or under conditions that generate shorter f
Document 3:::
RNA editing (also RNA modification) is a molecular process through which some cells can make discrete changes to specific nucleotide sequences within an RNA molecule after it has been generated by RNA polymerase. It occurs in all living organisms and is one of the most evolutionarily conserved properties of RNAs. RNA editing may include the insertion, deletion, and base substitution of nucleotides within the RNA molecule. RNA editing is relatively rare, with common forms of RNA processing (e.g. splicing, 5'-capping, and 3'-polyadenylation) not usually considered as editing. It can affect the activity, localization as well as stability of RNAs, and has been linked with human diseases.
RNA editing has been observed in some tRNA, rRNA, mRNA, or miRNA molecules of eukaryotes and their viruses, archaea, and prokaryotes. RNA editing occurs in the cell nucleus, as well as within mitochondria and plastids. In vertebrates, editing is rare and usually consists of a small number of changes to the sequence of the affected molecules. In other organisms, such as squids, extensive editing (pan-editing) can occur; in some cases the majority of nucleotides in an mRNA sequence may result from editing. More than 160 types of RNA modifications have been described so far.
RNA-editing processes show great molecular diversity, and some appear to be evolutionarily recent acquisitions that arose independently. The diversity of RNA editing phenomena includes nucleobase modifications such as cytidine (C) to uridine (U) and adenosine (A) to inosine (I) deaminations, as well as non-template nucleotide additions and insertions. RNA editing in mRNAs effectively alters the amino acid sequence of the encoded protein so that it differs from that predicted by the genomic DNA sequence.
Detection of RNA editing
Next generation sequencing
To identify diverse post-transcriptional modifications of RNA molecules and determine the transcriptome-wide landscape of RNA modifications by means of next gener
Document 4:::
NAIL-MS (short for nucleic acid isotope labeling coupled mass spectrometry) is a technique based on mass spectrometry used for the investigation of nucleic acids and its modifications. It enables a variety of experiment designs to study the underlying mechanism of RNA biology in vivo. For example, the dynamic behaviour of nucleic acids in living cells, especially of RNA modifications, can be followed in more detail.
Theory
NAIL-MS is used to study RNA modification mechanisms. Therefore, cells in culture are first fed with stable isotope labeled nutrients and the cells incorporate these into their biomolecules. After purification of the nucleic acids, most often RNA, analysis is done by mass spectrometry. Mass spectrometry is an analytical technique that measures the mass-to-charge ratio of ions. Pairs of chemically identical nucleosides of different stable-isotope composition can be differentiated in a mass spectrometer due to their mass difference. Unlabeled nucleosides can therefore be distinguished from their stable isotope labeled isotopologues. For most NAIL-MS approaches it is crucial that the labeled nucleosides are more than 2 Da heavier than the unlabeled ones. This is because 1.1% of naturally occurring carbon atoms are 13C isotopes. In the case of nucleosides this leads to a mass increase of 1 Da in ~10% of the nucleosides. This signal would disturb the final evaluation of the measurement.
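The roughly 10% figure quoted above can be checked with a one-line binomial estimate; a minimal Python sketch, assuming a nucleoside with about ten carbon atoms (a typical order of magnitude for a ribonucleoside).

# Probability that a ~10-carbon nucleoside contains at least one 13C atom,
# which would shift its mass by +1 Da and blur a small mass label.
p_13c = 0.011                     # natural abundance of carbon-13
n_carbons = 10                    # assumed carbon count of the nucleoside
p_at_least_one = 1 - (1 - p_13c) ** n_carbons
print(f"{p_at_least_one:.1%}")    # -> about 10%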
NAIL-MS can be used to investigate RNA modification dynamics by changing the labeled nutrients of the corresponding growth medium during the experiment. Furthermore, cell populations can be compared directly with each other without effects of purification bias. It can also be used for the production of biosynthetic isotopologues of most nucleosides, which are needed for quantification by mass spectrometry, and even for the discovery of yet unknown RNA modifications.
General procedure
In general, cells are cultivated in unlabeled or stable (non-radioactive)
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the addition of nucleotides to the mRNA strand?
A. elongation
B. insertion
C. elevation
D. axons
Answer:
|
|
sciq-8650
|
multiple_choice
|
What sac, which was sitting on top of the flat embryo, envelops the embryo as it folds?
|
[
"amniotic sac",
"uterus",
"epithelial sac",
"umbilical sac"
] |
A
|
Relevant Documents:
Document 0:::
The amniotic sac, also called the bag of waters or the membranes, is the sac in which the embryo and later fetus develops in amniotes. It is a thin but tough transparent pair of membranes that hold a developing embryo (and later fetus) until shortly before birth. The inner of these membranes, the amnion, encloses the amniotic cavity, containing the amniotic fluid and the embryo. The outer membrane, the chorion, contains the amnion and is part of the placenta. On the outer side, the amniotic sac is connected to the yolk sac, the allantois, and via the umbilical cord, the placenta.
The yolk sac, amnion, chorion, and allantois are the four extraembryonic membranes that lie outside of the embryo and are involved in providing nutrients and protection to the developing embryo. They form from the inner cell mass; the first to form is the yolk sac followed by the amnion which grows over the developing embryo. The amnion remains an important extraembryonic membrane throughout prenatal development. The third membrane is the allantois, and the fourth is the chorion which surrounds the embryo after about a month and eventually fuses with the amnion.
Amniocentesis is a medical procedure where fluid from the sac is sampled during fetal development, between 15 and 20 weeks of pregnancy, to be used in prenatal diagnosis of chromosomal abnormalities and fetal infections.
Structure
The amniotic cavity is the closed sac between the embryo and the amnion, containing the amniotic fluid. The amniotic cavity is formed by the fusion of the parts of the amniotic fold, which first makes its appearance at the cephalic extremity and subsequently at the caudal end and sides of the embryo. As the amniotic fold rises and fuses over the dorsal aspect of the embryo, the amniotic cavity is formed.
Development
At the beginning of the second week, a cavity appears within the inner cell mass, and when it enlarges, it becomes the amniotic cavity. The floor of the amniotic cavity is formed by the e
Document 1:::
A conceptus (from Latin: concipere, to conceive) is an embryo and its appendages (adnexa), the associated membranes, placenta, and umbilical cord; the products of conception or, more broadly, "the product of conception at any point between fertilization and birth." The conceptus includes all structures that develop from the zygote, both embryonic and extraembryonic. It includes the embryo as well as the embryonic part of the placenta and its associated membranes: amnion, chorion (gestational sac), and yolk sac.
Document 2:::
The placenta of humans, and certain other mammals contains structures known as cotyledons, which transmit fetal blood and allow exchange of oxygen and nutrients with the maternal blood.
Ruminants
The Artiodactyla have a cotyledonary placenta. In this form of placenta the chorionic villi form a number of separate circular structures (cotyledons) which are distributed over the surface of the chorionic sac. Sheep, goats and cattle have between 72 and 125 cotyledons whereas deer have 4-6 larger cotyledons.
Human
The form of the human placenta is generally classified as a discoid placenta. Within this the cotyledons are the approximately 15-25 separations of the decidua basalis of the placenta, separated by placental septa. Each cotyledon consists of a main stem of a chorionic villus as well as its branches and sub-branches.
Vasculature
The cotyledons receive fetal blood from chorionic vessels, which branch off cotyledon vessels into the cotyledons, which, in turn, branch into capillaries. The cotyledons are surrounded by maternal blood, which can exchange oxygen and nutrients with the fetal blood in the capillaries.
Document 3:::
The gestational sac is the large cavity of fluid surrounding the embryo. During early embryogenesis it consists of the extraembryonic coelom, also called the chorionic cavity. The gestational sac is normally contained within the uterus. It is the only available structure that can be used to determine if an intrauterine pregnancy exists until the embryo can be identified.
On obstetric ultrasound, the gestational sac is a dark (anechoic) space surrounded by a white (hyperechoic) rim.
Structure
The gestational sac is spherical in shape, and is usually located in the upper part (fundus) of the uterus. By approximately nine weeks of gestational age, due to folding of the trilaminar germ disc, the amniotic sac expands and occupies the majority of the volume of the gestational sac, eventually reducing the extraembryonic coelom (the gestational sac or the chorionic cavity) to a thin layer between the parietal somatopleuric and visceral splanchnopleuric layer of extraembryonic mesoderm.
Development
During embryogenesis, the extraembryonic coelom (or chorionic cavity) that constitutes the gestational sac is a portion of the conceptus consisting of a cavity between Heuser's membrane and the trophoblast.
During formation of the primary yolk sac, some of the migrating hypoblast cells differentiate into mesenchymal cells that fill the space between Heuser's membrane and the trophoblast, forming the extraembryonic mesoderm. As development progresses, small lacunae begin to form within the extraembryonic mesoderm which enlarges to become the extraembryonic coelom.
Heuser's membrane cells (hypoblast cells) that migrated along the inner cytotrophoblast lining of the blastocoel secrete an extracellular matrix along the way. Cells of the hypoblast migrate along the outer edges of this reticulum and form the extraembryonic mesoderm; this disrupts the extraembryonic reticulum. Soon pockets form in the reticulum, which ultimately coalesce to form the extraembryonic coelom.
The e
Document 4:::
During embryonic development of the eye, the outer wall of the bulb of the optic vesicles becomes thickened and invaginated, and the bulb is thus converted into a cup, the optic cup (or ophthalmic cup), consisting of two strata of cells. These two strata are continuous with each other at the cup margin, which ultimately overlaps the front of the lens and reaches as far forward as the future aperture of the pupil.
The optic cup is part of the diencephalon and gives rise to the retina of the eye.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What sac, which was sitting on top of the flat embryo, envelops the embryo as it folds?
A. amniotic sac
B. uterus
C. epithelial sac
D. umbilical sac
Answer:
|
|
sciq-1498
|
multiple_choice
|
What kind of compounds are named for their positive metal ion first, followed by their negative nonmetal ion?
|
[
"horizontal compounds",
"magnetic compounds",
"ionic compounds",
"magnetic compounds"
] |
C
|
Relevant Documents:
Document 0:::
The purpose of this annotated list is to provide a chronological, consolidated list of nonmetal monographs, which could enable the interested reader to further trace classification approaches in this area. Those marked with a ▲ classify the following 14 elements as nonmetals: H, N; O, S; the stable halogens; and the noble gases.
Document 1:::
In chemical nomenclature, the IUPAC nomenclature of organic chemistry is a method of naming organic chemical compounds as recommended by the International Union of Pure and Applied Chemistry (IUPAC). It is published in the Nomenclature of Organic Chemistry (informally called the Blue Book). Ideally, every possible organic compound should have a name from which an unambiguous structural formula can be created. There is also an IUPAC nomenclature of inorganic chemistry.
To avoid long and tedious names in normal communication, the official IUPAC naming recommendations are not always followed in practice, except when it is necessary to give an unambiguous and absolute definition to a compound. IUPAC names can sometimes be simpler than older names, as with ethanol, instead of ethyl alcohol. For relatively simple molecules they can be more easily understood than non-systematic names, which must be learned or looked up. However, the common or trivial name is often substantially shorter and clearer, and so preferred. These non-systematic names are often derived from an original source of the compound. Also, very long names may be less clear than structural formulas.
Basic principles
In chemistry, a number of prefixes, suffixes and infixes are used to describe the type and position of the functional groups in the compound.
The steps for naming an organic compound are:
Identification of the parent hydride or parent hydrocarbon chain. This chain must obey the following rules, in order of precedence (a sketch of this ordering follows the list):
It should have the maximum number of substituents of the suffix functional group. By suffix, it is meant that the parent functional group should have a suffix, unlike halogen substituents. If more than one functional group is present, the one with highest group precedence should be used.
It should have the maximum number of multiple bonds.
It should have the maximum length.
It should have the maximum number of substituents or branches cited as prefixes
It should have the ma
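The precedence rules above amount to a lexicographic comparison, as the following minimal Python sketch shows; the candidate chains and their attribute counts are invented purely for illustration.

# Pick the parent chain by lexicographic precedence: most suffix-group
# substituents first, then most multiple bonds, then greatest length.
candidates = [
    {"name": "chain A", "suffix_groups": 1, "multiple_bonds": 0, "length": 7},
    {"name": "chain B", "suffix_groups": 1, "multiple_bonds": 1, "length": 6},
    {"name": "chain C", "suffix_groups": 0, "multiple_bonds": 2, "length": 8},
]

parent = max(candidates, key=lambda c: (c["suffix_groups"],
                                        c["multiple_bonds"],
                                        c["length"]))
print(parent["name"])  # -> 'chain B'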
Document 2:::
Steudel R 2020, Chemistry of the Non-metals: Syntheses - Structures - Bonding - Applications, in collaboration with D Scheschkewitz, Berlin, Walter de Gruyter, . ▲
An updated translation of the 5th German edition of 2013, incorporating the literature up to Spring 2019. Twenty-three nonmetals, including B, Si, Ge, As, Se, Te, and At but not Sb (nor Po). The nonmetals are identified on the basis of their electrical conductivity at absolute zero putatively being close to zero, rather than finite as in the case of metals. That does not work for As however, which has the electronic structure of a semimetal (like Sb).
Halka M & Nordstrom B 2010, "Nonmetals", Facts on File, New York,
A reading level 9+ book covering H, C, N, O, P, S, Se. Complementary books by the same authors examine (a) the post-transition metals (Al, Ga, In, Tl, Sn, Pb and Bi) and metalloids (B, Si, Ge, As, Sb, Te and Po); and (b) the halogens and noble gases.
Woolins JD 1988, Non-Metal Rings, Cages and Clusters, John Wiley & Sons, Chichester, .
A more advanced text that covers H; B; C, Si, Ge; N, P, As, Sb; O, S, Se and Te.
Steudel R 1977, Chemistry of the Non-metals: With an Introduction to Atomic Structure and Chemical Bonding, English edition by FC Nachod & JJ Zuckerman, Berlin, Walter de Gruyter, . ▲
Twenty-four nonmetals, including B, Si, Ge, As, Se, Te, Po and At.
Powell P & Timms PL 1974, The Chemistry of the Non-metals, Chapman & Hall, London, . ▲
Twenty-two nonmetals including B, Si, Ge, As and Te. Tin and antimony are shown as being intermediate between metals and nonmetals; they are later shown as either metals or nonmetals. Astatine is counted as a metal.
Document 3:::
For solid-solution MXenes, they have the general formulas: (M'2−yM"y)C, (M'3−yM"y)C2, (M'4−yM"y)C3, o
Document 4:::
A monatomic ion (also called simple ion) is an ion consisting of exactly one atom. If, instead of being monatomic, an ion contains more than one atom, even if these are of the same element, it is called a polyatomic ion. For example, calcium carbonate consists of the monatomic cation Ca2+ and the polyatomic anion CO32−; both pentazenium (N5+) and azide (N3−) are polyatomic as well.
A type I binary ionic compound contains a metal that forms only one type of ion. A type II ionic compound contains a metal that forms more than one type of ion, i.e., the same element in different oxidation states.
{|class="wikitable"
|-
! colspan="2" | Common type I monatomic cations
|-
| Hydrogen
| H+
|-
| Lithium
| Li+
|-
| Sodium
| Na+
|-
| Potassium
| K+
|-
| Rubidium
| Rb+
|-
| Caesium
| Cs+
|-
| Magnesium
| Mg2+
|-
| Calcium
| Ca2+
|-
| Strontium
| Sr2+
|-
| Barium
| Ba2+
|-
| Aluminium
| Al3+
|-
| Silver
| Ag+
|-
| Zinc
| Zn2+
|-
|}
{|class="wikitable"
|-
! colspan="3" | Common type II monatomic cations
|-
|-
| iron(II)
| Fe2+
| ferrous
|-
| iron(III)
| Fe3+
| ferric
|-
| copper(I)
| Cu+
| cuprous
|-
| copper(II)
| Cu2+
| cupric
|-
| cobalt(II)
| Co2+
| cobaltous
|-
| cobalt(III)
| Co3+
| cobaltic
|-
| tin(II)
| Sn2+
| stannous
|-
| tin(IV)
| Sn4+
| stannic
|}
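To make the metal-first naming convention concrete, here is a minimal Python sketch; the ion dictionaries are abbreviated to a few entries drawn from the tables above, and the helper function name is invented for illustration.

# Name a simple binary ionic compound: metal cation first, nonmetal anion second.
TYPE_I_CATIONS = {"Na+": "sodium", "K+": "potassium", "Ca2+": "calcium"}
TYPE_II_CATIONS = {"Fe2+": "iron(II)", "Fe3+": "iron(III)", "Cu2+": "copper(II)"}
ANIONS = {"Cl-": "chloride", "O2-": "oxide", "S2-": "sulfide"}

def name_compound(cation, anion):
    metal = TYPE_I_CATIONS.get(cation) or TYPE_II_CATIONS.get(cation)
    if metal is None or anion not in ANIONS:
        raise ValueError("ion not in the (abbreviated) tables")
    return f"{metal} {ANIONS[anion]}"

print(name_compound("Na+", "Cl-"))   # -> 'sodium chloride'
print(name_compound("Fe3+", "O2-"))  # -> 'iron(III) oxide'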
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What kind of compounds are named for their positive metal ion first, followed by their negative nonmetal ion?
A. horizontal compounds
B. magnetic compounds
C. ionic compounds
D. magnetic compounds
Answer:
|
|
ai2_arc-406
|
multiple_choice
|
What best explains why some cooking pans have rubber handles?
|
[
"The rubber in the handles is easy to hold.",
"The rubber in the handles is a good insulator.",
"The rubber in the handles keeps the food in the pan hot.",
"The rubber in the handles keeps the metal in the pan cool."
] |
B
|
Relevant Documents:
Document 0:::
In cooking, several factors, including materials, techniques, and temperature, can influence the surface chemistry of the chemical reactions and interactions that create food. All of these factors depend on the chemical properties of the surfaces of the materials used. The material properties of cookware, such as hydrophobicity, surface roughness, and conductivity can impact the taste of a dish dramatically. The technique of food preparation alters food in fundamentally different ways, which produce unique textures and flavors. The temperature of food preparation must be considered when choosing the correct ingredients.
Materials in cooking
The interactions between food and pan are very dependent on the material that the pan is made of. Whether or not the pan is hydrophilic or hydrophobic, the heat conductivity and capacity, surface roughness, and more all determine how the food is cooked.
Stainless steel
Stainless steel is considered stainless because it has at least 11% chromium by mass. Chromium is a relatively inert metal and does not rust or react as easily as plain carbon steel. This is what makes it an exceptional material for cooking. It is also fairly inexpensive, but does not have a very high thermal conductivity. From a surface standpoint, this is because of the thin layer of chromium oxide that is formed on the surface. This thin layer protects the metal from rusting or corroding. While it is protective, the oxide layer is not very conductive, which makes cooking food less efficient than it could be. For most cooking applications, high thermal conductivity is desirable to create an evenly heated surface on which to cook. In this way, stainless steel is usually not considered high-grade cookware.
In terms of surface interactions, chromium oxide is polar. The oxygen atoms on the surface have a permanent dipole moment, and are therefore hydrophilic. This means that water will wet it, but oils or other lipids will not.
Cast iron
Cast-iron
Document 1:::
The Handle-o-Meter is a testing machine developed by Johnson & Johnson and now manufactured by Thwing-Albert that measures the "handle" of sheeted materials: a combination of their surface friction and flexibility. Originally, it was used to test the durability and flexibility of toilet paper and paper towels.
The test sample is placed over an adjustable slot. The resistance encountered by the penetrator blade as it is moved into the slot by a pivoting arm is measured by the machine.
Details
The data collected when such nonwovens, tissues, toweling, film and textiles are tested have been shown to correlate well with the actual performance of these materials as finished products.
Materials are simply placed over the slot that extends across the instrument platform, and the operator then starts the test. There are three different test modes which can be applied to the material: single, double, and quadruple. The average is automatically calculated for double or quadruple tests.
Features
Adjustable slot openings
Interchangeable beams
Auto-ranging
2 x 40 LCD display
Statistical Analysis
RS-232 Output and Serial Port
Industry Standards:
ASTM D2923, D6828-02
TAPPI T498
INDA IST 90.3
Document 2:::
Food rheology is the study of the rheological properties of food, that is, the consistency and flow of food under tightly specified conditions. The consistency, degree of fluidity, and other mechanical properties are important in understanding how long food can be stored, how stable it will remain, and in determining food texture. The acceptability of food products to the consumer is often determined by food texture, such as how spreadable and creamy a food product is. Food rheology is important in quality control during food manufacture and processing. Food rheology terms have been noted since ancient times. In ancient Egypt, bakers judged the consistency of dough by rolling it in their hands.
Overview
There is a large body of literature on food rheology because the study of food rheology entails unique factors beyond an understanding of the basic rheological dynamics of the flow and deformation of matter. Food can be classified according to its rheological state, such as a solid, gel, liquid, emulsion with associated rheological behaviors, and its rheological properties can be measured. These properties will affect the design of food processing plants, as well as shelf life and other important factors, including sensory properties that appeal to consumers. Because foods are structurally complex, often a mixture of fluid and solids with varying properties within a single mass, the study of food rheology is more complicated than study in fields such as the rheology of polymers. However, food rheology is something we experience every day with our perception of food texture (see below), and basic concepts of food rheology apply well to polymer physics, oil flow, etc. For this reason, examples of food rheology are didactically useful to explain the dynamics of other materials we are less familiar with. Ketchup is commonly used as an example of a Bingham fluid, and its flow behavior can be compared to that of a polymer melt.
Psychorheology
Psychorheology is the
Document 3:::
A non-stick surface is engineered to reduce the ability of other materials to stick to it. Non-stick cookware is a common application, where the non-stick coating allows food to brown without sticking to the pan. Non-stick is often used to refer to surfaces coated with polytetrafluoroethylene (PTFE), a well-known brand of which is Teflon. In the twenty-first century, other coatings have been marketed as non-stick, such as anodized aluminium, silica, enameled cast iron, and seasoned cookware.
Types
Seasoning
Cast iron, carbon steel, stainless steel and cast aluminium cookware may be seasoned before cooking by applying a fat to the surface and heating it to polymerize it. This produces a dry, hard, smooth, hydrophobic coating, which is non-stick when food is cooked with a small amount of cooking oil or fat.
Fluoropolymer
The modern non-stick pans were made using a coating of Teflon (polytetrafluoroethylene or PTFE). PTFE was invented serendipitously by Roy Plunkett in 1938 while he was working for a joint venture of the DuPont company. The substance was found to have several unique properties, including very good corrosion resistance and the lowest coefficient of friction of any substance yet manufactured. PTFE was first used to make seals resistant to the uranium hexafluoride gas used in development of the atomic bomb during World War II, and was regarded as a military secret. DuPont registered the Teflon trademark in 1944 and soon began planning for post-war commercial use of the new product.
By 1951 Dupont had developed applications for Teflon in commercial bread and cookie-making; however, the company avoided the market for consumer cookware due to potential problems associated with release of toxic gases if stove-top pans were overheated in inadequately ventilated spaces. While working at DuPont, NYU Tandon School of Engineering alumnus John Gilbert was asked to evaluate a newly developed material called Teflon. His experiments using the fluorinated polymer as a s
Document 4:::
The ring-and-ball apparatus is used to determine the softening point of bitumen, waxes, LDPE, HDPE/PP blend granules, rosin and solid hydrocarbon resins. The apparatus was first designed in the 1910s, while ASTM adopted a test method in 1916. This instrument is ideally used for materials having softening points in the range of 30 °C to 157 °C.
Components
Two brass rings.
Two steel balls.
Two ball guides to hold the balls in position.
A Support to hold the rings, balls and thermometer in position
A Glass beaker
Thermometer
Hot plate
Magnetic stirrer
Glycerol or water as heating bath
Procedure
The solid sample is taken in a Petri dish and melted by heating it on a standard hot plate. The bubble-free liquefied sample is poured from the Petri dish and cast into the ring. The brass shouldered rings in this apparatus have 6.4 mm depth. The cast sample in the ring is kept undisturbed for one hour to solidify. Excess material is removed with a hot knife. The ring is set with the ball on top with ball guides on the grooved plate within the heating bath. As the temperature rises, the balls begin to sink through the rings, carrying a portion of the softened sample with them. The temperature at which the steel balls touch the bottom plate determines the softening point in degrees Celsius.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What best explains why some cooking pans have rubber handles?
A. The rubber in the handles is easy to hold.
B. The rubber in the handles is a good insulator.
C. The rubber in the handles keeps the food in the pan hot.
D. The rubber in the handles keeps the metal in the pan cool.
Answer:
|
|
sciq-8599
|
multiple_choice
|
Hot material from near the sun’s center rises in which zone?
|
[
"thermosphere",
"radiative zone",
"drifting zone",
"convection zone"
] |
D
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the students' chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 1:::
Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers.
There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by organizations such as the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (Humanities, Arts, and Social Sciences), rebranded in 2020 as SHAPE (Social Sciences, Humanities and the Arts for People and the Economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM.
Terminology
History
Previously referred to as SMET by the NSF, in the early 1990s the acronym STEM was used by a variety of educators, including Charles E. Vela, the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE). Moreover, the CAHSEE started a summer program for talented under-represented students in the Washington, D.C., area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering edu
Document 2:::
Earth's internal heat budget is fundamental to the thermal history of the Earth. The flow of heat from Earth's interior to the surface is estimated at 47±2 terawatts (TW) and comes from two main sources in roughly equal amounts: the radiogenic heat produced by the radioactive decay of isotopes in the mantle and crust, and the primordial heat left over from the formation of Earth.
Earth's internal heat travels along geothermal gradients and powers most geological processes. It drives mantle convection, plate tectonics, mountain building, rock metamorphism, and volcanism. Convective heat transfer within the planet's high-temperature metallic core is also theorized to sustain a geodynamo which generates Earth's magnetic field.
Despite its geological significance, Earth's interior heat contributes only 0.03% of Earth's total energy budget at the surface, which is dominated by 173,000 TW of incoming solar radiation. This external energy source powers most of the planet's atmospheric, oceanic, and biologic processes. Nevertheless, on land and at the ocean floor, the sensible heat absorbed from non-reflected insolation flows inward only by means of thermal conduction, and thus penetrates only several tens of centimeters on the daily cycle and only several tens of meters on the annual cycle. This renders solar radiation minimally relevant for processes internal to Earth's crust.
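For reference, the 0.03% figure is just the ratio of the two flux estimates quoted above:

\[ \frac{47\ \mathrm{TW}}{173{,}000\ \mathrm{TW}} \approx 2.7 \times 10^{-4} \approx 0.03\% \]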
Global data on heat-flow density are collected and compiled by the International Heat Flow Commission of the International Association of Seismology and Physics of the Earth's Interior.
Heat and early estimate of Earth's age
Based on calculations of Earth's cooling rate, which assumed constant conductivity in the Earth's interior, in 1862 William Thomson, later Lord Kelvin, estimated the age of the Earth at 98 million years, which contrasts with the age of 4.5 billion years obtained in the 20th century by radiometric dating. As pointed out by John Perry in 1895 a variable conductivity in the E
Document 3:::
The core–mantle boundary (CMB) of Earth lies between the planet's silicate mantle and its liquid iron–nickel outer core, at a depth of about 2,890 km below Earth's surface. The boundary is observed via the discontinuity in seismic wave velocities at that depth due to the differences between the acoustic impedances of the solid mantle and the molten outer core. P-wave velocities are much slower in the outer core than in the deep mantle while S-waves do not exist at all in the liquid portion of the core. Recent evidence suggests a distinct boundary layer directly above the CMB possibly made of a novel phase of the basic perovskite mineralogy of the deep mantle named post-perovskite. Seismic tomography studies have shown significant irregularities within the boundary zone and appear to be dominated by the African and Pacific Large Low-Shear-Velocity Provinces (LLSVP).
The uppermost section of the outer core is thought to be about 500–1,800 K hotter than the overlying mantle, creating a thermal boundary layer. The boundary is thought to harbor topography, much like Earth's surface, that is supported by solid-state convection within the overlying mantle. Variations in the thermal properties of the core-mantle boundary may affect how the outer core's iron-rich fluids flow, which are ultimately responsible for Earth's magnetic field.
The D″ region
The approx. 200 km thick layer of the lower mantle directly above the boundary is referred to as the D″ region ("D double-prime" or "D prime prime") and is sometimes included in discussions regarding the core–mantle boundary zone. The D″ name originates from geophysicist Keith Bullen's designations for the Earth's layers. His system was to label each layer alphabetically, A through G, with the crust as 'A' and the inner core as 'G'. In his 1942 publication of his model, the entire lower mantle was the D layer. In 1949, Bullen found his 'D' layer to actually be two different layers. The upper part of the D layer, about 1800 km thick, was r
Document 4:::
The thermal history of Earth involves the study of the cooling history of Earth's interior. It is a sub-field of geophysics. (Thermal histories are also computed for the internal cooling of other planetary and stellar bodies.) The study of the thermal evolution of Earth's interior is uncertain and controversial in all aspects, from the interpretation of petrologic observations used to infer the temperature of the interior, to the fluid dynamics responsible for heat loss, to material properties that determine the efficiency of heat transport.
Overview
Observations that can be used to infer the temperature of Earth's interior range from the oldest rocks on Earth to modern seismic images of the inner core size. Ancient volcanic rocks can be associated with a depth and temperature of melting through their geochemical composition. Using this technique and some geological inferences about the conditions under which the rock is preserved, the temperature of the mantle can be inferred. The mantle itself is fully convective, so that the temperature in the mantle is basically constant with depth outside the top and bottom thermal boundary layers. This is not quite true because the temperature in any convective body under pressure must increase along an adiabat, but the adiabatic temperature gradient is usually much smaller than the temperature jumps at the boundaries. Therefore, the mantle is usually associated with a single or potential temperature that refers to the mid-mantle temperature extrapolated along the adiabat to the surface. The potential temperature of the mantle is estimated to be about 1350 °C today. There is an analogous potential temperature of the core but since there are no samples from the core its present-day temperature relies on extrapolating the temperature along an adiabat from the inner core boundary, where the iron solidus is somewhat constrained.
Thermodynamics
The simplest mathematical formulation of the thermal history of Earth's interior i
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Hot material from near the sun’s center rises in which zone?
A. thermosphere
B. radiative zone
C. drifting zone
D. convection zone
Answer:
|
|
sciq-8905
|
multiple_choice
|
A rounded hollow carved in the side of a mountain by a glacier is known as?
|
[
"a crater",
"a crest",
"a cavern",
"a cirque"
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
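To make the subset structure concrete, here is a minimal Python sketch (the three-skill domain and its states are invented for illustration) that checks one defining property of a knowledge space, closure under union:

from itertools import combinations

# Hypothetical domain of three skills and a family of feasible states.
domain = frozenset({"counting", "addition", "multiplication"})
states = {
    frozenset(),
    frozenset({"counting"}),
    frozenset({"counting", "addition"}),
    domain,
}

def is_knowledge_space(states, domain):
    # A knowledge space contains the empty state and the full domain,
    # and the union of any two feasible states is again feasible.
    if frozenset() not in states or domain not in states:
        return False
    return all(a | b in states for a, b in combinations(states, 2))

print(is_knowledge_space(states, domain))  # True for this example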
Document 2:::
Further Mathematics is the title given to a number of advanced secondary mathematics courses. The term "Higher and Further Mathematics", and the term "Advanced Level Mathematics", may also refer to any of several advanced mathematics courses at many institutions.
In the United Kingdom, Further Mathematics describes a course studied in addition to the standard mathematics AS-Level and A-Level courses. In the state of Victoria in Australia, it describes a course delivered as part of the Victorian Certificate of Education (see § Australia (Victoria) for a more detailed explanation). Globally, it describes a course studied in addition to GCE AS-Level and A-Level Mathematics, or one which is delivered as part of the International Baccalaureate Diploma.
In other words, Further Mathematics may also be referred to as part of advanced mathematics, or advanced-level maths.
United Kingdom
Background
A qualification in Further Mathematics involves studying both pure and applied modules. Whilst the pure modules (formerly known as Pure 4–6 or Core 4–6, now known as Further Pure 1–3, where 4 exists for the AQA board) build on knowledge from the core mathematics modules, the applied modules may start from first principles.
The structure of the qualification varies between exam boards.
With regard to Mathematics degrees, most universities do not require Further Mathematics, and may incorporate foundation maths modules or offer "catch-up" classes covering any additional content. Exceptions are the University of Warwick and the University of Cambridge, which require Further Mathematics to at least AS level; University College London requires or recommends an A2 in Further Maths for its maths courses; Imperial College requires an A in A-level Further Maths, while other universities may recommend it or may promise lower offers in return. Some schools and colleges may not offer Further Mathematics, but online resources are available
Although the subject has about 60% of its cohort obtainin
Document 3:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earning a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 4:::
Advanced Placement (AP) Calculus (also known as AP Calc, Calc AB / Calc BC or simply AB / BC) is a set of two distinct Advanced Placement calculus courses and exams offered by the American nonprofit organization College Board. AP Calculus AB covers basic introductions to limits, derivatives, and integrals. AP Calculus BC covers all AP Calculus AB topics plus additional topics (including integration by parts, Taylor series, parametric equations, vector calculus, and polar coordinate functions).
AP Calculus AB
AP Calculus AB is an Advanced Placement calculus course. It is traditionally taken after precalculus and is the first calculus course offered at most schools except for possibly a regular calculus class. The Pre-Advanced Placement pathway for math helps prepare students for further Advanced Placement classes and exams.
Purpose
According to the College Board:
Topic outline
The material includes the study and application of differentiation and integration, and graphical analysis including limits, asymptotes, and continuity. An AP Calculus AB course is typically equivalent to one semester of college calculus.
Analysis of graphs (predicting and explaining behavior)
Limits of functions (one and two sided)
Asymptotic and unbounded behavior
Continuity
Derivatives
Concept
At a point
As a function
Applications
Higher order derivatives
Techniques
Integrals
Interpretations
Properties
Applications
Techniques
Numerical approximations
Fundamental theorem of calculus
Antidifferentiation
L'Hôpital's rule
Separable differential equations
AP Calculus BC
AP Calculus BC is equivalent to a full year regular college course, covering both Calculus I and II. After passing the exam, students may move on to Calculus III (Multivariable Calculus).
Purpose
According to the College Board,
Topic outline
AP Calculus BC includes all of the topics covered in AP Calculus AB, as well as the following:
Convergence tests for series
Taylor series
Parametric equations
Polar functions (inclu
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A rounded hollow carved in the side of a mountain by a glacier is known as?
A. a crater
B. a crest
C. a cavern
D. a cirque
Answer:
|
|
sciq-3170
|
multiple_choice
|
During the scientific revolution, who proposed that the sun, not earth, is the center of the solar system?
|
[
"janus",
"Newton",
"copernicus",
"Galileo"
] |
C
|
Relevant Documents:
Document 0:::
The Copernican Revolution is a 1957 book by the philosopher Thomas Kuhn, in which the author provides an analysis of the Copernican Revolution, documenting the pre-Ptolemaic understanding through the Ptolemaic system and its variants until the eventual acceptance of the Keplerian system.
Kuhn argues that the Ptolemaic system provided broader appeal than a simple astronomical system but also became intertwined in broader philosophical and theological beliefs. Kuhn argues that this broader appeal made it more difficult for other systems to be proposed.
Summary
At the end of the book, Kuhn summarizes the achievements of Copernicus and Newton, while comparing the incompatibility of Newtonian physics with Aristotelian concepts that preceded the then new physics. Kuhn also noted that discoveries, such as that produced by Newton, were not in agreement with the prevailing worldview during his lifetime.
Document 1:::
The Copernican Question: Prognostication, Skepticism, and Celestial Order is a 704-page book written by Robert S. Westman and published by University of California Press (Berkeley, Los Angeles, London) in 2011 and in 2020 (paperback). The book is a broad historical overview of Europe's astronomical and astrological culture leading to Copernicus’s De revolutionibus and follows the scholarly debates that took place roughly over three generations after Copernicus.
Summary
In 1543, Nicolaus Copernicus publicly defended his hypothesis that the earth is a planet and the sun a body resting near the center of a finite universe. This view challenged a long-held, widespread consensus about the order of the planets. But why did Copernicus make this bold proposal? And why did it matter? The Copernican Question revisits this pivotal moment in the history of science and puts political and cultural developments at the center rather than the periphery of the story. When Copernicus first hit on his theory around 1510, European society at all social levels was consumed with chronic warfare, the syphilis pandemic and recurrence of the bubonic plague, and, soon thereafter, Martin Luther’s break with the Catholic church. Apocalyptic prophecies about the imminent end of the world were rife; the relatively new technology of print was churning out reams of alarming astrological prognostications even as astrology itself came under serious attack in July 1496 from the Renaissance Florentine polymath Giovanni Pico della Mirandola (1463-1494). Copernicus knew Pico’s work, possibly as early as the year of its publication in Bologna, the city in which he lived with the astrological prognosticator and astronomer, Domenico Maria di Novara (1454-1504). Against Pico’s multi-pronged critique, Copernicus sought to protect the credibility of astrology by reforming the astronomical foundations on which astrology rested. But, his new hypothesis came at the cost of introducing new uncertainties and enge
Document 2:::
Astronomia nova (English: New Astronomy) is a book, published in 1609, that contains the results of the astronomer Johannes Kepler's ten-year-long investigation of the motion of Mars.
One of the most significant books in the history of astronomy, the Astronomia nova provided strong arguments for heliocentrism and contributed valuable insight into the movement of the planets. This included the first mention of the planets' elliptical paths and the change of their movement to the movement of free floating bodies as opposed to objects on rotating spheres. It is recognized as one of the most important works of the Scientific Revolution.
Background
Prior to Kepler, Nicolaus Copernicus proposed in 1543 that the Earth and other planets orbit the Sun. The Copernican model of the Solar System was regarded as a device to explain the observed positions of the planets rather than a physical description.
Kepler sought for and proposed physical causes for planetary motion. His work is primarily based on the research of his mentor, Tycho Brahe. The two, though close in their work, had a tumultuous relationship. Regardless, in 1601 on his deathbed, Brahe asked Kepler to make sure that he did not "die in vain," and to continue the development of his model of the Solar System. Kepler would instead write the Astronomia nova, in which he rejects the Tychonic system, as well as the Ptolemaic system and the Copernican system. Some scholars have speculated that Kepler's dislike for Brahe may have had a hand in his rejection of the Tychonic system and formation of a new one.
By 1602, Kepler set to work on determining the orbit pattern of Mars, keeping David Fabricius informed of his progress. He suggested the possibility of an oval orbit to Fabricius by early 1604, though was not believed. Later in the year, Kepler wrote back with his discovery of Mars's elliptical orbit. The manuscript for Astronomia nova was completed by September 1607, and was in pr
Document 3:::
In physics, the history of centrifugal and centripetal forces illustrates a long and complex evolution of thought about the nature of forces, relativity, and the nature of physical laws.
Huygens, Leibniz, Newton, and Hooke
Early scientific ideas about centrifugal force were based upon intuitive perception, and circular motion was considered somehow more "natural" than straight-line motion. According to Domenico Bertoloni-Meli:
For Huygens and Newton centrifugal force was the result of a curvilinear motion of a body; hence it was located in nature, in the object of investigation. According to a more recent formulation of classical mechanics, centrifugal force depends on the choice of how phenomena can be conveniently represented. Hence it is not located in nature, but is the result of a choice by the observer. In the first case a mathematical formulation mirrors centrifugal force; in the second it creates it.
Christiaan Huygens coined the term "centrifugal force" in his 1659 De Vi Centrifuga and wrote of it in his 1673 Horologium Oscillatorium on pendulums. In 1676–77, Isaac Newton combined Kepler's laws of planetary motion with Huygens' ideas and found
the proposition that by a centrifugal force reciprocally as the square of the distance a planet must revolve in an ellipsis about the center of the force placed in the lower umbilicus of the ellipsis, and with a radius drawn to that center, describe areas proportional to the times.
Newton coined the term "centripetal force" (vis centripeta) in his discussions of gravity in his De motu corporum in gyrum, a 1684 manuscript which he sent to Edmond Halley.
Gottfried Leibniz as part of his "solar vortex theory" conceived of centrifugal force as a real outward force which is induced by the circulation of the body upon which the force acts. An inverse cube law centrifugal force appears in an equation representing planetary orbits, including non-circular ones, as Leibniz described in his 1689 Tentamen de motuum coelestium
Document 4:::
Feynman's Lost Lecture: The Motion of Planets Around the Sun is a book based on a lecture by Richard Feynman. Restoration of the lecture notes and conversion into book form was undertaken by Caltech physicist David L. Goodstein and archivist Judith R. Goodstein.
Feynman had given the lecture on the motion of bodies at Caltech on March 13, 1964, but the notes and pictures were lost for a number of years and consequently not included in The Feynman Lectures on Physics series. The lecture notes were later found, but without the photographs of his illustrative chalkboard drawings. One of the editors, David L. Goodstein, stated that at first without the photographs, it was very hard to figure out what diagrams he was referring to in the audiotapes, but a later finding of his own private lecture notes made it possible to understand completely the logical framework with which Feynman delivered the lecture.
Overview
You can explain to people who don't know much of the physics, the early history... how Newton discovered... Kepler's Laws, and equal areas, and that means it's toward the sun, and all this stuff. And then the key - they always ask then, "Well, how do you see that it's an ellipse if it's the inverse square?" Well, it's God damned hard, there's no question of that. But I tried to find the simplest one I could.
In a non-course lecture delivered to a freshman physics audience, Feynman undertakes to present an elementary, geometric demonstration of Newton's discovery of the fact that Kepler's first observation, that the planets travel in elliptical orbits, is a necessary consequence of Kepler's other two observations.
The structure of Feynman's lecture:
A historical introduction to the material
An overview of some geometric properties of an ellipse
Newton's demonstration that equal areas in equal times is equivalent to forces toward the sun
Feynman's demonstration that equal changes in velocity occur in equal angles in the orbit
Feynman's demonstration,
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
During the scientific revolution, who proposed that the sun, not earth, is the center of the solar system?
A. janus
B. Newton
C. copernicus
D. Galileo
Answer:
|
|
sciq-8358
|
multiple_choice
|
What is the basic unit of matter?
|
[
"neutron",
"atom",
"dark material",
"nucleus"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
The mass recorded by a mass spectrometer can refer to different physical quantities depending on the characteristics of the instrument and the manner in which the mass spectrum is displayed.
Units
The dalton (symbol: Da) is the standard unit that is used for indicating mass on an atomic or molecular scale (atomic mass). The unified atomic mass unit (symbol: u) is equivalent to the dalton. One dalton is approximately the mass of a single proton or neutron. The unified atomic mass unit has a value of approximately 1.66054 × 10⁻²⁷ kg. The amu without the "unified" prefix is an obsolete unit based on oxygen, which was replaced in 1961.
Molecular mass
The molecular mass (abbreviated Mr) of a substance, formerly also called molecular weight and abbreviated as MW, is the mass of one molecule of that substance, relative to the unified atomic mass unit u (equal to 1/12 the mass of one atom of 12C). Due to this relativity, the molecular mass of a substance is commonly referred to as the relative molecular mass, and abbreviated to Mr.
Average mass
The average mass of a molecule is obtained by summing the average atomic masses of the constituent elements. For example, the average mass of natural water with formula H2O is 1.00794 + 1.00794 + 15.9994 = 18.01528 Da.
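A minimal Python sketch of this summation (the helper function and dictionary are invented for illustration; the atomic masses are the values quoted above):

from collections import Counter

# Average atomic masses in daltons (values quoted in the text above).
ATOMIC_MASS_DA = {"H": 1.00794, "O": 15.9994}

def average_molecular_mass(formula):
    # Sum the average atomic mass of every atom in the formula.
    return sum(ATOMIC_MASS_DA[element] * count for element, count in formula.items())

water = Counter({"H": 2, "O": 1})
print(average_molecular_mass(water))  # 18.01528 Da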
Mass number
The mass number, also called the nucleon number, is the number of protons and neutrons in an atomic nucleus. The mass number is unique for each isotope of an element and is written either after the element name or as a superscript to the left of an element's symbol. For example, carbon-12 (12C) has 6 protons and 6 neutrons.
Nominal mass
The nominal mass for an element is the mass number of its most abundant naturally occurring stable isotope, and for an ion or molecule, the nominal mass is the sum of the nominal masses of the constituent atoms. Isotope abundances are tabulated by IUPAC: for example carbon has two stable isotopes 12C at 98.9% natural abundance and 13C at 1.1% natural abundance, thus the nominal mass of carbon i
Document 2:::
To help compare different orders of magnitude, the following lists describe various mass levels between 10⁻⁵⁹ kg and 10⁵² kg. The least massive thing listed here is a graviton, and the most massive thing is the observable universe. Typically, an object having greater mass will also have greater weight (see mass versus weight), especially if the objects are subject to the same gravitational field strength.
Units of mass
The table at right is based on the kilogram (kg), the base unit of mass in the International System of Units (SI). The kilogram is the only standard unit to include an SI prefix (kilo-) as part of its name. The gram (10⁻³ kg) is an SI derived unit of mass. However, the names of all SI mass units are based on gram, rather than on kilogram; thus 10³ kg is a megagram (10⁶ g), not a *kilokilogram.
The tonne (t) is an SI-compatible unit of mass equal to a megagram (Mg), or 10³ kg. The unit is in common use for masses above about 10³ kg and is often used with SI prefixes. For example, a gigagram (Gg) or 10⁹ g is 10³ tonnes, commonly called a kilotonne.
Other units
Other units of mass are also in use. Historical units include the stone, the pound, the carat, and the grain.
For subatomic particles, physicists use the mass equivalent to the energy represented by an electronvolt (eV). At the atomic level, chemists use the mass of one-twelfth of a carbon-12 atom (the dalton). Astronomers use the mass of the sun.
The least massive things: below 10⁻²⁴ kg
Unlike other physical quantities, mass–energy does not have an a priori expected minimal quantity, or an observed basic quantum as in the case of electric charge. Planck's law allows for the existence of photons with arbitrarily low energies. Consequently, there can only ever be an experimental upper bound on the mass of a supposedly massless particle; in the case of the photon, this confirmed upper bound is of the order of .
10⁻²⁴ to 10⁻¹⁸ kg
10⁻¹⁸ to 10⁻¹² kg
10⁻¹² to 10⁻⁶ kg
10⁻⁶ to 1 kg
Document 3:::
The atomic mass (ma or m) is the mass of an atom. Although the SI unit of mass is the kilogram (symbol: kg), atomic mass is often expressed in the non-SI unit dalton (symbol: Da) – equivalently, unified atomic mass unit (u). 1 Da is defined as 1/12 of the mass of a free carbon-12 atom at rest in its ground state. The protons and neutrons of the nucleus account for nearly all of the total mass of atoms, with the electrons and nuclear binding energy making minor contributions. Thus, the numeric value of the atomic mass when expressed in daltons has nearly the same value as the mass number. Conversion between mass in kilograms and mass in daltons can be done using the atomic mass constant mu.
The formula used for conversion is:
1 Da = Mu/NA = M(12C)/(12 NA)
where Mu is the molar mass constant, NA is the Avogadro constant, and M(12C) is the experimentally determined molar mass of carbon-12.
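A rough Python sketch of this conversion (the constant values are supplied by me, not the excerpt: Mu is approximately 1 g/mol, and NA has been exact since the 2019 SI redefinition):

# Convert a mass in daltons to kilograms: 1 Da = Mu / NA.
MU_KG_PER_MOL = 1.0e-3        # molar mass constant, approximate
N_AVOGADRO = 6.02214076e23    # Avogadro constant, exact by definition

def daltons_to_kg(mass_da):
    return mass_da * MU_KG_PER_MOL / N_AVOGADRO

# A carbon-12 atom is 12 Da by definition:
print(daltons_to_kg(12.0))  # ~1.9926e-26 kg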
The relative isotopic mass (see section below) can be obtained by dividing the atomic mass ma of an isotope by the atomic mass constant mu yielding a dimensionless value. Thus, the atomic mass of a carbon-12 atom is 12 Da by definition, but the relative isotopic mass of a carbon-12 atom is simply 12. The sum of relative isotopic masses of all atoms in a molecule is the relative molecular mass.
The atomic mass of an isotope and the relative isotopic mass refers to a certain specific isotope of an element. Because substances are usually not isotopically pure, it is convenient to use the elemental atomic mass which is the average (mean) atomic mass of an element, weighted by the abundance of the isotopes. The dimensionless (standard) atomic weight is the weighted mean relative isotopic mass of a (typical naturally occurring) mixture of isotopes.
The atomic mass of atoms, ions, or atomic nuclei is slightly less than the sum of the masses of their constituent protons, neutrons, and electrons, due to binding energy mass loss (per E = mc²).
Relative isotopic mass
Relative isotopic mass (a property of a single atom) is not to be confused w
Document 4:::
The elementary charge, usually denoted by e, is a fundamental physical constant, defined as the electric charge carried by a single proton or, equivalently, the magnitude of the negative electric charge carried by a single electron, which has charge −1 e.
In the SI system of units, the value of the elementary charge is exactly defined as e = 1.602176634 × 10⁻¹⁹ coulombs, or 160.2176634 zeptocoulombs (zC). Since the 2019 redefinition of SI base units, the seven SI base units are defined by seven fundamental physical constants, of which the elementary charge is one.
In the centimetre–gram–second system of units (CGS), the corresponding quantity is approximately 4.803 × 10⁻¹⁰ statcoulombs.
Robert A. Millikan and Harvey Fletcher's oil drop experiment first directly measured the magnitude of the elementary charge in 1909, differing from the modern accepted value by just 0.6%. Under assumptions of the then-disputed atomic theory, the elementary charge had also been indirectly inferred to ~3% accuracy from blackbody spectra by Max Planck in 1901 and (through the Faraday constant) at order-of-magnitude accuracy by Johann Loschmidt's measurement of the Avogadro number in 1865.
As a unit
In some natural unit systems, such as the system of atomic units, e functions as the unit of electric charge. The use of elementary charge as a unit was promoted by George Johnstone Stoney in 1874 for the first system of natural units, called Stoney units. Later, he proposed the name electron for this unit. At the time, the particle we now call the electron was not yet discovered and the difference between the particle electron and the unit of charge electron was still blurred. Later, the name electron was assigned to the particle and the unit of charge e lost its name. However, the unit of energy electronvolt (eV) is a remnant of the fact that the elementary charge was once called electron.
In other natural unit systems, the unit of charge is defined as √(ε₀ħc) with the result that
e = √(4πα) √(ε₀ħc) ≈ 0.30282212 √(ε₀ħc)
where α is the fine-structure constant, c is the speed of light, ε₀ is
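A quick numerical check of that relation in Python (the SI constant values below are supplied by me, not the excerpt):

import math

# Verify e / sqrt(eps0 * hbar * c) ~ sqrt(4*pi*alpha) ~ 0.30282212.
E_CHARGE = 1.602176634e-19   # C, exact
EPS0 = 8.8541878128e-12      # F/m
HBAR = 1.054571817e-34       # J s
C = 2.99792458e8             # m/s

natural_unit = math.sqrt(EPS0 * HBAR * C)
print(E_CHARGE / natural_unit)  # ~0.30282212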
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the basic unit of matter?
A. neutron
B. atom
C. dark material
D. nucleus
Answer:
|
|
sciq-8850
|
multiple_choice
|
When blood engorged capillaries leak fluid into neighboring tissues, what occurs?
|
[
"swelling",
"seeping",
"bleeding",
"infection"
] |
A
|
Relevant Documents:
Document 0:::
The Starling principle holds that extracellular fluid movements between blood and tissues are determined by differences in hydrostatic pressure and colloid osmotic (oncotic) pressure between plasma inside microvessels and interstitial fluid outside them. The Starling Equation, proposed many years after the death of Starling, describes that relationship in mathematical form and can be applied to many biological and non-biological semipermeable membranes. The classic Starling principle and the equation that describes it have in recent years been revised and extended.
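For concreteness, the classic equation can be written as follows (a standard textbook form supplied here, since the excerpt does not spell it out; the symbol names follow common usage and are not from the source):

J_v = L_p S [(P_c − P_i) − σ(π_c − π_i)]

where J_v is the volume filtration rate across the vessel wall, L_p S is the hydraulic conductance of the wall, P_c and P_i are the capillary and interstitial hydrostatic pressures, π_c and π_i are the corresponding colloid osmotic (oncotic) pressures, and σ is the reflection coefficient for plasma proteins.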
Every day around 8 litres of water (solvent) containing a variety of small molecules (solutes) leaves the blood stream of an adult human and perfuses the cells of the various body tissues. Interstitial fluid drains by afferent lymph vessels to one of the regional lymph node groups, where around 4 litres per day is reabsorbed to the blood stream. The remainder of the lymphatic fluid is rich in proteins and other large molecules and rejoins the blood stream via the thoracic duct which empties into the great veins close to the heart. Filtration from plasma to interstitial (or tissue) fluid occurs in microvascular capillaries and post-capillary venules. In most tissues the micro vessels are invested with a continuous internal surface layer that includes a fibre matrix now known as the endothelial glycocalyx whose interpolymer spaces function as a system of small pores, radius circa 5 nm. Where the endothelial glycocalyx overlies a gap in the junction molecules that bind endothelial cells together (inter endothelial cell cleft), the plasma ultrafiltrate may pass to the interstitial space, leaving larger molecules reflected back into the plasma.
A small number of continuous capillaries are specialised to absorb solvent and solutes from interstitial fluid back into the blood stream through fenestrations in endothelial cells, but the volume of solvent absorbed every day is small.
Discontinuous capillaries as
Document 1:::
Infiltration is the diffusion or accumulation (in a tissue or cells) of foreign substances in amounts excess of the normal. The material collected in those tissues or cells is called infiltrate.
Definitions of infiltration
As part of a disease process, infiltration is sometimes used to define the invasion of cancer cells into the underlying matrix or the blood vessels. Similarly, the term may describe the deposition of amyloid protein. During leukocyte extravasation, white blood cells move in response to cytokines from within the blood, into the diseased or infected tissues, usually in the same direction as a chemical gradient, in a process called chemotaxis. The presence of lymphocytes in tissue in greater than normal numbers is likewise called infiltration.
As part of medical intervention, local anaesthetics may be injected at more than one point so as to infiltrate an area prior to a surgical procedure. However, the term may also apply to unintended iatrogenic leakage of fluids from phlebotomy or intravenous drug delivery procedures, a process also known as extravasation or "tissuing".
Causes
Infiltration may be caused by:
Puncture of distal vein wall during venipuncture
Puncture of any portion of the vein wall by mechanical friction from the catheter/needle cannula
Dislodgement of the catheter/needle cannula from the intima of the vein which may be a result of a poorly secured IV device or inappropriate choice of venous site to puncture.
Improper cannula size or excessive delivery rate of the fluid
Signs and symptoms
The signs and symptoms of infiltration include:
Inflammation at or near the insertion site with swollen, taut skin with pain
Blanching and coolness of skin around IV site
Damp or wet dressing
Slowed or stopped infusion
No backflow of blood into IV tubing on lowering the solution container.
Grading
Nursing treatment
The use of warm compresses to treat infiltration has become controversial. It has been found that cold compresses may b
Document 2:::
The endothelium (plural: endothelia) is a single layer of squamous endothelial cells that line the interior surface of blood vessels and lymphatic vessels. The endothelium forms an interface between circulating blood or lymph in the lumen and the rest of the vessel wall. Endothelial cells form the barrier between vessels and tissue and control the flow of substances and fluid into and out of a tissue.
Endothelial cells in direct contact with blood are called vascular endothelial cells whereas those in direct contact with lymph are known as lymphatic endothelial cells. Vascular endothelial cells line the entire circulatory system, from the heart to the smallest capillaries.
These cells have unique functions that include fluid filtration, such as in the glomerulus of the kidney, blood vessel tone, hemostasis, neutrophil recruitment, and hormone trafficking. Endothelium of the interior surfaces of the heart chambers is called endocardium. An impaired function can lead to serious health issues throughout the body.
Structure
The endothelium is a thin layer of single flat (squamous) cells that line the interior surface of blood vessels and lymphatic vessels.
Endothelium is of mesodermal origin. Both blood and lymphatic capillaries are composed of a single layer of endothelial cells called a monolayer. In straight sections of a blood vessel, vascular endothelial cells typically align and elongate in the direction of fluid flow.
Terminology
The foundational model of anatomy, an index of terms used to describe anatomical structures, makes a distinction between endothelial cells and epithelial cells on the basis of which tissues they develop from, and states that the presence of vimentin rather than keratin filaments separates these from epithelial cells. Many considered the endothelium a specialized epithelial tissue.
Function
The endothelium forms an interface between circulating blood or lymph in the lumen and the rest of the vessel wall. This forms a barrier between v
Document 3:::
Leukocyte extravasation (also commonly known as leukocyte adhesion cascade or diapedesis – the passage of cells through the intact vessel wall) is the movement of leukocytes out of the circulatory system and towards the site of tissue damage or infection. This process forms part of the innate immune response, involving the recruitment of non-specific leukocytes. Monocytes also use this process in the absence of infection or tissue damage during their development into macrophages.
Overview
Leukocyte extravasation occurs mainly in post-capillary venules, where haemodynamic shear forces are minimised. This process can be understood in several steps:
Chemoattraction
Rolling adhesion
Tight adhesion
(Endothelial) Transmigration
It has been demonstrated that leukocyte recruitment is halted whenever any of these steps is suppressed.
White blood cells (leukocytes) perform most of their functions in tissues. Functions include phagocytosis of foreign particles, production of antibodies, secretion of inflammatory response triggers (histamine and heparin), and neutralization of histamine. In general, leukocytes are involved in the defense of an organism and protect it from disease by promoting or inhibiting inflammatory responses.
Leukocytes use the blood as a transport medium to reach the tissues of the body. Here is a brief summary of each of the four steps currently thought to be involved in leukocyte extravasation:
Chemoattraction
Upon recognition of and activation by pathogens, resident macrophages in the affected tissue release cytokines such as IL-1, TNFα and chemokines. IL-1, TNFα and C5a cause the endothelial cells of blood vessels near the site of infection to express cellular adhesion molecules, including selectins. Circulating leukocytes are localised towards the site of injury or infection due to the presence of chemokines.
Rolling adhesion
Like velcro, carbohydrate ligands on the circulating leukocytes bind to selectin molecules on the inner wall of the
Document 4:::
Vascular recruitment is the increase in the number of perfused capillaries in response to a stimulus; for example, with regular exercise, more capillaries are perfused and more oxygen can reach the muscles.
Vascular recruitment may also be called capillary recruitment.
Vascular recruitment in skeletal muscle
The term «vascular recruitment» or «capillary recruitment» usually refers to the increase in the number of perfused capillaries in skeletal muscle in response to a stimulus. The most important stimulus in humans is regular exercise. Vascular recruitment in skeletal muscle is thought to enhance the capillary surface area for oxygen exchange and decrease the oxygen diffusion distance.
Other stimuli are possible. Insulin can act as a stimulus for vascular recruitment in skeletal muscle. This process may also improve glucose delivery to skeletal muscle by increasing the surface area for diffusion. That insulin can act in this way has been proposed based on increases in limb blood flow and skeletal muscle blood volume which occurred after hyperinsulinemia.
The exact extent of capillary recruitment in intact skeletal muscle in response to regular exercise or insulin is unknown, because non-invasive measurement techniques are not yet extremely precise.
Being overweight or obese may negatively interfere with vascular recruitment in skeletal muscle.
Vascular recruitment in the lung
Vascular recruitment in the lung (i.e., in the pulmonary microcirculation) may be noteworthy to healthcare professionals in emergency medicine, because it may increase evidence of lung injury, and increase pulmonary capillary protein leak.
Vascular recruitment in the brain
Vascular recruitment in the brain is thought to lead to new capillaries and increase the cerebral blood flow.
Controversy
The existence of vascular recruitment in response to a stimulus has been disputed by some researchers. However, most researchers accept that vascular recruitment exists.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
When blood engorged capillaries leak fluid into neighboring tissues, what occurs?
A. swelling
B. seeping
C. bleeding
D. infection
Answer:
|
|
scienceQA-12567
|
multiple_choice
|
What do these two changes have in common?
ice melting in a glass
molding clay into the shape of a pot
|
[
"Both are caused by cooling.",
"Both are caused by heating.",
"Both are only physical changes.",
"Both are chemical changes."
] |
C
|
Step 1: Think about each change.
Ice melting in a glass is a change of state. So, it is a physical change. The solid ice becomes liquid, but it is still made of water. A different type of matter is not made.
Molding clay into the shape of a pot is a physical change. The clay gets a different shape. But it is made of the same type of matter.
Step 2: Look at each answer choice.
Both are only physical changes.
Both changes are physical changes. No new matter is created.
Both are chemical changes.
Both changes are physical changes. They are not chemical changes.
Both are caused by heating.
Ice melting is caused by heating. But molding clay is not.
Both are caused by cooling.
Neither change is caused by cooling.
|
Relevant Documents:
Document 0:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Thermofluids is a branch of science and engineering encompassing four intersecting fields:
Heat transfer
Thermodynamics
Fluid mechanics
Combustion
The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids".
Heat transfer
Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer.
Sections include :
Energy transfer by heat, work and mass
Laws of thermodynamics
Entropy
Refrigeration Techniques
Properties and nature of pure substances
Applications
Engineering : Predicting and analysing the performance of machines
Thermodynamics
Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems.
Fluid mechanics
Fluid Mechanics the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance.
Sections include:
Flu
Document 3:::
A continuous cooling transformation (CCT) phase diagram is often used when heat treating steel. These diagrams are used to represent which types of phase changes will occur in a material as it is cooled at different rates. These diagrams are often more useful than time-temperature-transformation diagrams because it is more convenient to cool materials at a certain rate (temperature-variable cooling), than to cool quickly and hold at a certain temperature (isothermal cooling).
Types of continuous cooling diagrams
There are two types of continuous cooling diagrams drawn for practical purposes.
Type 1: This is the plot beginning with the transformation start point, cooling with a specific transformation fraction and ending with a transformation finish temperature for all products against transformation time for each cooling curve.
Type 2: This is the plot beginning with the transformation start point, cooling with a specific transformation fraction and ending with a transformation finish temperature for all products against cooling rate or bar diameter of the specimen for each type of cooling medium.
See also
Isothermal transformation
Phase diagram
Document 4:::
In physics, a dynamical system is said to be mixing if the phase space of the system becomes strongly intertwined, according to at least one of several mathematical definitions. For example, a measure-preserving transformation T is said to be strong mixing if
lim (n→∞) μ(T⁻ⁿA ∩ B) = μ(A) μ(B)
whenever A and B are any measurable sets and μ is the associated measure. Other definitions are possible, including weak mixing and topological mixing.
The mathematical definition of mixing is meant to capture the notion of physical mixing. A canonical example is the Cuba libre: suppose one is adding rum (the set A) to a glass of cola. After stirring the glass, the bottom half of the glass (the set B) will contain rum, and it will be in equal proportion as it is elsewhere in the glass. The mixing is uniform: no matter which region B one looks at, some of A will be in that region. A far more detailed, but still informal description of mixing can be found in the article on mixing (mathematics).
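As a rough numerical counterpart to the definition above (my example, not from the excerpt): the doubling map T(x) = 2x mod 1 on [0, 1] is strong mixing for Lebesgue measure, and a short Monte Carlo estimate of μ(T⁻ⁿA ∩ B) can be compared with μ(A)μ(B):

import random

# Sets A and B are assumed example intervals; n and the sample size
# are arbitrary choices for illustration.
def T_n(x, n):
    for _ in range(n):
        x = (2.0 * x) % 1.0
    return x

def in_A(x):  # A = [0, 0.3), mu(A) = 0.3
    return x < 0.3

def in_B(x):  # B = [0.5, 0.9), mu(B) = 0.4
    return 0.5 <= x < 0.9

random.seed(0)
n, samples, hits = 20, 200_000, 0
for _ in range(samples):
    x = random.random()
    # x lies in T^-n(A) iff T^n(x) lies in A
    if in_B(x) and in_A(T_n(x, n)):
        hits += 1
print(hits / samples, "vs mu(A)*mu(B) =", 0.3 * 0.4)  # both ~0.12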
Every mixing transformation is ergodic, but there are ergodic transformations which are not mixing.
Physical mixing
The mixing of gases or liquids is a complex physical process, governed by a convective diffusion equation that may involve non-Fickian diffusion as in spinodal decomposition. The convective portion of the governing equation contains fluid motion terms that are governed by the Navier–Stokes equations. When fluid properties such as viscosity depend on composition, the governing equations may be coupled. There may also be temperature effects. It is not clear that fluid mixing processes are mixing in the mathematical sense.
Small rigid objects (such as rocks) are sometimes mixed in a rotating drum or tumbler. The 1969 Selective Service draft lottery was carried out by mixing plastic capsules which contained a slip of paper (marked with a day of the year).
See also
Miscibility
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
ice melting in a glass
molding clay into the shape of a pot
A. Both are caused by cooling.
B. Both are caused by heating.
C. Both are only physical changes.
D. Both are chemical changes.
Answer:
|
sciq-8233
|
multiple_choice
|
How do glucose, ions, and other larger molecules leave the blood?
|
[
"through capillary tips",
"through intercellular clefts",
"through veinous fissures",
"through cell membranes"
] |
B
|
Relevant Documents:
Document 0:::
The Starling principle holds that extracellular fluid movements between blood and tissues are determined by differences in hydrostatic pressure and colloid osmotic (oncotic) pressure between plasma inside microvessels and interstitial fluid outside them. The Starling Equation, proposed many years after the death of Starling, describes that relationship in mathematical form and can be applied to many biological and non-biological semipermeable membranes. The classic Starling principle and the equation that describes it have in recent years been revised and extended.
Every day around 8 litres of water (solvent) containing a variety of small molecules (solutes) leaves the blood stream of an adult human and perfuses the cells of the various body tissues. Interstitial fluid drains by afferent lymph vessels to one of the regional lymph node groups, where around 4 litres per day is reabsorbed to the blood stream. The remainder of the lymphatic fluid is rich in proteins and other large molecules and rejoins the blood stream via the thoracic duct which empties into the great veins close to the heart. Filtration from plasma to interstitial (or tissue) fluid occurs in microvascular capillaries and post-capillary venules. In most tissues the micro vessels are invested with a continuous internal surface layer that includes a fibre matrix now known as the endothelial glycocalyx whose interpolymer spaces function as a system of small pores, radius circa 5 nm. Where the endothelial glycocalyx overlies a gap in the junction molecules that bind endothelial cells together (inter endothelial cell cleft), the plasma ultrafiltrate may pass to the interstitial space, leaving larger molecules reflected back into the plasma.
A small number of continuous capillaries are specialised to absorb solvent and solutes from interstitial fluid back into the blood stream through fenestrations in endothelial cells, but the volume of solvent absorbed every day is small.
Discontinuous capillaries as
Document 1:::
Transcellular transport involves the transportation of solutes by a cell through a cell. Transcellular transport can occur in three different ways active transport, passive transport, and transcytosis.
Active Transport
Active transport is the process of moving molecules from an area of low concentration to an area of high concentration. There are two types of active transport, primary active transport and secondary active transport. Primary active transport uses adenosine triphosphate (ATP) to move specific molecules and solutes against their concentration gradients. Examples of molecules that follow this process are potassium K+, sodium Na+, and calcium Ca2+. A place in the human body where this occurs is in the intestines with the uptake of glucose. Secondary active transport is when one solute moves down the electrochemical gradient to produce enough energy to force the transport of another solute from low concentration to high concentration. An example of where this occurs is in the movement of glucose within the proximal convoluted tubule (PCT).
Passive Transport
Passive transport is the process of moving molecules from an area of high concentration to an area of low concentration without expelling any energy. There are two types of passive transport, passive diffusion and facilitated diffusion. Passive diffusion is the unassisted movement of molecules from high concentration to low concentration across a permeable membrane. One example of passive diffusion is the gas exchange that occurs between the oxygen in the blood and the carbon dioxide present in the lungs. Facilitated diffusion is the movement of polar molecules down the concentration gradient with the assistance of membrane proteins. Since the molecules associated with facilitated diffusion are polar, they are repelled by the hydrophobic sections of permeable membrane, therefore they need to be assisted by the membrane proteins. Both t
Document 2:::
Paracellular transport refers to the transfer of substances across an epithelium by passing through the intercellular space between the cells. It is in contrast to transcellular transport, where the substances travel through the cell, passing through both the apical membrane and basolateral membrane.
The distinction has particular significance in renal physiology and intestinal physiology. Transcellular transport often involves energy expenditure whereas paracellular transport is unmediated and passive down a concentration gradient, or by osmosis (for water) and solvent drag for solutes. Paracellular transport also has the benefit that absorption rate is matched to load because it has no transporters that can be saturated.
In most mammals, intestinal absorption of nutrients is thought to be dominated by transcellular transport, e.g., glucose is primarily absorbed via the SGLT1 transporter and other glucose transporters. Paracellular absorption therefore plays only a minor role in glucose absorption, although there is evidence that paracellular pathways become more available when nutrients are present in the intestinal lumen. In contrast, small flying vertebrates (small birds and bats) rely on the paracellular pathway for the majority of glucose absorption in the intestine. This has been hypothesized to compensate for an evolutionary pressure to reduce mass in flying animals, which resulted in a reduction in intestine size and faster transit time of food through the gut.
Capillaries of the blood–brain barrier have only transcellular transport, in contrast with normal capillaries which have both transcellular and paracellular transport.
The paracellular pathway of transport is also important for the absorption of drugs in the gastrointestinal tract. The paracellular pathway allows the permeation of hydrophilic molecules that are not able to permeate through the lipid membrane by the transcellular pathway of absorption. This is particularly important for hydrophilic drugs, which lack the lipid solubility needed to cross cell membranes.
Document 3:::
Vascular permeability, often in the form of capillary permeability or microvascular permeability, characterizes the capacity of a blood vessel wall to allow for the flow of small molecules (drugs, nutrients, water, ions) or even whole cells (lymphocytes on their way to the site of inflammation) in and out of the vessel. Blood vessel walls are lined by a single layer of endothelial cells. The gaps between endothelial cells (cell junctions) are strictly regulated depending on the type and physiological state of the tissue.
There are several techniques to measure vascular permeability to certain molecules. For instance, in single-microvessel cannulation, a microvessel is cannulated with a micropipette, perfused at a known pressure and occluded downstream; the velocity of marker cells within the vessel is then related to the permeability. Another technique uses multiphoton fluorescence intravital microscopy, through which the flow is related to fluorescence intensity and the permeability is estimated from the Patlak transformation.
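For reference (our addition; this is the standard graphical form of Patlak analysis rather than a detail given in the excerpt), the transformation rests on the linear relation

\[ \frac{C_{\text{tissue}}(t)}{C_p(t)} = K_i\,\frac{\int_0^t C_p(\tau)\,d\tau}{C_p(t)} + V_0 \]

where \(C_{\text{tissue}}\) is the tissue (here, fluorescence) signal, \(C_p\) the plasma concentration, \(K_i\) the influx or permeability constant recovered as the slope of the linear portion of the plot, and \(V_0\) the initial distribution volume given by the intercept.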
An example of increased vascular permeability is in the initial lesion of periodontal disease, in which the gingival plexus becomes engorged and dilated, allowing large numbers of neutrophils to extravasate and appear within the junctional epithelium and underlying connective tissue.
Document 4:::
The human body and even its individual body fluids may be conceptually divided into various fluid compartments, which, although not literally anatomic compartments, do represent a real division in terms of how portions of the body's water, solutes, and suspended elements are segregated. The two main fluid compartments are the intracellular and extracellular compartments. The intracellular compartment is the space within the organism's cells; it is separated from the extracellular compartment by cell membranes.
About two-thirds of the total body water of humans is held in the cells, mostly in the cytosol, and the remainder is found in the extracellular compartment. The extracellular fluids may be divided into three types: interstitial fluid in the "interstitial compartment" (surrounding tissue cells and bathing them in a solution of nutrients and other chemicals), blood plasma and lymph in the "intravascular compartment" (inside the blood vessels and lymphatic vessels), and small amounts of transcellular fluid such as ocular and cerebrospinal fluids in the "transcellular compartment".
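As a worked illustration of these proportions (our addition; the 70 kg subject and the "60-40-20" fractions are the usual textbook rule of thumb, not figures taken from the excerpt):

# Rule-of-thumb body-fluid compartments for a typical adult (illustrative).
body_mass_kg = 70.0                               # assumed example subject
total_body_water = 0.60 * body_mass_kg            # ~60% of body mass, litres
intracellular = (2.0 / 3.0) * total_body_water    # ~two-thirds held in cells
extracellular = total_body_water - intracellular  # remaining one-third
plasma = 0.20 * extracellular                     # roughly a fifth of the ECF
interstitial = extracellular - plasma

print(f"total body water ~{total_body_water:.0f} L")
print(f"intracellular    ~{intracellular:.0f} L")
print(f"extracellular    ~{extracellular:.0f} L"
      f" (interstitial ~{interstitial:.1f} L, plasma ~{plasma:.1f} L)")

For a 70 kg adult this gives roughly 42 L of total body water, about 28 L intracellular and 14 L extracellular, consistent with the two-thirds/one-third split described above.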
The normal processes by which life self-regulates its biochemistry (homeostasis) produce fluid balance across the fluid compartments. Water and electrolytes are continuously moving across barriers (e.g., cell membranes, vessel walls), albeit often in small amounts, to maintain this healthy balance. The movement of these molecules is controlled and restricted by various mechanisms. When illnesses upset the balance, electrolyte imbalances can result.
The interstitial and intravascular compartments readily exchange water and solutes, but the third extracellular compartment, the transcellular, is thought of as separate from the other two and not in dynamic equilibrium with them.
The science of fluid balance across fluid compartments has practical application in intravenous therapy, where doctors and nurses must predict fluid shifts and decide which IV fluids to give (for example, isotonic versus hypotonic solutions).
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How do glucose, ions, and other larger molecules leave the blood?
A. through capillary tips
B. through intercellular clefts
C. through veinous fissures
D. through cell membranes
Answer:
|
|
sciq-4014
|
multiple_choice
|
Molecules of gas are rare in what outermost region of the planet's atmosphere?
|
[
"ionosphere",
"thermosphere",
"exosphere",
"ozone layer"
] |
C
|
Relavent Documents:
Document 0:::
Planetary oceanography, also called astro-oceanography or exo-oceanography, is the study of oceans on planets and moons other than Earth. Unlike other planetary sciences such as astrobiology, astrochemistry and planetary geology, it only began after the discovery of underground oceans in Saturn's moon Titan and Jupiter's moon Europa. The field will remain largely speculative until further missions can reach the oceans beneath the rock or ice layers of these moons. There are many theories about oceans or even ocean worlds on celestial bodies in the Solar System, from oceans made of diamond within Neptune to a gigantic ocean of liquid hydrogen that may exist underneath Jupiter's surface.
Early in their geologic histories, Mars and Venus are theorized to have had large water oceans. The Mars ocean hypothesis suggests that nearly a third of the surface of Mars was once covered by water, and a runaway greenhouse effect may have boiled away the global ocean of Venus. Compounds such as salts and ammonia dissolved in water lower its freezing point so that water might exist in large quantities in extraterrestrial environments as brine or convecting ice. Unconfirmed oceans are speculated beneath the surface of many dwarf planets and natural satellites; notably, the ocean of the moon Europa is estimated to have over twice the water volume of Earth's. The Solar System's giant planets are also thought to have liquid atmospheric layers of yet to be confirmed compositions. Oceans may also exist on exoplanets and exomoons, including surface oceans of liquid water within a circumstellar habitable zone. Ocean planets are a hypothetical type of planet with a surface completely covered with liquid.
Extraterrestrial oceans may be composed of water or other elements and compounds. The only confirmed large stable bodies of extraterrestrial surface liquids are the lakes of Titan, which are made of hydrocarbons instead of water. However, there is strong evidence for the existence of subsurface water oceans elsewhere in the Solar System.
Document 1:::
The interplanetary medium (IPM) or interplanetary space consists of the mass and energy which fills the Solar System, and through which all the larger Solar System bodies, such as planets, dwarf planets, asteroids, and comets, move. The IPM stops at the heliopause, outside of which the interstellar medium begins. Before 1950, interplanetary space was widely considered to either be an empty vacuum, or consisting of "aether".
Composition and physical characteristics
The interplanetary medium includes interplanetary dust, cosmic rays, and hot plasma from the solar wind. The density of the interplanetary medium is very low, decreasing in inverse proportion to the square of the distance from the Sun. It is variable, and may be affected by magnetic fields and events such as coronal mass ejections. Typical particle densities in the interplanetary medium are about 5-40 particles/cm³, but exhibit substantial variation. In the vicinity of the Earth, it contains about 5 particles/cm³, but values as high as 100 particles/cm³ have been observed.
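Since the excerpt states that density falls off roughly as the inverse square of heliocentric distance, a short sketch (our addition; the reference value is the ~5 particles/cm³ quoted above for 1 AU) makes the scaling explicit:

# Inverse-square scaling of interplanetary particle density (illustrative).

def ipm_density(r_au, n_1au=5.0):
    """Approximate particle density (cm^-3) at a distance of r_au AU."""
    return n_1au / r_au**2

for r in (0.39, 1.0, 5.2, 30.0):   # roughly Mercury, Earth, Jupiter, Neptune
    print(f"r = {r:5.2f} AU -> ~{ipm_density(r):.3f} particles/cm^3")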
The temperature of the interplanetary medium varies through the solar system. Joseph Fourier estimated that the interplanetary medium must have temperatures comparable to those observed at Earth's poles, but on faulty grounds: lacking modern estimates of atmospheric heat transport, he saw no other means to explain the relative consistency of Earth's climate. A very hot interplanetary medium remained a minor position among geophysicists as late as 1959, when Chapman proposed a temperature on the order of 10000 K, but observation in Low Earth orbit of the exosphere soon contradicted his position. In fact, both Fourier's and Chapman's final predictions were correct: because the interplanetary medium is so rarefied, it does not exhibit thermodynamic equilibrium. Instead, different components have different temperatures. The solar wind exhibits temperatures consistent with Chapman's estimate in cislunar space, while dust particles near Earth's orbit are far cooler, closer to Fourier's figure.
Document 2:::
Aeronomy is the scientific study of the upper atmosphere of the Earth and corresponding regions of the atmospheres of other planets. It is a branch of both atmospheric chemistry and atmospheric physics. Scientists specializing in aeronomy, known as aeronomers, study the motions and chemical composition and properties of the Earth's upper atmosphere and regions of the atmospheres of other planets that correspond to it, as well as the interaction between upper atmospheres and the space environment. In atmospheric regions aeronomers study, chemical dissociation and ionization are important phenomena.
History
The mathematician Sydney Chapman introduced the term aeronomy to describe the study of the Earth's upper atmosphere in 1946 in a letter to the editor of Nature entitled "Some Thoughts on Nomenclature." The term became official in 1954 when the International Union of Geodesy and Geophysics adopted it. "Aeronomy" later also began to refer to the study of the corresponding regions of the atmospheres of other planets.
Branches
Aeronomy can be divided into three main branches: terrestrial aeronomy, planetary aeronomy, and comparative aeronomy.
Terrestrial aeronomy
Terrestrial aeronomy focuses on the Earth's upper atmosphere, which extends from the stratopause to the atmosphere's boundary with outer space and is defined as consisting of the mesosphere, thermosphere, and exosphere and their ionized component, the ionosphere. Terrestrial aeronomy contrasts with meteorology, which is the scientific study of the Earth's lower atmosphere, defined as the troposphere and stratosphere. Although terrestrial aeronomy and meteorology once were completely separate fields of scientific study, cooperation between terrestrial aeronomers and meteorologists has grown as discoveries made since the early 1990s have demonstrated that the upper and lower atmospheres have an impact on one another's physics, chemistry, and biology.
Terrestrial aeronomers study atmospheric tides and upper-atmospheric lightning.
Document 3:::
An atmosphere () is a layer of gas or layers of gases that envelop a planet, and is held in place by the gravity of the planetary body. A planet retains an atmosphere when the gravity is great and the temperature of the atmosphere is low. A stellar atmosphere is the outer region of a star, which includes the layers above the opaque photosphere; stars of low temperature might have outer atmospheres containing compound molecules.
The atmosphere of Earth is composed of nitrogen (78 %), oxygen (21 %), argon (0.9 %), carbon dioxide (0.04 %) and trace gases. Most organisms use oxygen for respiration; lightning and bacteria perform nitrogen fixation to produce ammonia that is used to make nucleotides and amino acids; plants, algae, and cyanobacteria use carbon dioxide for photosynthesis. The layered composition of the atmosphere minimises the harmful effects of sunlight, ultraviolet radiation, solar wind, and cosmic rays to protect organisms from genetic damage. The current composition of the atmosphere of the Earth is the product of billions of years of biochemical modification of the paleoatmosphere by living organisms.
Composition
The initial gaseous composition of an atmosphere is determined by the chemistry and temperature of the local solar nebula from which a planet is formed, and the subsequent escape of some gases from the interior of the atmosphere proper. The original atmospheres of the planets originated from a rotating disc of gases, which collapsed onto itself and then divided into a series of spaced rings of gas and matter, which later condensed to form the planets of the Solar System. The atmospheres of the planets Venus and Mars are principally composed of carbon dioxide, with smaller amounts of nitrogen and argon.
The composition of Earth's atmosphere is determined by the by-products of the life that it sustains. Dry air (mixture of gases) from Earth's atmosphere contains 78.08% nitrogen, 20.95% oxygen, 0.93% argon, 0.04% carbon dioxide, and traces of hydrogen, helium, and other noble gases.
Document 4:::
The European Astrobiology Network Association (EANA) coordinates and facilitates research expertise in astrobiology in Europe.
EANA was created in 2001 to coordinate the different European centers in astrobiology and the related fields previously organized separately, such as paleontology, geology, atmospheric physics, planetary science and stellar physics.
The association is administered by an Executive Council that is elected every three years and represents the European nations active in the field, such as Austria, Belgium, France, Germany, Italy, Portugal and Spain.
The EANA Executive Council is composed of a president, two vice-presidents, a treasurer, two secretaries, and councillors. Further information about the current Executive Council can be found at http://www.eana-net.eu/index.php?page=Discover/eananetwork.
EANA strongly supports AbGradE (Astrobiology Graduates in Europe), an independent organisation that aims to support early-career scientists and students in astrobiology.
Objectives
The specific objectives of EANA are to:
bring together active European researchers and link their research programs
fund exchange visits between laboratories
optimize the sharing of information and resources facilities for research
promote this field of research to European funding agencies and politicians
promote research on extremophiles of relevance to environmental issues in Europe
interface the Research Network with European bodies (e.g. the European Space Agency and the European Commission)
attract young scientists to participate
promote public interest in astrobiology, and to educate the younger generation
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Molecules of gas are rare in what outermost region of the planet's atmosphere?
A. ionosphere
B. thermosphere
C. exosphere
D. ozone layer
Answer:
|
|
sciq-9555
|
multiple_choice
|
What does the esophagus produce for lubrication?
|
[
"mucous",
"synovial fluid",
"bile",
"phloem"
] |
A
|
Relavent Documents:
Document 0:::
The Joan Mott Prize Lecture is a prize lecture awarded annually by The Physiological Society in honour of Joan Mott.
Laureates
Laureates of the award have included:
- Intestinal absorption of sugars and peptides: from textbook to surprises
See also
Physiological Society Annual Review Prize Lecture
Document 1:::
The esophagus (American English) or oesophagus (British English, see spelling differences; plural: (o)esophagi or (o)esophaguses), colloquially known also as the food pipe or gullet, is an organ in vertebrates through which food passes, aided by peristaltic contractions, from the pharynx to the stomach. The esophagus is a fibromuscular tube, about 25 cm long in adults, that travels behind the trachea and heart, passes through the diaphragm, and empties into the uppermost region of the stomach. During swallowing, the epiglottis tilts backwards to prevent food from going down the larynx and lungs. The word oesophagus is from Ancient Greek οἰσοφάγος (oisophágos), from οἴσω (oísō), future form of φέρω (phérō, “I carry”) + ἔφαγον (éphagon, “I ate”).
The wall of the esophagus from the lumen outwards consists of mucosa, submucosa (connective tissue), layers of muscle fibers between layers of fibrous tissue, and an outer layer of connective tissue. The mucosa is a stratified squamous epithelium of around three layers of squamous cells, which contrasts to the single layer of columnar cells of the stomach. The transition between these two types of epithelium is visible as a zig-zag line. Most of the muscle is smooth muscle although striated muscle predominates in its upper third. It has two muscular rings or sphincters in its wall, one at the top and one at the bottom. The lower sphincter helps to prevent reflux of acidic stomach content. The esophagus has a rich blood supply and venous drainage. Its smooth muscle is innervated by involuntary nerves (sympathetic nerves via the sympathetic trunk and parasympathetic nerves via the vagus nerve) and in addition voluntary nerves (lower motor neurons) which are carried in the vagus nerve to innervate its striated muscle.
The esophagus passes through the thoracic cavity, pierces the diaphragm, and enters the stomach.
Document 2:::
The internal anal sphincter, IAS, (or sphincter ani internus) is a ring of smooth muscle that surrounds about 2.5–4.0 cm of the anal canal. It is about 5 mm thick, and is formed by an aggregation of the smooth (involuntary) circular muscle fibers of the rectum. It terminates distally about 6 mm from the anal orifice.
The internal anal sphincter aids the sphincter ani externus to occlude the anal aperture and aids in the expulsion of the feces. Its action is entirely involuntary. It is normally in a state of continuous maximal contraction to prevent leakage of faeces or gases. Sympathetic stimulation stimulates and maintains the sphincter's contraction, and parasympathetic stimulation inhibits it. It becomes relaxed in response to distention of the rectal ampulla, requiring voluntary contraction of the puborectalis and external anal sphincter to maintain continence.
Anatomy
The internal anal sphincter is the specialised thickened terminal portion of the inner circular layer of smooth muscle of the large intestine. It extends from the pectinate line (anorectal junction) proximally to just proximal to the anal orifice distally (the distal termination is palpable). Its muscle fibres are arranged in a spiral (rather than a circular) manner.
At its distal extremity, it is in contact with but separate from the external anal sphincter.
Innervation
The sphincter receives extrinsic autonomic innervation via the inferior hypogastric plexus, with sympathetic innervation derived from spinal levels L1-L2, and parasympathetic innervation derived from S2-S4.
The internal anal sphincter is not innervated by the pudendal nerve (which provides motor and sensory innervation to the external anal sphincter).
Function
The sphincter is contracted in its resting state, but reflexively relaxes in certain contexts (most notably during defecation).
Transient relaxation of its proximal portion occurs with rectal distension and post-prandial rectal contraction (the recto-anal inhibitory
Document 3:::
This table lists the epithelia of different organs of the human body
Human anatomy
Document 4:::
The esophagus passes through the thoracic cavity, pierces the diaphragm, and enters the stomach.
The esophagus may be affected by gastric reflux, cancer, prominent dilated blood vessels called varices that can bleed heavily, tears, constrictions, and disorders of motility.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What does the esophagus produce for lubrication?
A. mucous
B. synovial fluid
C. bile
D. phloem
Answer:
|
|
sciq-9536
|
multiple_choice
|
Where does most of the earth's energy come from?
|
[
"magnetic field",
"its core",
"its atmosphere",
"sun"
] |
D
|
Relavent Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college careers at public or private two-year schools, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs.
Document 1:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. The hope is that primary school children develop an interest in these subjects, that secondary school pupils then choose science A-levels, and that these in turn lead to science careers. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 2:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition versus knowledge application) and the level of education/age of the students.
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
(a) increases
(b) decreases
(c) stays the same
(d) impossible to tell / need more information
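(Added note, not part of the original excerpt: the expected answer is that the temperature decreases. For a quasi-static adiabatic expansion of an ideal gas,

\[ TV^{\gamma-1} = \text{const}, \qquad \gamma > 1, \]

so an increase in volume forces a decrease in temperature; physically, the gas does work on its surroundings at the expense of its internal energy.)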
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in their conceptual understanding over the course.
Document 4:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyond.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Where does most of the earth's energy come from?
A. magnetic field
B. its core
C. its atmosphere
D. sun
Answer:
|
|
sciq-10572
|
multiple_choice
|
What is the main difference between prokaryotic and eukaryotic cells?
|
[
"presence of cytoplasm",
"absence of cytoplasm",
"enlarged mitochondria",
"presence of a nucleus"
] |
D
|
Relavent Documents:
Document 0:::
Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure.
General characteristics
There are two types of cells: prokaryotes and eukaryotes.
Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than those of the later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles.
Prokaryotes
Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two characteristic surface structures of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement).
Eukaryotes
Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the structure of the cell.
Document 1:::
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'.
Cells acquire specialised functions and carry out various tasks such as replication, DNA repair, protein synthesis, and motility; they are capable of both specialisation and movement.
Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms.
The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology.
Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago.
Discovery
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the microscope, he was able to see pores, which he named 'cells'.
Document 2:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, the double-stranded macromolecule that carries the hereditary information of the cell, is found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 3:::
This lecture, named in memory of Keith R. Porter, is presented to an eminent cell biologist each year at the ASCB Annual Meeting. The ASCB Program Committee and the ASCB President recommend the Porter Lecturer to the Porter Endowment each year.
Lecturers
Source: ASCB
See also
List of biology awards
Document 4:::
Cellular compartments in cell biology comprise all of the closed parts within the cytosol of a eukaryotic cell, usually surrounded by a single or double lipid layer membrane. These compartments are often, but not always, defined as membrane-bound organelles. The formation of cellular compartments is called compartmentalization.
Both organelles, the mitochondria and chloroplasts (in photosynthetic organisms), are compartments that are believed to be of endosymbiotic origin. Other compartments such as peroxisomes, lysosomes, the endoplasmic reticulum, the cell nucleus or the Golgi apparatus are not of endosymbiotic origin. Smaller elements like vesicles, and sometimes even microtubules can also be counted as compartments.
It was long thought that compartmentalization is not found in prokaryotic cells, but the discovery of carboxysomes and many other metabolosomes revealed that prokaryotic cells are capable of making compartmentalized structures, albeit ones that in most cases are not surrounded by a lipid bilayer but are built purely of protein.
Types
In general there are four main cellular compartments; they are:
The nuclear compartment comprising the nucleus
The intercisternal space which comprises the space between the membranes of the endoplasmic reticulum (which is continuous with the nuclear envelope)
Organelles (the mitochondrion in all eukaryotes and the plastid in phototrophic eukaryotes)
The cytosol
Function
Compartments have three main roles. One is to establish physical boundaries for biological processes, which enables the cell to carry out different metabolic activities at the same time. This may include keeping certain biomolecules within a region, or keeping other molecules outside. Within the membrane-bound compartments, different intracellular pH values, different enzyme systems, and other differences are isolated from other organelles and the cytosol. Compared with the mitochondrial matrix, for example, the cytosol has an oxidizing environment which converts NADH to NAD+.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the main difference between prokaryotic and eukaryotic cells?
A. presence of cytoplasm
B. absence of cytoplasm
C. enlarged mitochondria
D. presence of a nucleus
Answer:
|
|
sciq-8797
|
multiple_choice
|
Enzymes - proteins which speed up what?
|
[
"physical reactions",
"liquid reactions",
"chemical reactions",
"crystals reactions"
] |
C
|
Relavent Documents:
Document 0:::
Biochemical engineering, also known as bioprocess engineering, is a field of study with roots stemming from chemical engineering and biological engineering. It mainly deals with the design, construction, and advancement of unit processes that involve biological organisms (such as fermentation) or organic molecules (often enzymes) and has various applications in areas of interest such as biofuels, food, pharmaceuticals, biotechnology, and water treatment processes. The role of a biochemical engineer is to take findings developed by biologists and chemists in a laboratory and translate that to a large-scale manufacturing process.
History
For hundreds of years, humans have made use of the chemical reactions of biological organisms in order to create goods. In the mid-1800s, Louis Pasteur was one of the first people to look into the role of these organisms when he researched fermentation. His work also contributed to the use of pasteurization, which is still used to this day. By the early 1900s, the use of microorganisms had expanded, and they were used to make industrial products. Up to this point, biochemical engineering had not yet developed as a field. It was not until 1928, when Alexander Fleming discovered penicillin, that the field of biochemical engineering was established. After this discovery, samples were gathered from around the world in order to continue research into the characteristics of microbes from places such as soils, gardens, forests, rivers, and streams. Today, biochemical engineers can be found working in a variety of industries, from food to pharmaceuticals. This is due to the increasing need for efficiency and production, which requires knowledge of how biological systems and chemical reactions interact with each other and how they can be used to meet these needs.
Education
Biochemical engineering is not a major offered by most universities and is instead an area of interest under the chemical engineering major in most cases.
Document 1:::
Enzyme assays are laboratory methods for measuring enzymatic activity. They are vital for the study of enzyme kinetics and enzyme inhibition.
Enzyme units
The quantity or concentration of an enzyme can be expressed in molar amounts, as with any other chemical, or in terms of activity in enzyme units.
Enzyme activity
Enzyme activity is a measure of the quantity of active enzyme present and is thus dependent on various physical conditions, which should be specified.
It is calculated using the following formula:

a = r × V

where
a = enzyme activity (moles of substrate converted per unit time)
r = rate of the reaction
V = reaction volume
The SI unit is the katal, 1 katal = 1 mol s−1 (mole per second), but this is an excessively large unit. A more practical and commonly used value is enzyme unit (U) = 1 μmol min−1 (micromole per minute). 1 U corresponds to 16.67 nanokatals.
Enzyme activity as given in katal generally refers to that of the assumed natural target substrate of the enzyme. Enzyme activity can also be given as that of certain standardized substrates, such as gelatin, then measured in gelatin digesting units (GDU), or milk proteins, then measured in milk clotting units (MCU). The units GDU and MCU are based on how fast one gram of the enzyme will digest gelatin or milk proteins, respectively. 1 GDU approximately equals 1.5 MCU.
An increased amount of substrate will increase the rate of reaction with enzymes; however, past a certain point the rate of reaction levels out, because the number of available active sites stays constant.
Specific activity
The specific activity of an enzyme is another common unit. This is the activity of an enzyme per milligram of total protein (expressed in μmol min−1 mg−1). Specific activity gives a measurement of enzyme purity in the mixture. It is the micromoles of product formed by an enzyme in a given amount of time (minutes) under given conditions, per milligram of total protein. Specific activity is equal to the rate of reaction multiplied by the reaction volume, divided by the mass of total protein.
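A short worked sketch of these unit relationships (our addition; the sample numbers are invented for illustration):

# 1 U = 1 umol of substrate converted per minute; 1 katal = 1 mol/s,
# so 1 U = 1e-6 mol / 60 s ~= 16.67 nkat, as stated above.

def activity_units(umol_substrate, minutes):
    """Enzyme activity in U (micromoles of substrate converted per minute)."""
    return umol_substrate / minutes

def units_to_nanokatal(units):
    """Convert enzyme units (U) to nanokatals (nkat)."""
    return units * (1e-6 / 60.0) * 1e9

def specific_activity(units, mg_total_protein):
    """Specific activity in U per milligram of total protein."""
    return units / mg_total_protein

u = activity_units(umol_substrate=30.0, minutes=10.0)              # 3.00 U
print(f"{u:.2f} U = {units_to_nanokatal(u):.1f} nkat")             # ~50.0 nkat
print(f"specific activity: {specific_activity(u, 0.5):.1f} U/mg")  # 6.0 U/mg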
Document 2:::
This is a list of topics in molecular biology. See also index of biochemistry articles.
Document 3:::
Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs, as well as organism structure and function. Biochemistry is closely related to molecular biology, which is the study of the molecular mechanisms of biological phenomena.
Much of biochemistry deals with the structures, bonding, functions, and interactions of biological macromolecules, such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers, with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles and methods have also been applied to a growing range of other fields.
Document 4:::
Biochemistry is the study of the chemical processes in living organisms. It deals with the structure and function of cellular components such as proteins, carbohydrates, lipids, nucleic acids and other biomolecules.
Articles related to biochemistry include:
0–9
2-amino-5-phosphonovalerate - 3' end - 5' end
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Enzymes - proteins which speed up what?
A. physical reactions
B. liquid reactions
C. chemical reactions
D. crystals reactions
Answer:
|
|
sciq-9902
|
multiple_choice
|
What is the transition from liquid to gas is called?
|
[
"boiling",
"condensing",
"freezing",
"melting"
] |
A
|
Relavent Documents:
Document 0:::
In chemistry, thermodynamics, and other related fields, a phase transition (or phase change) is the physical process of transition between one state of a medium and another. Commonly the term is used to refer to changes among the basic states of matter: solid, liquid, and gas, and in rare cases, plasma. A phase of a thermodynamic system and the states of matter have uniform physical properties. During a phase transition of a given medium, certain properties of the medium change as a result of the change of external conditions, such as temperature or pressure. This can be a discontinuous change; for example, a liquid may become gas upon heating to its boiling point, resulting in an abrupt change in volume. The identification of the external conditions at which a transformation occurs defines the phase transition point.
Types of phase transition
States of matter
Phase transitions commonly refer to when a substance transforms between one of the four states of matter and another. At the phase transition point for a substance, for instance the boiling point, the two phases involved (liquid and vapor) have identical free energies and are therefore equally likely to exist. Below the boiling point, the liquid is the more stable state of the two, whereas above the boiling point the gaseous form is the more stable.
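To make the equal-free-energy condition quantitative (our addition, a standard textbook relation rather than part of the excerpt): on the coexistence curve the molar Gibbs energies of the two phases are equal, \( g_1(T, P) = g_2(T, P) \), and differentiating this equality along the curve yields the Clausius-Clapeyron relation

\[ \frac{dP}{dT} = \frac{L}{T\,\Delta v} \]

where L is the latent heat of the transition and \(\Delta v\) the change in molar volume between the phases.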
Common transitions between the solid, liquid, and gaseous phases of a single component, due to the effects of temperature and/or pressure, are:
solid to liquid: melting; liquid to solid: freezing
liquid to gas: vaporisation (evaporation or boiling); gas to liquid: condensation
solid to gas: sublimation; gas to solid: deposition
For a single component, the most stable phase at different temperatures and pressures can be shown on a phase diagram. Such a diagram usually depicts states in equilibrium. A phase transition usually occurs when the pressure or temperature changes and the system crosses from one region to another, like water turning from liquid to solid as soon as the temperature drops below the freezing point. In exception to the usual case, it is sometimes possible to change the state of a system diabatically (as opposed to adiabatically) in such a way that it is brought past a phase transition point without undergoing a phase transition.
Document 1:::
Sublimation is the transition of a substance directly from the solid to the gas state, without passing through the liquid state. Sublimation is an endothermic process that occurs at temperatures and pressures below a substance's triple point in its phase diagram, which corresponds to the lowest pressure at which the substance can exist as a liquid. The reverse process of sublimation is deposition or desublimation, in which a substance passes directly from a gas to a solid phase. Sublimation has also been used as a generic term to describe a solid-to-gas transition (sublimation) followed by a gas-to-solid transition (deposition). While vaporization from liquid to gas occurs as evaporation from the surface if it occurs below the boiling point of the liquid, and as boiling with formation of bubbles in the interior of the liquid if it occurs at the boiling point, there is no such distinction for the solid-to-gas transition which always occurs as sublimation from the surface.
At normal pressures, most chemical compounds and elements possess three different states at different temperatures. In these cases, the transition from the solid to the gaseous state requires an intermediate liquid state. The pressure referred to is the partial pressure of the substance, not the total (e.g. atmospheric) pressure of the entire system. Thus, any solid can sublimate if its vapour pressure is higher than the surrounding partial pressure of the same substance, and in some cases sublimates at an appreciable rate (e.g. water ice just below 0 °C). For some substances, such as carbon and arsenic, sublimation is much easier than evaporation from the melt, because the pressure of their triple point is very high, and it is difficult to obtain them as liquids.
The term sublimation refers to a physical change of state and is not used to describe the transformation of a solid to a gas in a chemical reaction. For example, the dissociation on heating of solid ammonium chloride into hydrogen chloride and ammonia is not sublimation but a chemical reaction.
Document 2:::
Boiling is the rapid phase transition from liquid to gas or vapour; the reverse of boiling is condensation. Boiling occurs when a liquid is heated to its boiling point, so that the vapour pressure of the liquid is equal to the pressure exerted on the liquid by the surrounding atmosphere. Boiling and evaporation are the two main forms of liquid vaporisation.
There are two main types of boiling: nucleate boiling where small bubbles of vapour form at discrete points, and critical heat flux boiling where the boiling surface is heated above a certain critical temperature and a film of vapour forms on the surface. Transition boiling is an intermediate, unstable form of boiling with elements of both types. The boiling point of water is 100 °C or 212 °F but is lower with the decreased atmospheric pressure found at higher altitudes.
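As an illustration of why the boiling point drops at altitude (our addition; the integrated Clausius-Clapeyron form below treats the latent heat as constant, which is only a rough approximation):

# Estimate water's boiling temperature at reduced ambient pressure.
import math

L_VAP = 40660.0            # J/mol, approximate latent heat of vaporisation
R = 8.314                  # J/(mol K), gas constant
T1, P1 = 373.15, 101325.0  # normal boiling point of water at 1 atm

def boiling_point(pressure_pa):
    """Temperature (K) at which the vapour pressure equals pressure_pa."""
    return 1.0 / (1.0 / T1 - (R / L_VAP) * math.log(pressure_pa / P1))

# Around 0.5 atm, roughly the ambient pressure at ~5,500 m elevation:
t = boiling_point(0.5 * 101325.0)
print(f"at ~0.5 atm water boils near {t - 273.15:.0f} °C")   # ~81 °C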
Boiling water is used as a method of making it potable by killing microbes and viruses that may be present. The sensitivity of different micro-organisms to heat varies, but if water is held at for one minute, most micro-organisms and viruses are inactivated. Ten minutes at a temperature of 70 °C (158 °F) is also sufficient to inactivate most bacteria.
Boiling water is also used in several cooking methods including boiling, steaming, and poaching.
Types
Free convection
The lowest heat flux seen in boiling is only sufficient to cause natural convection, where the warmer fluid rises due to its slightly lower density. This condition occurs only when the superheat is very low, meaning that the hot surface near the fluid is nearly the same temperature as the boiling point.
Nucleate
Nucleate boiling is characterised by the growth of bubbles on a heated surface (heterogeneous nucleation), which rise from discrete points on the surface, whose temperature is only slightly above the temperature of the liquid. In general, the number of nucleation sites is increased by an increasing surface temperature.
An irregular surface of the boiling vessel can provide additional nucleation sites.
Document 3:::
In materials science, liquefaction is a process that generates a liquid from a solid or a gas or that generates a non-liquid phase which behaves in accordance with fluid dynamics.
It occurs both naturally and artificially. As an example of the latter, a "major commercial application of liquefaction is the liquefaction of air to allow separation of the constituents, such as oxygen, nitrogen, and the noble gases." Another is the conversion of solid coal into a liquid form usable as a substitute for liquid fuels.
Geology
In geology, soil liquefaction refers to the process by which water-saturated, unconsolidated sediments are transformed into a substance that acts like a liquid, often in an earthquake. Soil liquefaction was blamed for building collapses in the city of Palu, Indonesia in October 2018.
In a related phenomenon, liquefaction of bulk materials in cargo ships may cause a dangerous shift in the load.
Physics and chemistry
In physics and chemistry, the phase transitions from solid and gas to liquid (melting and condensation, respectively) may be referred to as liquefaction. The melting point (sometimes called liquefaction point) is the temperature and pressure at which a solid becomes a liquid. In commercial and industrial situations, the process of condensing a gas to liquid is sometimes referred to as liquefaction of gases.
Coal
Coal liquefaction is the production of liquid fuels from coal using a variety of industrial processes.
Dissolution
Liquefaction is also used in commercial and industrial settings to refer to mechanical dissolution of a solid by mixing, grinding or blending with a liquid.
Food preparation
In kitchen or laboratory settings, solids may be chopped into smaller parts sometimes in combination with a liquid, for example in food preparation or laboratory use. This may be done with a blender, or liquidiser in British English.
Irradiation
Liquefaction of silica and silicate glasses occurs on electron beam irradiation of nanoscale samples.
Document 4:::
This is a list of gases at standard conditions, which means substances that boil or sublime at or below 25 °C and 1 atm pressure and are reasonably stable.
List
This list is sorted by boiling point of gases in ascending order, but can be sorted on different values. "sub" and "triple" refer to the sublimation point and the triple point, which are given in the case of a substance that sublimes at 1 atm; "dec" refers to decomposition. "~" means approximately.
Known as gas
The following list has substances known to be gases, but with an unknown boiling point.
Fluoroamine
Trifluoromethyl trifluoroethyl trioxide (CF3OOOCF2CF3): boils between 10 and 20 °C
Bis(trifluoromethyl) carbonate: boils between −10 and +10 °C (possibly +12 °C); freezes at −60 °C
Difluorodioxirane: boils between −80 and −90 °C
Difluoroaminosulfinyl fluoride (F2NS(O)F): a gas, but decomposes over several hours
Trifluoromethylsulfinyl chloride (CF3S(O)Cl)
Nitrosyl cyanide: boils at about −20 °C; a blue-green gas (CAS 4343-68-4)
Thiazyl chloride (NSCl): a greenish-yellow gas; trimerises
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the transition from liquid to gas is called?
A. boiling
B. condensing
C. freezing
D. melting
Answer:
|
|
sciq-6049
|
multiple_choice
|
What two types of juices help digestion within the small intestine?
|
[
"intestinal and pancreatic",
"bile and lymph",
"amniotic fluid, bile",
"chyme and phloem"
] |
A
|
Relavent Documents:
Document 0:::
Digestion is the breakdown of large insoluble food compounds into small water-soluble components so that they can be absorbed into the blood plasma. In certain organisms, these smaller substances are absorbed through the small intestine into the blood stream. Digestion is a form of catabolism that is often divided into two processes based on how food is broken down: mechanical and chemical digestion. The term mechanical digestion refers to the physical breakdown of large pieces of food into smaller pieces which can subsequently be accessed by digestive enzymes. Mechanical digestion takes place in the mouth through mastication and in the small intestine through segmentation contractions. In chemical digestion, enzymes break down food into the small compounds that the body can use.
In the human digestive system, food enters the mouth and mechanical digestion begins with mastication (chewing) and the wetting contact of saliva. Saliva, a liquid secreted by the salivary glands, contains salivary amylase, an enzyme which starts the digestion of starch in the food; the saliva also contains mucus, which lubricates the food, and hydrogen carbonate, which provides the ideal conditions of pH (alkaline) for amylase to work, and electrolytes (Na+, K+, Cl−, HCO3−). About 30% of starch is hydrolyzed into disaccharide in the oral cavity (mouth). After undergoing mastication and starch digestion, the food will be in the form of a small, round slurry mass called a bolus. It will then travel down the esophagus and into the stomach by the action of peristalsis. Gastric juice in the stomach starts protein digestion. Gastric juice mainly contains hydrochloric acid and pepsin. In infants and toddlers, gastric juice also contains rennin to digest milk proteins. As the first two chemicals may damage the stomach wall, mucus and bicarbonates are secreted by the stomach. They provide a slimy layer that acts as a shield against the damage these chemicals would otherwise cause to the stomach wall.
Document 1:::
Bile (from Latin bilis), or gall, is a yellow-green fluid produced by the liver of most vertebrates that aids the digestion of lipids in the small intestine. In humans, bile is primarily composed of water, produced continuously by the liver, and stored and concentrated in the gallbladder. After a human eats, this stored bile is discharged into the first section of their small intestine.
Composition
In the human liver, bile is composed of 97–98% water, 0.7% bile salts, 0.2% bilirubin, 0.51% fats (cholesterol, fatty acids, and lecithin), and 200 meq/L inorganic salts. The two main pigments of bile are bilirubin, which is yellow, and its oxidised form biliverdin, which is green. When mixed, they are responsible for the brown color of feces. About 400 to 800 millilitres of bile is produced per day in adult human beings.
Function
Bile or gall acts to some extent as a surfactant, helping to emulsify the lipids in food. Bile salt anions are hydrophilic on one side and hydrophobic on the other side; consequently, they tend to aggregate around droplets of lipids (triglycerides and phospholipids) to form micelles, with the hydrophobic sides towards the fat and hydrophilic sides facing outwards. The hydrophilic sides are negatively charged, and this charge prevents fat droplets coated with bile from re-aggregating into larger fat particles. Ordinarily, the micelles in the duodenum have a diameter around 1–50 μm in humans.
The dispersion of food fat into micelles provides a greatly increased surface area for the action of the enzyme pancreatic lipase, which digests the triglycerides, and is able to reach the fatty core through gaps between the bile salts. A triglyceride is broken down into two fatty acids and a monoglyceride, which are absorbed by the villi on the intestine walls. After being transferred across the intestinal membrane, the fatty acids reform into triglycerides, before being absorbed into the lymphatic system through lacteals. Without bile salts, most of the lipids in food would be excreted undigested.
Document 2:::
The small intestine or small bowel is an organ in the gastrointestinal tract where most of the absorption of nutrients from food takes place. It lies between the stomach and large intestine, and receives bile and pancreatic juice through the pancreatic duct to aid in digestion. The small intestine is about long and folds many times to fit in the abdomen. Although it is longer than the large intestine, it is called the small intestine because it is narrower in diameter.
The small intestine has three distinct regions – the duodenum, jejunum, and ileum. The duodenum, the shortest, is where preparation for absorption through small finger-like protrusions called villi begins. The jejunum is specialized for absorption through its lining by enterocytes of small nutrient particles which have been previously digested by enzymes in the duodenum. The main function of the ileum is to absorb vitamin B12, bile salts, and any products of digestion that were not absorbed by the jejunum.
Structure
Size
The length of the small intestine can vary greatly, from as short as to as long as , also depending on the measuring technique used. The typical length in a living person is . The length depends both on how tall the person is and how the length is measured. Taller people generally have a longer small intestine and measurements are generally longer after death and when the bowel is empty.
It is approximately in diameter in newborns after 35 weeks of gestational age, and in diameter in adults. On abdominal X-rays, the small intestine is considered to be abnormally dilated when the diameter exceeds 3 cm. On CT scans, a diameter of over 2.5 cm is considered abnormally dilated. The surface area of the human small intestinal mucosa, due to enlargement caused by folds, villi and microvilli, averages .
Parts
The small intestine is divided into three structural parts.
The duodenum is a short structure ranging from in length, and shaped like a "C". It surrounds the head of the pancreas.
Document 3:::
The gastrointestinal wall of the gastrointestinal tract is made up of four layers of specialised tissue. From the inner cavity of the gut (the lumen) outwards, these are:
Mucosa
Submucosa
Muscular layer
Serosa or adventitia
The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the lumen of the tract and comes into direct contact with digested food (chyme). The mucosa itself is made up of three layers: the epithelium, where most digestive, absorptive and secretory processes occur; the lamina propria, a layer of connective tissue, and the muscularis mucosae, a thin layer of smooth muscle.
The submucosa contains nerves including the submucous plexus (also called Meissner's plexus), blood vessels and elastic fibres with collagen, that stretches with increased capacity but maintains the shape of the intestine.
The muscular layer surrounds the submucosa. It comprises layers of smooth muscle in longitudinal and circular orientation that also helps with continued bowel movements (peristalsis) and the movement of digested material out of and along the gut. In between the two layers of muscle lies the myenteric plexus (also called Auerbach's plexus).
The serosa/adventitia are the final layers. These are made up of loose connective tissue and coated in mucus so as to prevent any friction damage from the intestine rubbing against other tissue. The serosa is present if the tissue is within the peritoneum, and the adventitia if the tissue is retroperitoneal.
Structure
When viewed under the microscope, the gastrointestinal wall has a consistent general form, but with certain parts differing along its course.
Mucosa
The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the cavity (lumen) of the tract and comes into direct contact with digested food (chyme). The mucosa is made up of three layers:
The epithelium is the innermost layer. It is where most digestive, absorptive and secretory processes occur.
The lamina propria is a layer of connective tissue.
Document 4:::
The Joan Mott Prize Lecture is a prize lecture awarded annually by The Physiological Society in honour of Joan Mott.
Laureates
Laureates of the award have included:
- Intestinal absorption of sugars and peptides: from textbook to surprises
See also
Physiological Society Annual Review Prize Lecture
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What two types of juices help digestion within the small intestine?
A. intestinal and pancreatic
B. bile and lymph
C. amniotic fluid, bile
D. chyme and phloem
Answer:
|
|
sciq-3058
|
multiple_choice
|
What helps to regulate consciousness, arousal, and sleep states?
|
[
"thalamus",
"hypothalamus",
"hippocampus",
"cerebral cortex"
] |
A
|
Relevant Documents:
Document 0:::
The following outline is provided as an overview of and topical guide to neuroscience:
Neuroscience is the scientific study of the structure and function of the nervous system. It encompasses the branch of biology that deals with the anatomy, biochemistry, molecular biology, and physiology of neurons and neural circuits, and it also encompasses cognition and human behavior. Neuroscience includes multiple concepts that relate to learning and memory. Additionally, the brain transmits signals that produce conscious and unconscious behaviors, expressed as verbal or non-verbal responses, which allows people to communicate with one another.
Branches of neuroscience
Neurophysiology
Neurophysiology is the study of the function (as opposed to structure) of the nervous system.
Brain mapping
Electrophysiology
Extracellular recording
Intracellular recording
Brain stimulation
Electroencephalography
Intermittent rhythmic delta activity
Neuroendocrinology
Neuroanatomy
Neuroanatomy is the study of the anatomy of nervous tissue and neural structures of the nervous system.
Immunostaining
Neuropharmacology
Neuropharmacology is the study of how drugs affect cellular function in the nervous system.
Drug
Psychoactive drug
Anaesthetic
Narcotic
Behavioral neuroscience
Behavioral neuroscience, also known as biological psychology, biopsychology, or psychobiology, is the application of the principles of biology to the study of mental processes and behavior in human and non-human animals.
Neuroethology
Developmental neuroscience
Developmental neuroscience aims to describe the cellular basis of brain development and to address the underlying mechanisms. The field draws on both neuroscience and developmental biology to provide insight into the cellular and molecular mechanisms by which complex nervous systems develop.
Aging and memory
Cognitive neuroscience
Cognitive neuroscience is the scientific field concerned with the study of the biological mechanisms underlying cognition.
Document 1:::
Wakefulness is a daily recurring brain state and state of consciousness in which an individual is conscious and engages in coherent cognitive and behavioral responses to the external world.
Being awake is the opposite of being asleep, in which most external inputs to the brain are excluded from neural processing.
Effects upon the brain
The longer the brain has been awake, the greater the synchronous firing rates of cerebral cortex neurons. After sustained periods of sleep, both the speed and synchronicity of the neurons firing are shown to decrease.
Another effect of wakefulness is the reduction of glycogen held in the astrocytes, which supply energy to the neurons. Studies have shown that one of sleep's underlying functions is to replenish this glycogen energy source.
Maintenance by the brain
Wakefulness is produced by a complex interaction between multiple neurotransmitter systems arising in the brainstem and ascending through the midbrain, hypothalamus, thalamus and basal forebrain. The posterior hypothalamus plays a key role in the maintenance of the cortical activation that underlies wakefulness. Several systems originating in this part of the brain control the shift from wakefulness into sleep and sleep into wakefulness. Histamine neurons in the tuberomammillary nucleus and nearby adjacent posterior hypothalamus project to the entire brain and are the most wake-selective system so far identified in the brain. Another key system is that provided by the orexins (also known as hypocretins) projecting neurons. These exist in areas adjacent to histamine neurons and like them project widely to most brain areas and associate with arousal. Orexin deficiency has been identified as responsible for narcolepsy.
Research suggests that orexin and histamine neurons play distinct, but complementary roles in controlling wakefulness with orexin being more involved with wakeful behavior and histamine with cognition and activation of cortical EEG.
It has been
Document 2:::
Sleep onset is the transition from wakefulness into sleep. Sleep onset usually transmits into non-rapid eye movement sleep (NREM sleep) but under certain circumstances (e.g. narcolepsy) it is possible to transit from wakefulness directly into rapid eye movement sleep (REM sleep).
History
During the 1920s, an obscure disorder that caused encephalitis and attacked the part of the brain that regulates sleep swept through Europe and North America. Although the virus that caused this disorder was never identified, the psychiatrist and neurologist Constantin von Economo decided to study this disease and identified a key component of sleep-wake regulation. He identified the pathways that regulate wakefulness and sleep onset by studying the parts of the brain that were affected by the disease and the consequences for the circadian rhythm. He stated that the pathways regulating sleep onset are located between the brain stem and the basal forebrain. His discoveries were not appreciated until the last two decades of the 20th century, when the pathways of sleep were found to reside exactly where von Economo had stated.
Neural circuit
Sleep electrophysiological measurements can be made by attaching electrodes to the scalp to measure the electroencephalogram (EEG) and to the chin to monitor muscle activity, recorded as the electromyogram (EMG). Electrodes attached around the eyes monitor eye movements, recorded as the electro-oculogram (EOG).
Pathways
Von Economo, in his studies, noticed that lesions in the connection between the midbrain and the diencephalon caused prolonged sleepiness and therefore proposed the idea of an ascending arousal system. During the past few decades major ascending pathways have been discovered, with their neurons and respective neurotransmitters identified. This pathway divides into two branches: one that ascends to the thalamus and activates the thalamus relay neurons, and another one that activates neurons in the lateral part of the hypothalamus and the basal forebrain.
Document 3:::
The activation-synthesis hypothesis, proposed by Harvard University psychiatrists John Allan Hobson and Robert McCarley, is a neurobiological theory of dreams first published in the American Journal of Psychiatry in December 1977. The differences in neuronal activity of the brainstem during waking and REM sleep were observed, and the hypothesis proposes that dreams result from brain activation during REM sleep. Since then, the hypothesis has undergone an evolution as technology and experimental equipment has become more precise. Currently, a three-dimensional model called AIM Model, described below, is used to determine the different states of the brain over the course of the day and night. The AIM Model introduces a new hypothesis that primary consciousness is an important building block on which secondary consciousness is constructed.
Introduction
With the advancement of brain imaging technology, the sleep-waking cycle can be studied as never before. The brain can be objectively quantified and identified as being in either one of three states: awake, REM sleep, and NREM sleep due to these advanced methods of measurement. It has been shown that global deactivation of the brain from waking state to NREM sleep occurs, and a subsequent reactivation during REM sleep, to a degree greater than during waking. Consciousness and its substates, primary consciousness and secondary consciousness, play a part in identifying the state of the brain. Primary consciousness is the simple awareness of perception and emotion; that is, the awareness of the world via advanced visual and motor coordination information your brain receives. Secondary consciousness is an advanced state that includes both primary consciousness and abstract analysis, or thinking, and metacognitive components, or the awareness of being aware. Most animals show some stages of primary consciousness, but only humans have been experimentally shown to experience secondary consciousness. The cycle of waking, NREM, and REM sleep recurs throughout the night.
Document 4:::
The neuroscience of sleep is the study of the neuroscientific and physiological basis of the nature of sleep and its functions. Traditionally, sleep has been studied as part of psychology and medicine. The study of sleep from a neuroscience perspective grew to prominence with advances in technology and the proliferation of neuroscience research from the second half of the twentieth century.
The importance of sleep is demonstrated by the fact that organisms daily spend hours of their time in sleep, and that sleep deprivation can have disastrous effects ultimately leading to death in animals. For a phenomenon so important, the purposes and mechanisms of sleep are only partially understood, so much so that as recently as the late 1990s it was quipped: "The only known function of sleep is to cure sleepiness". However, the development of improved imaging techniques like EEG, PET and fMRI, along with high computational power have led to an increasingly greater understanding of the mechanisms underlying sleep.
The fundamental questions in the neuroscientific study of sleep are:
What are the correlates of sleep i.e. what are the minimal set of events that could confirm that the organism is sleeping?
How is sleep triggered and regulated by the brain and the nervous system?
What happens in the brain during sleep?
How can we understand sleep function based on physiological changes in the brain?
What causes various sleep disorders and how can they be treated?
Other areas of modern neuroscience sleep research include the evolution of sleep, sleep during development and aging, animal sleep, mechanism of effects of drugs on sleep, dreams and nightmares, and stages of arousal between sleep and wakefulness.
Introduction
Rapid eye movement sleep (REM), non-rapid eye movement sleep (NREM or non-REM), and waking represent the three major modes of consciousness, neural activity, and physiological regulation. NREM sleep itself is divided into multiple stages – N1, N2, and N3.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What helps to regulate consciousness, arousal, and sleep states?
A. thalamus
B. hypothalamus
C. hippocampus
D. cerebral cortex
Answer:
|
|
sciq-1033
|
multiple_choice
|
What is the process of the fusion of a sperm and an egg called?
|
[
"embryo",
"stimulation",
"fertilization",
"Sperm"
] |
C
|
Relevant Documents:
Document 0:::
Spermatogenesis is the process by which haploid spermatozoa develop from germ cells in the seminiferous tubules of the testis. This process starts with the mitotic division of the stem cells located close to the basement membrane of the tubules. These cells are called spermatogonial stem cells. The mitotic division of these produces two types of cells. Type A cells replenish the stem cells, and type B cells differentiate into primary spermatocytes. The primary spermatocyte divides meiotically (Meiosis I) into two secondary spermatocytes; each secondary spermatocyte divides into two equal haploid spermatids by Meiosis II. The spermatids are transformed into spermatozoa (sperm) by the process of spermiogenesis. These develop into mature spermatozoa, also known as sperm cells. Thus, the primary spermatocyte gives rise to two secondary spermatocytes, which in turn divide to produce four haploid spermatids that mature into four spermatozoa.
Spermatozoa are the mature male gametes in many sexually reproducing organisms. Thus, spermatogenesis is the male version of gametogenesis, of which the female equivalent is oogenesis. In mammals it occurs in the seminiferous tubules of the male testes in a stepwise fashion. Spermatogenesis is highly dependent upon optimal conditions for the process to occur correctly, and is essential for sexual reproduction. DNA methylation and histone modification have been implicated in the regulation of this process. It starts during puberty and usually continues uninterrupted until death, although a slight decrease can be discerned in the quantity of produced sperm with increase in age (see Male infertility).
Spermatogenesis starts in the bottom part of the seminiferous tubules and, progressively, cells move deeper into and along the tubules until the mature spermatozoa reach the lumen, where they are deposited. The division happens asynchronously; if the tube is cut transversally one can observe cells in different maturational states.
Document 1:::
Fertilisation or fertilization (see spelling differences), also known as generative fertilisation, syngamy and impregnation, is the fusion of gametes to give rise to a new individual organism or offspring and initiate its development. While processes such as insemination or pollination which happen before the fusion of gametes are also sometimes informally referred to as fertilisation, these are technically separate processes. The cycle of fertilisation and development of new individuals is called sexual reproduction. During double fertilisation in angiosperms the haploid male gamete combines with two haploid polar nuclei to form a triploid primary endosperm nucleus by the process of vegetative fertilisation.
History
In Antiquity, Aristotle conceived the formation of new individuals through fusion of male and female fluids, with form and function emerging gradually, in a mode called by him as epigenetic.
In 1784, Spallanzani established the need of interaction between the female's ovum and male's sperm to form a zygote in frogs. In 1827, von Baer observed a therian mammalian egg for the first time. Oscar Hertwig (1876), in Germany, described the fusion of nuclei of spermatozoa and of ova from sea urchin.
Evolution
The evolution of fertilisation is related to the origin of meiosis, as both are part of sexual reproduction, originated in eukaryotes. One theory states that meiosis originated from mitosis.
Fertilisation in plants
The gametes that participate in fertilisation of plants are the sperm (male) and the egg (female) cell. Various families of plants have differing methods by which the gametes produced by the male and female gametophytes come together and are fertilised. In Bryophyte land plants, fertilisation of the sperm and egg takes place within the archegonium. In seed plants, the male gametophyte is called a pollen grain. After pollination, the pollen grain germinates, and a pollen tube grows and penetrates the ovule through a tiny pore called a micropyle.
Document 2:::
The spermatid is the haploid male gametid that results from division of secondary spermatocytes. As a result of meiosis, each spermatid contains only half of the genetic material present in the original primary spermatocyte.
Spermatids are connected by cytoplasmic material and have superfluous cytoplasmic material around their nuclei.
When formed, early round spermatids must undergo further maturational events to develop into spermatozoa, a process termed spermiogenesis (also termed spermeteliosis).
The spermatids begin to grow a living thread, develop a thickened mid-piece where the mitochondria become localised, and form an acrosome. Spermatid DNA also undergoes packaging, becoming highly condensed. The DNA is packaged firstly with specific nuclear basic proteins, which are subsequently replaced with protamines during spermatid elongation. The resultant tightly packed chromatin is transcriptionally inactive.
In 2016 scientists at Nanjing Medical University claimed they had produced cells resembling mouse spermatids artificially from stem cells. They injected these spermatids into mouse eggs and produced pups.
DNA repair
As postmeiotic germ cells develop to mature sperm they progressively lose the ability to repair DNA damage that may then accumulate and be transmitted to the zygote and ultimately the embryo. In particular, the repair of DNA double-strand breaks by the non-homologous end joining pathway, although present in round spermatids, appears to be lost as they develop into elongated spermatids.
Additional images
See also
List of distinct cell types in the adult human body
Document 3:::
Reproductive biology includes both sexual and asexual reproduction.
Reproductive biology includes a wide number of fields:
Reproductive systems
Endocrinology
Sexual development (Puberty)
Sexual maturity
Reproduction
Fertility
Human reproductive biology
Endocrinology
Human reproductive biology is primarily controlled through hormones, which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands, and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands.
Reproductive systems
Internal and external organs are included in the reproductive system. There are two reproductive systems including the male and female, which contain different organs from one another. These systems work together in order to produce offspring.
Female reproductive system
The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth.
These structures include:
Ovaries
Oviducts
Uterus
Vagina
Mammary Glands
Estrogen is one of the sexual reproductive hormones that aid in the sexual reproductive system of the female.
Male reproductive system
The male reproductive system includes testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia.
Testosterone, an androgen, although present in both males and females, is relatively more abundant in males. Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system. However, the enzyme aromatase is present in testes and capable of synthesizing estrogens from androgens. Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract.
Animal Reproductive Biology
Animal reproduction occurs sexually or asexually, depending on the species.
Document 4:::
Sperm (: sperm or sperms) is the male reproductive cell, or gamete, in anisogamous forms of sexual reproduction (forms in which there is a larger, female reproductive cell and a smaller, male one). Animals produce motile sperm with a tail known as a flagellum, which are known as spermatozoa, while some red algae and fungi produce non-motile sperm cells, known as spermatia. Flowering plants contain non-motile sperm inside pollen, while some more basal plants like ferns and some gymnosperms have motile sperm.
Sperm cells form during the process known as spermatogenesis, which in amniotes (reptiles and mammals) takes place in the seminiferous tubules of the testes. This process involves the production of several successive sperm cell precursors, starting with spermatogonia, which differentiate into spermatocytes. The spermatocytes then undergo meiosis, reducing their chromosome number by half, which produces spermatids. The spermatids then mature and, in animals, construct a tail, or flagellum, which gives rise to the mature, motile sperm cell. This whole process occurs constantly and takes around 3 months from start to finish.
Sperm cells cannot divide and have a limited lifespan, but after fusion with egg cells during fertilization, a new organism begins developing, starting as a totipotent zygote. The human sperm cell is haploid, so that its 23 chromosomes can join the 23 chromosomes of the female egg to form a diploid cell with 46 paired chromosomes. In mammals, sperm is stored in the epididymis and is released from the penis during ejaculation in a fluid known as semen.
The word sperm is derived from the Greek word σπέρμα, sperma, meaning "seed".
Evolution
It is generally accepted that isogamy is the ancestor to sperm and eggs. However, there are no fossil records for the evolution of sperm and eggs from isogamy leading there to be a strong emphasis on mathematical models to understand the evolution of sperm.
A widespread hypothesis states that sperm evolve
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the process of the fusion of a sperm and an egg called?
A. embryo
B. stimulation
C. fertilization
D. Sperm
Answer:
|
|
sciq-6254
|
multiple_choice
|
What has two chains of nucleotides, one more than rna?
|
[
"fna",
"dna",
"mna",
"gna"
] |
B
|
Relevant Documents:
Document 0:::
Biomolecular structure is the intricate folded, three-dimensional shape that is formed by a molecule of protein, DNA, or RNA, and that is important to its function. The structure of these molecules may be considered at any of several length scales ranging from the level of individual atoms to the relationships among entire protein subunits. This useful distinction among scales is often expressed as a decomposition of molecular structure into four levels: primary, secondary, tertiary, and quaternary. The scaffold for this multiscale organization of the molecule arises at the secondary level, where the fundamental structural elements are the molecule's various hydrogen bonds. This leads to several recognizable domains of protein structure and nucleic acid structure, including such secondary-structure features as alpha helixes and beta sheets for proteins, and hairpin loops, bulges, and internal loops for nucleic acids.
The terms primary, secondary, tertiary, and quaternary structure were introduced by Kaj Ulrik Linderstrøm-Lang in his 1951 Lane Medical Lectures at Stanford University.
Primary structure
The primary structure of a biopolymer is the exact specification of its atomic composition and the chemical bonds connecting those atoms (including stereochemistry). For a typical unbranched, un-crosslinked biopolymer (such as a molecule of a typical intracellular protein, or of DNA or RNA), the primary structure is equivalent to specifying the sequence of its monomeric subunits, such as amino acids or nucleotides.
The primary structure of a protein is reported starting from the amino N-terminus to the carboxyl C-terminus, while the primary structure of DNA or RNA molecule is known as the nucleic acid sequence reported from the 5' end to the 3' end.
The nucleic acid sequence refers to the exact sequence of nucleotides that comprise the whole molecule. Often, the primary structure encodes sequence motifs that are of functional importance. Some examples of such motif
Document 1:::
A nucleic acid sequence is a succession of bases within the nucleotides forming alleles within a DNA (using GACT) or RNA (GACU) molecule. This succession is denoted by a series of five different letters that indicate the order of the nucleotides. By convention, sequences are usually presented from the 5' end to the 3' end. For DNA, with its double helix, there are two possible directions for the notated sequence; of these two, the sense strand is used. Because nucleic acids are normally linear (unbranched) polymers, specifying the sequence is equivalent to defining the covalent structure of the entire molecule. For this reason, the nucleic acid sequence is also termed the primary structure.
The sequence represents biological information. Biological deoxyribonucleic acid represents the information which directs the functions of an organism.
Nucleic acids also have a secondary structure and tertiary structure. Primary structure is sometimes mistakenly referred to as "primary sequence". However there is no parallel concept of secondary or tertiary sequence.
Nucleotides
Nucleic acids consist of a chain of linked units called nucleotides. Each nucleotide consists of three subunits: a phosphate group and a sugar (ribose in the case of RNA, deoxyribose in DNA) make up the backbone of the nucleic acid strand, and attached to the sugar is one of a set of nucleobases. The nucleobases are important in base pairing of strands to form higher-level secondary and tertiary structures such as the famed double helix.
The possible letters are A, C, G, and T, representing the four nucleotide bases of a DNA strand – adenine, cytosine, guanine, thymine – covalently linked to a phosphodiester backbone. In the typical case, the sequences are printed abutting one another without gaps, as in the sequence AAAGTCTGAC, read left to right in the 5' to 3' direction. With regards to transcription, a sequence is on the coding strand if it has the same order as the transcribed RNA.
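To make the base-pairing and 5'→3' conventions above concrete, here is a minimal illustrative Python sketch (the function name and usage are assumptions for illustration, not from the source) that returns the sequence of the complementary strand, read in its own 5'→3' direction:

```python
# Minimal sketch: reverse complement of a DNA sequence.
# Base pairing: A<->T, C<->G; the complementary strand is antiparallel,
# so the string is also reversed to report it 5'->3'.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq: str) -> str:
    """Return the 5'->3' sequence of the strand complementary to `seq`."""
    return "".join(COMPLEMENT[base] for base in reversed(seq.upper()))

if __name__ == "__main__":
    # Uses the example sequence from the text above.
    print(reverse_complement("AAAGTCTGAC"))  # -> GTCAGACTTT
```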
Document 2:::
The central dogma of molecular biology is an explanation of the flow of genetic information within a biological system. It is often stated as "DNA makes RNA, and RNA makes protein", although this is not its original meaning. It was first stated by Francis Crick in 1957, then published in 1958:
He re-stated it in a Nature paper published in 1970: "The central dogma of molecular biology deals with the detailed residue-by-residue transfer of sequential information. It states that such information cannot be transferred back from protein to either protein or nucleic acid."
A second version of the central dogma is popular but incorrect. This is the simplistic DNA → RNA → protein pathway published by James Watson in the first edition of The Molecular Biology of the Gene (1965). Watson's version differs from Crick's because Watson describes a two-step (DNA → RNA and RNA → protein) process as the central dogma. While the dogma as originally stated by Crick remains valid today, Watson's version does not.
The dogma is a framework for understanding the transfer of sequence information between information-carrying biopolymers, in the most common or general case, in living organisms. There are 3 major classes of such biopolymers: DNA and RNA (both nucleic acids), and protein. There are conceivable direct transfers of information that can occur between these. The dogma classes these into 3 groups of 3: three general transfers (believed to occur normally in most cells), three special transfers (known to occur, but only under specific conditions in case of some viruses or in a laboratory), and three unknown transfers (believed never to occur). The general transfers describe the normal flow of biological information: DNA can be copied to DNA (DNA replication), DNA information can be copied into mRNA (transcription), and proteins can be synthesized using the information in mRNA as a template (translation). The special transfers describe: RNA being copied from RNA (RNA replication), DNA being synthesized from an RNA template (reverse transcription), and protein being translated directly from a DNA template.
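As a toy illustration of the two most familiar general transfers, transcription and translation, consider the sketch below. The codon table is deliberately truncated to four of the 64 real entries, and all names are illustrative assumptions rather than part of the cited text:

```python
# Sketch of two general transfers of the central dogma:
# transcription (DNA -> mRNA) and translation (mRNA -> protein).
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def transcribe(dna_coding_strand: str) -> str:
    """mRNA has the same sequence as the coding strand, with U for T."""
    return dna_coding_strand.replace("T", "U")

def translate(mrna: str) -> list[str]:
    """Read codons in frame until a stop codon (or unknown codon)."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3])
        if residue in (None, "STOP"):
            break
        peptide.append(residue)
    return peptide

print(translate(transcribe("ATGTTTGGCTAA")))  # -> ['Met', 'Phe', 'Gly']
```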
Document 3:::
In molecular biology, a polynucleotide () is a biopolymer composed of nucleotide monomers that are covalently bonded in a chain. DNA (deoxyribonucleic acid) and RNA (ribonucleic acid) are examples of polynucleotides with distinct biological functions. DNA consists of two chains of polynucleotides, with each chain in the form of a helix (like a spiral staircase).
Sequence
Although DNA and RNA do not generally occur in the same polynucleotide, the four species of nucleotides may occur in any order in the chain. The sequence of DNA or RNA species for a given polynucleotide is the main factor determining its function in a living organism or a scientific experiment.
Polynucleotides in organisms
Polynucleotides occur naturally in all living organisms. The genome of an organism consists of complementary pairs of enormously long polynucleotides wound around each other in the form of a double helix. Polynucleotides have a variety of other roles in organisms.
Polynucleotides in scientific experiments
Polynucleotides are used in biochemical experiments such as polymerase chain reaction (PCR) or DNA sequencing. Polynucleotides are made artificially from oligonucleotides, smaller nucleotide chains with generally fewer than 30 subunits. A polymerase enzyme is used to extend the chain by adding nucleotides according to a pattern specified by the scientist.
Prebiotic condensation of nucleobases with ribose
In order to understand how life arose, knowledge is required of the chemical pathways that permit formation of the key building blocks of life under plausible prebiotic conditions. According to the RNA world hypothesis free-floating ribonucleotides were present in the primitive soup. These were the fundamental molecules that combined in series to form RNA. Molecules as complex as RNA must have arisen from small molecules whose reactivity was governed by physico-chemical processes. RNA is composed of purine and pyrimidine nucleotides, both of which are necessary for re
Document 4:::
Spiegelman's Monster is an RNA chain of only 218 nucleotides that is able to be reproduced by the RNA replication enzyme RNA-dependent RNA polymerase, also called RNA replicase. It is named after its creator, Sol Spiegelman, of the University of Illinois at Urbana-Champaign who first described it in 1965.
Description
Spiegelman introduced RNA from a simple bacteriophage Qβ (Qβ) into a solution which contained Qβ's RNA replicase, some free nucleotides, and some salts. In this environment, the RNA started to be replicated. After a while, Spiegelman took some RNA and moved it to another tube with fresh solution. This process was repeated.
Shorter RNA chains were able to be replicated faster, so the RNA became shorter and shorter as selection favored speed. After 74 generations, the original strand with 4,500 nucleotide bases ended up as a dwarf genome with only 218 bases. This short RNA sequence replicated very quickly in these unnatural circumstances.
Further work
M. Sumper and R. Luce of Manfred Eigen's laboratory replicated the experiment, except without adding RNA, only RNA bases and Qβ replicase. They found that under the right conditions the Qβ replicase can spontaneously generate RNA which evolves into a form similar to Spiegelman's Monster.
Eigen built on Spiegelman's work and produced a similar system further degraded to just 48 or 54 nucleotides—the minimum required for the binding of the replication enzyme, this time a combination of HIV-1 reverse transcriptase and T7 RNA polymerase.
See also
Abiogenesis
RNA world hypothesis
PAH world hypothesis
Viroid
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What has two chains of nucleotides, one more than rna?
A. fna
B. dna
C. mna
D. gna
Answer:
|
|
sciq-4982
|
multiple_choice
|
What are steroid hormones made of?
|
[
"amino acid",
"lipids",
"water",
"organisms"
] |
B
|
Relevant Documents:
Document 0:::
The following is a list of hormones found in Homo sapiens. Spelling is not uniform for many hormones. For example, current North American and international usage uses estrogen and gonadotropin, while British usage retains the Greek digraph in oestrogen and favours the earlier spelling gonadotrophin.
Hormone listing
Steroid
Document 1:::
Norsteroids (nor-, L. norma, from "normal" in chemistry, indicating carbon removal) are a structural class of steroids that have had an atom or atoms (typically carbon) removed, biosynthetically or synthetically, from positions of branching off of rings or side chains (e.g., removal of methyl groups), or from within rings of the steroid ring system. For instance, 19-norsteroids (e.g., 19-norprogesterone) constitute an important class of natural and synthetic steroids derived by removal of the methyl group of the natural product progesterone; the equivalent change between testosterone and 19-nortestosterone (nandrolone) is illustrated below.
Examples
Norsteroid examples include: 19-norpregnane (from pregnane), desogestrel, ethylestrenol, etynodiol diacetate, ethinylestradiol, gestrinone, levonorgestrel, norethisterone (norethindrone), norgestrel, norpregnatriene (from pregnatriene), quinestrol, 19-norprogesterone (from a progesterone), Nomegestrol acetate, 19-nortestosterone (from a testosterone), and norethisterone acetate.
Document 2:::
Here are some of the steroids, grouped by catalytic activity of the CYP11B1 isozyme:
strong activity:
11-deoxycortisol to cortisol,
11-deoxycorticosterone to corticosterone;
medium activity:
progesterone to 11β-hydroxyprogesterone
Document 3:::
20α,22R-Dihydroxycholesterol, or (3β)-cholest-5-ene-3,20,22-triol is an endogenous, metabolic intermediate in the biosynthesis of the steroid hormones from cholesterol. Cholesterol ((3β)-cholest-5-en-3-ol) is hydroxylated by cholesterol side-chain cleavage enzyme (P450scc) to form 22R-hydroxycholesterol, which is subsequently hydroxylated again by P450scc to form 20α,22R-dihydroxycholesterol, and finally the bond between carbons 20 and 22 is cleaved by P450scc to form pregnenolone ((3β)-3-hydroxypregn-5-en-20-one), the precursor to the steroid hormones.
See also
22R-Hydroxycholesterol
27-Hydroxycholesterol
Document 4:::
Dihydroprogesterone may refer to:
5α-Dihydroprogesterone
5β-Dihydroprogesterone
20α-Dihydroprogesterone (20α-hydroxyprogesterone)
20β-Dihydroprogesterone (20β-hydroxyprogesterone)
3α-Dihydroprogesterone
3β-Dihydroprogesterone
17α,21-Dihydroprogesterone (11-deoxycortisol)
11β,21-Dihydroprogesterone (corticosterone)
See also
Progesterone
Pregnanedione
Pregnanolone
Pregnanediol
Pregnanetriol
Hydroxyprogesterone
Biochemistry
Pregnanes
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are steroid hormones made of?
A. amino acid
B. lipids
C. water
D. organisms
Answer:
|
|
sciq-1659
|
multiple_choice
|
What is the fracture called when rocks on both sides move?
|
[
"a crevice",
"a fault",
"a shear",
"a stress fracture"
] |
B
|
Relevant Documents:
Document 0:::
Plasticity theory for rocks is concerned with the response of rocks to loads beyond the elastic limit. Historically, conventional wisdom has it that rock is brittle and fails by fracture while plasticity is identified with ductile materials. In field scale rock masses, structural discontinuities exist in the rock indicating that failure has taken place. Since the rock has not fallen apart, contrary to expectation of brittle behavior, clearly elasticity theory is not the last word.
Theoretically, the concept of rock plasticity is based on soil plasticity which is different from metal plasticity. In metal plasticity, for example in steel, the size of a dislocation is sub-grain size while for soil it is the relative movement of microscopic grains. The theory of soil plasticity was developed in the 1960s at Rice University to provide for inelastic effects not observed in metals. Typical behaviors observed in rocks include strain softening, perfect plasticity, and work hardening.
Application of continuum theory is possible in jointed rocks because of the continuity of tractions across joints even through displacements may be discontinuous. The difference between an aggregate with joints and a continuous solid is in the type of constitutive law and the values of constitutive parameters.
Experimental evidence
Experiments are usually carried out with the intention of characterizing the mechanical behavior of rock in terms of rock strength. The strength is the limit to elastic behavior and delineates the regions where plasticity theory is applicable. Laboratory tests for characterizing rock plasticity fall into four overlapping categories: confining pressure tests, pore pressure or effective stress tests, temperature-dependent tests, and strain rate-dependent tests. Plastic behavior has been observed in rocks using all these techniques since the early 1900s.
The Boudinage experiments show that localized plasticity is observed in certain rock specimens that ha
Document 1:::
Rock mechanics is a theoretical and applied science of the mechanical behavior of rocks and rock masses.
Compared to geology, it is the branch of mechanics concerned with the response of rock and rock masses to the force fields of their physical environment.
Background
Rock mechanics is part of a much broader subject of geomechanics, which is concerned with the mechanical responses of all geological materials, including soils.
Rock mechanics is concerned with the application of the principles of engineering mechanics to the design of structures built in or on rock. The structure could include many objects such as a drilling well, a mine shaft, a tunnel, a reservoir dam, a repository component, or a building. Rock mechanics is used in many engineering disciplines, but is primarily used in Mining, Civil, Geotechnical, Transportation, and Petroleum Engineering.
Rock mechanics answers questions such as, "is reinforcement necessary for a rock, or will it be able to handle whatever load it is faced with?" It also includes the design of reinforcement systems, such as rock bolting patterns.
Assessing the Project Site
Before any work begins, the construction site must be investigated properly to inform of the geological conditions of the site. Field observations, deep drilling, and geophysical surveys, can all give necessary information to develop a safe construction plan and create a site geological model. The level of investigation conducted at this site depends on factors such as budget, time frame, and expected geological conditions.
The first step of the investigation is the collection of maps and aerial photos to analyze. This can provide information about potential sinkholes, landslides, erosion, etc. Maps can provide information on the rock type of the site, geological structure, and boundaries between bedrock units.
Boreholes
Creating a borehole is a technique that consists of drilling through the ground in various areas at various depths, to get a better picture of the subsurface geological conditions.
Document 2:::
The shear strength of a discontinuity in a soil or rock mass may have a strong impact on the mechanical behavior of a soil or rock mass. The shear strength of a discontinuity is often considerably lower than the shear strength of the blocks of intact material in between the discontinuities, and therefore influences, for example, tunnel, foundation, or slope engineering, but also the stability of natural slopes. Many slopes, natural and man-made, fail due to a low shear strength of discontinuities in the soil or rock mass in the slope. The deformation characteristics of a soil or rock mass are also influenced by the shear strength of the discontinuities. For example, the modulus of deformation is reduced, and the deformation becomes plastic (i.e. non-reversible deformation on reduction of stress) rather than elastic (i.e. reversible deformation). This may cause, for example, larger settlement of foundations, which is also permanent even if the load is only temporary. Furthermore, the shear strength of discontinuities influences the stress distribution in a soil or rock mass.
Shear strength
The shear strength along a discontinuity in a soil or rock mass in geotechnical engineering is governed by the persistence of the discontinuity, roughness of discontinuity surfaces, infill material in the discontinuity, presence and pressure of gasses and fluids (e.g. water, oil), and possible solution (e.g. karst) and cementation along the discontinuity. Further the shear strength is dependent on whether the discontinuity has moved before in the geological history (i.e. are the asperities on opposing walls of the discontinuity fitting or non-fitting, or have the asperities been sheared off).
Determination shear strength
Only for simple models of discontinuities can the shear strength be calculated analytically. For real discontinuities no analytical calculation method exists. Testing on various scales in the laboratory or in the field, or empirical calculations based on characteristics of the discontinuity, is used instead.
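For orientation only, a common first-order way to quantify the shear strength discussed above is the Mohr-Coulomb criterion; this is a generic textbook model, not a method endorsed by the excerpt, and real discontinuities often require the testing described above:

```latex
% Mohr-Coulomb approximation for shear strength along a discontinuity:
%   \tau      shear strength of the discontinuity
%   c         cohesion (often near zero for clean joints)
%   \sigma_n  effective normal stress across the discontinuity
%   \varphi   friction angle of the discontinuity surface
\tau = c + \sigma_n \tan\varphi
```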
Document 3:::
Fracture is the appearance of a crack or complete separation of an object or material into two or more pieces under the action of stress. The fracture of a solid usually occurs due to the development of certain displacement discontinuity surfaces within the solid. If a displacement develops perpendicular to the surface, it is called a normal tensile crack or simply a crack; if a displacement develops tangentially, it is called a shear crack, slip band or dislocation.
Brittle fractures occur without any apparent deformation before fracture. Ductile fractures occur after visible deformation. Fracture strength, or breaking strength, is the stress when a specimen fails or fractures. The detailed understanding of how a fracture occurs and develops in materials is the object of fracture mechanics.
Strength
Fracture strength, also known as breaking strength, is the stress at which a specimen fails via fracture. This is usually determined for a given specimen by a tensile test, which charts the stress–strain curve. The final recorded point is the fracture strength.
Ductile materials have a fracture strength lower than the ultimate tensile strength (UTS), whereas in brittle materials the fracture strength is equivalent to the UTS. If a ductile material reaches its ultimate tensile strength in a load-controlled situation, it will continue to deform, with no additional load application, until it ruptures. However, if the loading is displacement-controlled, the deformation of the material may relieve the load, preventing rupture.
The statistics of fracture in random materials show very intriguing behavior, which architects and engineers noted quite early. Indeed, fracture or breakdown studies might be the oldest physical science studies, and they remain intriguing and very much alive. Leonardo da Vinci, more than 500 years ago, observed that the tensile strengths of nominally identical specimens of iron wire decrease with increasing length of the wire.
Document 4:::
Shale Gouge Ratio (typically abbreviated to SGR) is a mathematical algorithm that aims to predict the fault rock types for simple fault zones developed in sedimentary sequences dominated by sandstone and shale.
The parameter is widely used in the oil and gas exploration and production industries to enable quantitative predictions to be made regarding the hydrodynamic behavior of faults.
Definition
At any point on a fault surface, the shale gouge ratio is equal to the net shale/clay content of the rocks that have slipped past that point.
The SGR algorithm assumes complete mixing of the wall-rock components in any particular 'throw interval'. The parameter is a measure of the 'upscaled' composition of the fault zone.
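The definition above amounts to a simple weighted sum: the clay-bearing thickness within the slipped interval (the throw window) divided by the throw, usually quoted as a percentage. The following is a minimal sketch; the function name, variable names, and numbers are illustrative assumptions, not from the source:

```python
# Sketch of the Shale Gouge Ratio at a point on a fault:
# SGR = (sum of shale/clay thickness slipped past the point) / throw * 100%
def shale_gouge_ratio(beds, throw):
    """beds: list of (thickness, clay_fraction) tuples for the
    stratigraphy within the throw window; throw: fault throw,
    in the same length units as the thicknesses."""
    clay_thickness = sum(t * v for t, v in beds)
    return 100.0 * clay_thickness / throw

# Example: three beds filling a 50 m throw window,
# with clay fractions of 0.8, 0.1 and 0.5.
beds = [(10.0, 0.8), (25.0, 0.1), (15.0, 0.5)]
print(shale_gouge_ratio(beds, throw=50.0))  # -> 36.0 (percent)
```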
Application to hydrocarbon exploration
Hydrocarbon exploration involves identifying and defining accumulations of hydrocarbons that are trapped in subsurface structures. These structures are often segmented by faults. For a thorough trap evaluation, it is necessary to predict whether the fault is sealing or leaking to hydrocarbons and also to provide an estimate of how 'strong' the fault seal might be. The 'strength' of a fault seal can be quantified in terms of subsurface pressure, arising from the buoyancy forces within the hydrocarbon column, that the fault can support before it starts to leak. When acting on a fault zone this subsurface pressure is termed capillary threshold pressure.
For faults developed in sandstone and shale sequences, the first order control on capillary threshold pressure is likely to be the composition, in particular the shale or clay content, of the fault-zone material. SGR is used to estimate the shale content of the fault zone.
In general, fault zones with higher clay content, equivalent to higher SGR values, can support higher capillary threshold pressures. On a broader scale, other factors also exert a control on the threshold pressure, such as depth of the rock sequence at the time of faulting, and the maximum depth of burial.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the fracture called when rocks on both sides move?
A. a crevice
B. a fault
C. a shear
D. a stress fracture
Answer:
|
|
ai2_arc-416
|
multiple_choice
|
Margaret is running a full lap around a circular track. She is facing north when she starts. What direction will she be facing after she has completed half of a lap?
|
[
"north",
"south",
"east",
"west"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
A. increases
B. decreases
C. stays the same
D. Impossible to tell/need more information
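The accepted textbook answer for a quasi-static (reversible) adiabatic expansion is "decreases", which follows from one line of algebra; the parenthetical caveat after the equation is an added note explaining why the "need more information" option is also defensible.

```latex
% Reversible adiabatic expansion of an ideal gas, gamma = C_p/C_v > 1:
TV^{\gamma-1} = \text{const}
\quad\Rightarrow\quad
\frac{T_2}{T_1} = \left(\frac{V_1}{V_2}\right)^{\gamma-1} < 1
\quad \text{for } V_2 > V_1 .
```

(In a free, Joule-type adiabatic expansion the gas does no work, so the temperature of an ideal gas stays the same; the question as posed does not distinguish the two cases.)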
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in conceptual understanding over the course of instruction.
Document 1:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with a total duration of one hour and 30 minutes each.
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs, among other methods.
Document 3:::
Test equating traditionally refers to the statistical process of determining comparable scores on different forms of an exam. It can be accomplished using either classical test theory or item response theory.
In item response theory, equating is the process of placing scores from two or more parallel test forms onto a common score scale. The result is that scores from two different test forms can be compared directly, or treated as though they came from the same test form. When the tests are not parallel, the general process is called linking. It is the process of equating the units and origins of two scales on which the abilities of students have been estimated from results on different tests. The process is analogous to equating degrees Fahrenheit with degrees Celsius by converting measurements from one scale to the other. The determination of comparable scores is a by-product of equating that results from equating the scales obtained from test results.
Purpose
Suppose that Dick and Jane both take a test to become licensed in a certain profession. Because the high stakes (you get to practice the profession if you pass the test) may create a temptation to cheat, the organization that oversees the test creates two forms. If we know that Dick scored 60% on form A and Jane scored 70% on form B, do we know for sure which one has a better grasp of the material? What if form A is composed of very difficult items, while form B is relatively easy? Equating analyses are performed to address this very issue, so that scores are as fair as possible.
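As a concrete illustration of the Dick-and-Jane scenario, here is a minimal Python sketch of linear (mean-sigma) equating. The form means and standard deviations are invented for illustration and are not taken from any real exam:

def linear_equate(x, mean_x, sd_x, mean_y, sd_y):
    # Preserve the test taker's standardized position (z-score) across forms.
    z = (x - mean_x) / sd_x
    return mean_y + z * sd_y

# Hypothetical form statistics: form A is harder, so its mean is lower.
mean_a, sd_a = 55.0, 10.0
mean_b, sd_b = 68.0, 10.0

dick_on_b = linear_equate(60.0, mean_a, sd_a, mean_b, sd_b)
print(f"Dick's 60% on form A equates to {dick_on_b:.0f}% on form B")  # -> 73%

Under these made-up numbers, Dick's 60% on the harder form maps to 73% on form B, so his grasp of the material would actually exceed Jane's 70% once the difficulty difference is removed.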
Equating in item response theory
In item response theory, person "locations" (measures of some quality being assessed by a test) are estimated on an interval scale; i.e., locations are estimated in relation to a unit and origin. It is common in educational assessment to employ tests in order to assess different groups of students with the intention of establishing a common scale by equating the origins, and when appropri
Document 4:::
Advanced Placement (AP) Physics C: Mechanics (also known as AP Mechanics) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to serve as a proxy for a one-semester calculus-based university course in mechanics. The content of Physics C: Mechanics overlaps with that of AP Physics 1, but Physics 1 is algebra-based, while Physics C is calculus-based. Physics C: Mechanics may be combined with its electricity and magnetism counterpart to form a year-long course that prepares for both exams.
Course content
Intended to be equivalent to an introductory college course in mechanics for physics or engineering majors, the course modules are:
Kinematics
Newton's laws of motion
Work, energy and power
Systems of particles and linear momentum
Circular motion and rotation
Oscillations and gravitation
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a Calculus I class.
This course is often compared to AP Physics 1: Algebra Based for its similar course material involving kinematics, work, motion, forces, rotation, and oscillations. However, AP Physics 1: Algebra Based lacks concepts found in Calculus I, like derivatives or integrals.
This course may be combined with AP Physics C: Electricity and Magnetism to make a unified Physics C course that prepares for both exams.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Mechanics is separate from the AP examination for AP Physics C: Electricity and Magnetism. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday aftern
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Margaret is running a full lap around a circular track. She is facing north when she starts. What direction will she be facing after she has completed half of a lap?
A. north
B. south
C. east
D. west
Answer:
|
|
sciq-11321
|
multiple_choice
|
Via what process do substances move from one cell to another?
|
[
"reverse transferase",
"plasmodesmata",
"autolysis",
"downregulation"
] |
B
|
Relevant Documents:
Document 0:::
Transfer cells are specialized parenchyma cells that have an increased surface area due to infoldings of the plasma membrane. They facilitate the transport of sugars from a sugar source, mainly mature leaves, to a sugar sink, often developing leaves or fruits. They are found in the nectaries of flowers and in some carnivorous plants.
Transfer cells are especially found in plants in regions where nutrients are absorbed or secreted.
The term transfer cell was coined by Brian Gunning and John Stewart Pate. Their presence is generally correlated with the existence of extensive solute influxes across the plasma membrane.
Document 1:::
Transcellular transport involves the movement of solutes through a cell, as opposed to between cells. It can occur in three different ways: active transport, passive transport, and transcytosis.
Active Transport
Main article: Active transport
Active transport is the process of moving molecules from an area of low concentration to an area of high concentration. There are two types of active transport: primary active transport and secondary active transport. Primary active transport uses adenosine triphosphate (ATP) to move specific molecules and solutes against their concentration gradients. Examples of ions moved this way are potassium (K+), sodium (Na+), and calcium (Ca2+). One place in the human body where this occurs is the intestine, with the uptake of glucose. In secondary active transport, one solute moves down its electrochemical gradient, and the energy released drives the transport of another solute from low concentration to high concentration. An example is the movement of glucose within the proximal convoluted tubule (PCT).
Passive Transport
Main article: Passive transport
Passive transport is the process of moving molecules from an area of high concentration to an area of low concentration without expending any energy. There are two types of passive transport: passive diffusion and facilitated diffusion. Passive diffusion is the unassisted movement of molecules from high concentration to low concentration across a permeable membrane. One example of passive diffusion is the gas exchange of oxygen and carbon dioxide between the blood and the lungs. Facilitated diffusion is the movement of polar molecules down the concentration gradient with the assistance of membrane proteins. Because these molecules are polar, they are repelled by the hydrophobic sections of the permeable membrane and therefore need the assistance of membrane proteins. Both t
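Although the passage above does not state it, the standard quantitative description of passive diffusion is Fick's first law, given here as a supplementary note:

$$J = -D\,\frac{dC}{dx},$$

where $J$ is the diffusion flux, $D$ the diffusion coefficient, and $dC/dx$ the concentration gradient; the negative sign reflects movement from high to low concentration.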
Document 2:::
This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines.
Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mind: neuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, they have provided information on certain diseases, which has aided overall in the understanding of human health.
Basic life science branches
Biology – scientific study of life
Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans
Astrobiology – the study of the formation and presence of life in the universe
Bacteriology – study of bacteria
Biotechnology – study of combination of both the living organism and technology
Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level
Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge
Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum
Document 3:::
Trans-endocytosis is the biological process where material created in one cell undergoes endocytosis (enters) into another cell. If the material is large enough, this can be observed using an electron microscope. Trans-endocytosis from neurons to glia has been observed using time-lapse microscopy.
Trans-endocytosis also applies to molecules. For example, this process is involved when a part of the protein Notch is cleaved off and undergoes endocytosis into its neighboring cell. Without Notch trans-endocytosis, there would be too many neurons in a developing embryo. Trans-endocytosis is also involved in cell movement when the protein ephrin is bound by its receptor from a neighboring cell.
Document 4:::
The School of Biological Sciences is a School within the Faculty of Biology, Medicine and Health at The University of Manchester. Biology at the University of Manchester and its precursor institutions has gone through a number of reorganizations (see History below), the latest of which was the change from a Faculty of Life Sciences to the current School.
Academics
Research
The School, though unitary for teaching, is divided into a number of broadly defined sections for research purposes. These sections consist of Cellular Systems, Disease Systems, Molecular Systems, Neuro Systems, and Tissue Systems.
Research in the School is structured into multiple research groups including the following themes:
Cell-Matrix Research (part of the Wellcome Trust Centre for Cell-Matrix Research)
Cell Organisation and Dynamics
Computational and Evolutionary Biology
Developmental Biology
Environmental Research
Eye and Vision Sciences
Gene Regulation and Cellular Biotechnology
History of Science, Technology and Medicine
Immunology and Molecular Microbiology
Molecular Cancer Studies
Neurosciences (part of the University of Manchester Neurosciences Research Institute)
Physiological Systems & Disease
Structural and Functional Systems
The School hosts a number of research centres, including the Manchester Centre for Biophysics and Catalysis, the Wellcome Trust Centre for Cell-Matrix Research, the Centre of Excellence in Biopharmaceuticals, the Centre for the History of Science, Technology and Medicine, the Centre for Integrative Mammalian Biology, and the Healing Foundation Centre for Tissue Regeneration. The Manchester Collaborative Centre for Inflammation Research is a joint endeavour with the Faculty of Medical and Human Sciences of Manchester University and industrial partners.
Research Assessment Exercise (2008)
The faculty entered research into the units of assessment (UOA) for Biological Sciences and Pre-clinical and Human Biological Sciences. In Biological Sciences 20% of outputs
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Via what process do substances move from one cell to another?
A. reverse transferase
B. plasmodesmata
C. autolysis
D. downregulation
Answer:
|