Dataset schema:
- id: string, length 6–15
- question_type: string, 1 class
- question: string, length 15–683
- choices: list, length 4
- answer: string, 5 classes
- explanation: string, 481 classes
- prompt: string, length 1.75k–10.9k
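The schema above can be checked mechanically. Below is a minimal sketch of a row validator; the `validate_row` helper and the assumption that the five answer classes are the letters A–E are mine (the records shown use A, C, and D), and the sample row is abbreviated, not a full dataset row.

```python
def validate_row(row: dict) -> None:
    """Raise AssertionError if `row` violates the schema bounds above."""
    assert 6 <= len(row["id"]) <= 15
    assert row["question_type"] == "multiple_choice"  # the single observed class
    assert 15 <= len(row["question"]) <= 683
    assert isinstance(row["choices"], list) and len(row["choices"]) == 4
    assert row["answer"] in {"A", "B", "C", "D", "E"}  # assumed letter classes
    assert isinstance(row["prompt"], str)

# Hypothetical, abbreviated row in the shape of the records below.
sample = {
    "id": "sciq-688",
    "question_type": "multiple_choice",
    "question": "Knees and elbows are examples of what part of the skeletal system?",
    "choices": ["joints", "nerves", "muscles", "cartilage"],
    "answer": "A",
    "prompt": "Relevant Documents: Document 0::: ... Answer:",
}
validate_row(sample)  # passes silently
```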
sciq-688
multiple_choice
Knees and elbows are examples of what part of the skeletal system?
[ "joints", "nerves", "muscles", "cartilage" ]
A
Relevant Documents: Document 0::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major, and alumni have gone on to post-graduate education, medical school, law, business and government. Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC), which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on the structure and functioning (anatomy, physiology, biochemistry) of the human body and their relevance to human health, with Caribbean-specific experience. The syllabus is organized under five main sections: living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, and the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New Document 1::: The following outline is provided as an overview of and topical guide to human anatomy: Human anatomy – scientific study of the morphology of the adult human. It is subdivided into gross anatomy and microscopic anatomy. Gross anatomy (also called topographical anatomy, regional anatomy, or anthropotomy) is the study of anatomical structures that can be seen by unaided vision. Microscopic anatomy is the study of minute anatomical structures assisted with microscopes, and includes histology (the study of the organization of tissues), and cytology (the study of cells). Essence of human anatomy Human body Anatomy Branches of human anatomy Gross anatomy- systemic or region-wise study of human body parts and organs. Gross anatomy encompasses cadaveric anatomy and osteology Microscopic anatomy/histology Cell biology (Cytology) & cytogenetics Surface anatomy Radiological anatomy Developmental anatomy/embryology Anatomy of the human body The following list of human anatomical structures is based on the Terminologia Anatomica, the international standard for anatomical nomenclature. While the order is standardized, the hierarchical relationships in the TA are somewhat vague, and thus are open to interpretation. 
General anatomy Parts of human body Head Ear Face Forehead Cheek Chin Eye Nose Nostril Mouth Lip Tongue Tooth Neck Torso Thorax Abdomen Pelvis Back Pectoral girdle Shoulder Arm Axilla Elbow Forearm Wrist Hand Finger Thumb Palm Lower limb Pelvic girdle Leg Buttocks Hip Thigh Knee Calf Foot Ankle Heel Toe Big toe Sole Cavities Cranial cavity Spinal cavity Thoracic cavity Abdominopelvic cavity Abdominal cavity Pelvic cavity Planes, lines, and regions Regions of head Regions of neck Anterior and lateral thoracic regions Abdominal regions Regions of back Perineal regions Regions of upper limb Regions of lower limb Bones General terms Bony part Cortical bone Compact bone Spongy bone Cartilaginous part Membranous part Periosteum Perichondrium Axial skele Document 2::: Work He is an associate professor of anatomy, Department of Anatomy, Howard University College of Medicine (US). He was among the most cited/influential anatomists in 2019. Books Single author or co-author books DIOGO, R. (2021). Meaning of Life, Human Nature and Delusions - How Tales about Love, Sex, Races, Gods and Progress Affect Us and Earth's Splendor. Springer (New York, US). MONTERO, R., ADESOMO, A. & R. DIOGO (2021). On viruses, pandemics, and us: a developing story [De virus, pandemias y nosotros: una historia en desarollo]. Independently published, Tucuman, Argentina. 495 pages. DIOGO, R., J. ZIERMANN, J. MOLNAR, N. SIOMAVA & V. ABDALA (2018). Muscles of Chordates: development, homologies and evolution. Taylor & Francis (Oxford, UK). 650 pages. DIOGO, R., B. SHEARER, J. M. POTAU, J. F. PASTOR, F. J. DE PAZ, J. ARIAS-MARTORELL, C. TURCOTTE, A. HAMMOND, E. VEREECKE, M. VANHOOF, S. NAUWELAERTS & B. WOOD (2017). Photographic and descriptive musculoskeletal atlas of bonobos - with notes on the weight, attachments, variations, and innervation of the muscles and comparisons with common chimpanzees and humans. Springer (New York, US). 259 pages. DIOGO, R. (2017). 
Evolution driven by organismal behavior: a unifying view of life, function, form, mismatches and trends. Springer Document 3::: H2.00.04.4.01001: Lymphoid tissue H2.00.05.0.00001: Muscle tissue H2.00.05.1.00001: Smooth muscle tissue H2.00.05.2.00001: Striated muscle tissue H2.00.06.0.00001: Nerve tissue H2.00.06.1.00001: Neuron H2.00.06.2.00001: Synapse H2.00.06.2.00001: Neuroglia h3.01: Bones h3.02: Joints h3.03: Muscles h3.04: Alimentary system h3.05: Respiratory system h3.06: Urinary system h3.07: Genital system h3.08: Document 4::: A biological system is a complex network which connects several biologically relevant entities. Biological organization spans several scales and is determined by different structures depending on what the system is. Examples of biological systems at the macro scale are populations of organisms. On the organ and tissue scale in mammals and other animals, examples include the circulatory system, the respiratory system, and the nervous system. On the micro to the nanoscopic scale, examples of biological systems are cells, organelles, macromolecular complexes and regulatory pathways. A biological system is not to be confused with a living system, such as a living organism. Organ and tissue systems These specific systems are widely studied in human anatomy and are also present in many other animals. Respiratory system: the organs used for breathing, the pharynx, larynx, bronchi, lungs and diaphragm. Digestive system: digestion and processing food with salivary glands, oesophagus, stomach, liver, gallbladder, pancreas, intestines, rectum and anus. Cardiovascular system (heart and circulatory system): pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels. Urinary system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine. Integumentary system: skin, hair, fat, and nails.
Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons. Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroid and adrenals, i.e., adrenal glands. Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream; includes the lymph and the nodes and vessels. Its functions include immune responses and the development of antibodies. Immune system: protects the organism from The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Knees and elbows are examples of what part of the skeletal system? A. joints B. nerves C. muscles D. cartilage Answer:
sciq-11372
multiple_choice
What happens to light when it reflects from a rough surface?
[ "gets diffused", "becomes isolated", "becomes concentrated", "reflects" ]
A
Relevant Documents: Document 0::: Total external reflection is a phenomenon traditionally involving X-rays, but in principle any type of electromagnetic or other wave, closely related to total internal reflection. Total internal reflection describes the fact that radiation (e.g. visible light) can, at certain angles, be totally reflected from an interface between two media of different indices of refraction (see Snell's law). Total internal reflection occurs when the first medium has a larger refractive index than the second medium, for example, light that starts in water and bounces off the water-to-air interface. Total external reflection is the situation where the light starts in air or vacuum (refractive index 1), and bounces off a material with index of refraction less than 1. For example, for X-rays the refractive index is frequently slightly less than 1, and therefore total external reflection can happen at a glancing angle. It is called external because the light bounces off the exterior of the material. This makes it possible to focus X-rays. Document 1::: The visual appearance of objects is given by the way in which they reflect and transmit light. The color of objects is determined by the parts of the spectrum of (incident white) light that are reflected or transmitted without being absorbed. Additional appearance attributes are based on the directional distribution of reflected (BRDF) or transmitted light (BTDF) described by attributes like glossy, shiny versus dull, matte, clear, turbid, distinct, etc. Since "visual appearance" is a general concept that also includes various other visual phenomena, such as color, visual texture, visual perception of shape, size, etc., the specific aspects related to how humans see different spatial distributions of light (absorbed, transmitted and reflected, either regularly or diffusely) have been given the name cesia.
It marks a difference (but also a relationship) with color, which could be defined as the sensation arising from different spectral compositions or distributions of light. Appearance of reflective objects The appearance of reflecting objects is determined by the way the surface reflects incident light. The reflective properties of the surface can be characterized by a closer look at the (micro-)topography of that surface. Structures on the surface and the texture of the surface are determined by typical dimensions between some 10 mm and 0.1 mm (the detection limit of the human eye is at ~0.07 mm). Smaller structures and features of the surface cannot be directly detected by the unaided eye, but their effect becomes apparent in objects or images reflected in the surface. Structures at and below 0.1 mm reduce the distinctness of image (DOI), structures in the range of 0.01 mm induce haze, and even smaller structures affect the gloss of the surface. Definition Diffusion, scattering: process by which the spatial distribution of a beam of radiation is changed in many directions when it is deviated by a surface or by a medium, without change of frequency of its monoch
Apparent gloss depends on the amount of specular reflection – light reflected from the surface in an equal amount and the symmetrical angle to the one of incoming light – in comparison with diffuse reflection – the amount of light scattered into other directions. Theory When light illuminates an object, it interacts with it in a number of ways: Absorbed within it (largely responsible for colour) Transmitted through it (dependent on the surface transparency and opacity) Scattered from or within it (diffuse reflection, haze and transmission) Specularly reflected from it (gloss) Variations in surface texture directly influence the level of specular reflection. Objects with a smooth surface, i.e. highly polished or containing coatings with finely dispersed pigments, appear shiny to the eye due to a large amount of light being reflected in a specular direction whilst rough surfaces reflect no specular light as the light is scattered in other directions and therefore appears dull. The image forming qualities of these surfaces are much lower making any reflections appear blurred and distorted. Substrate material type also influences the gloss of a surface. Non-metallic materials, i.e. plastics etc. produce a higher level of reflected light when illuminated at a greater illumination angle due to light being absorbed into the material or being diffusely scattered depending o Document 3::: Newton's rings is a phenomenon in which an interference pattern is created by the reflection of light between two surfaces, typically a spherical surface and an adjacent touching flat surface. It is named after Isaac Newton, who investigated the effect in 1666. When viewed with monochromatic light, Newton's rings appear as a series of concentric, alternating bright and dark rings centered at the point of contact between the two surfaces. 
When viewed with white light, it forms a concentric ring pattern of rainbow colors because the different wavelengths of light interfere at different thicknesses of the air layer between the surfaces. History The phenomenon was first described by Robert Hooke in his 1665 book Micrographia. Its name derives from the mathematician and physicist Sir Isaac Newton, who studied the phenomenon in 1666 while sequestered at home in Lincolnshire in the time of the Great Plague that had shut down Trinity College, Cambridge. He recorded his observations in an essay entitled "Of Colours". The phenomenon became a source of dispute between Newton, who favored a corpuscular nature of light, and Hooke, who favored a wave-like nature of light. Newton did not publish his analysis until after Hooke's death, as part of his treatise "Opticks" published in 1704. Theory The pattern is created by placing a very slightly convex curved glass on an optical flat glass. The two pieces of glass make contact only at the center. At other points there is a slight air gap between the two surfaces, increasing with radial distance from the center, as shown in Fig. 3. Consider monochromatic (single color) light incident from the top that reflects from both the bottom surface of the top lens and the top surface of the optical flat below it. The light passes through the glass lens until it comes to the glass-to-air boundary, where the transmitted light goes from a higher refractive index (n) value to a lower n value. The transmitted light passes through this bou Document 4::: Diffuse reflectance spectroscopy, or diffuse reflection spectroscopy, is a subset of absorption spectroscopy. It is sometimes called remission spectroscopy. Remission is the reflection or back-scattering of light by a material, while transmission is the passage of light through a material. The word remission implies a direction of scatter, independent of the scattering process. 
Remission includes both specular and diffusely back-scattered light. The word reflection often implies a particular physical process, such as specular reflection. The use of the term remission spectroscopy is relatively recent, and found first use in applications related to medicine and biochemistry. While the term is becoming more common in certain areas of absorption spectroscopy, the term diffuse reflectance is firmly entrenched, as in diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) and diffuse-reflectance ultraviolet–visible spectroscopy. Mathematical treatments related to diffuse reflectance and transmittance The mathematical treatments of absorption spectroscopy for scattering materials were originally largely borrowed from other fields. The most successful treatments use the concept of dividing a sample into layers, called plane parallel layers. They are generally those consistent with a two-flux or two-stream approximation. Some of the treatments require all the scattered light, both remitted and transmitted light, to be measured. Others apply only to remitted light, with the assumption that the sample is "infinitely thick" and transmits no light. These are special cases of the more general treatments. There are several general treatments, all of which are compatible with each other, related to the mathematics of plane parallel layers. They are the Stokes formulas, equations of Benford, Hecht finite difference formula, and the Dahm equation. For the special case of infinitesimal layers, the Kubelka–Munk and Schuster–Kortüm treatments also give compat The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What happens to light when it reflects from a rough surface? A. gets diffused B. becomes isolated C. becomes concentrated D. reflects Answer:
sciq-2165
multiple_choice
What is the term for matter that does not let any light pass through it, whether it absorbs light, reflects light, or does both?
[ "artificial", "devoid", "obscene", "opaque" ]
D
Relevant Documents: Document 0::: Invisibility is the state of an object that cannot be seen. An object in this state is said to be invisible (literally, "not visible"). The phenomenon is studied by physics and perceptual psychology. Since objects can be seen by light in the visible spectrum from a source reflecting off their surfaces and hitting the viewer's eye, the most natural form of invisibility (whether real or fictional) is an object that neither reflects nor absorbs light (that is, it allows light to pass through it). This is known as transparency, and is seen in many naturally occurring materials (although no naturally occurring material is 100% transparent). Invisibility perception depends on several optical and visual factors. For example, invisibility depends on the eyes of the observer and/or the instruments used. Thus an object can be classified as "invisible to" a person, animal, instrument, etc. In research on sensorial perception it has been shown that invisibility is perceived in cycles. Invisibility is often considered to be the supreme form of camouflage, as it does not reveal to the viewer any kind of vital signs, visual effects, or any frequencies of the electromagnetic spectrum detectable to the human eye, instead making use of radio, infrared or ultraviolet wavelengths. In illusion optics, invisibility is a special case of illusion effects: the illusion of free space. The term is often used in fantasy and science fiction, where objects cannot be seen by means of magic or hypothetical technology. Practical efforts Technology can be used theoretically or practically to render real-world objects invisible. Making use of a real-time image displayed on a wearable display, it is possible to create a see-through effect. This is known as active camouflage. Though stealth technology is declared to be invisible to radar, all officially disclosed applications of the technology can only reduce the size and/or clarity of the signature detected by radar.
In 2003 the Chilean s Document 1::: A physical property is any property of a physical system that is measurable, whose value describes the system's state and behavior. The changes in the physical properties of a system can be used to describe its changes between momentary states. A quantifiable physical property is called a physical quantity. Measurable physical quantities are often referred to as observables. Some physical properties are qualitative, such as shininess, brittleness, etc.; some general qualitative properties admit more specific related quantitative properties, such as in opacity, hardness, ductility, viscosity, etc. Physical properties are often characterized as intensive and extensive properties. An intensive property does not depend on the size or extent of the system, nor on the amount of matter in the object, while an extensive property shows an additive relationship. These classifications are in general only valid in cases when smaller subdivisions of the sample do not interact in some physical or chemical process when combined. Properties may also be classified with respect to the directionality of their nature. For example, isotropic properties do not change with the direction of observation, and anisotropic properties do have spatial variance. It may be difficult to determine whether a given property is a material property or not. Color, for example, can be seen and measured; however, what one perceives as color is really an interpretation of the reflective properties of a surface and the light used to illuminate it. In this sense, many ostensibly physical properties are called supervenient. A supervenient property is one which is actual, but is secondary to some underlying reality. This is similar to the way in which objects are supervenient on atomic structure.
A cup might have the physical properties of mass, shape, color, temperature, etc., but these properties are supervenient on the underlying atomic structure, which may in turn be supervenient on an underlying quan Document 2::: In physics, absorption of electromagnetic radiation is how matter (typically electrons bound in atoms) takes up a photon's energy — and so transforms electromagnetic energy into internal energy of the absorber (for example, thermal energy). A notable effect of the absorption of electromagnetic radiation is attenuation of the radiation; attenuation is the gradual reduction of the intensity of light waves as they propagate through the medium. Although the absorption of waves does not usually depend on their intensity (linear absorption), in certain conditions (optics) the medium's transparency changes by a factor that varies as a function of wave intensity, and saturable absorption (or nonlinear absorption) occurs. Quantifying absorption Many approaches can potentially quantify radiation absorption, with key examples following. 
The absorption coefficient along with some closely related derived quantities The attenuation coefficient (NB used infrequently with meaning synonymous with "absorption coefficient") The Molar attenuation coefficient (also called "molar absorptivity"), which is the absorption coefficient divided by molarity (see also Beer–Lambert law) The mass attenuation coefficient (also called "mass extinction coefficient"), which is the absorption coefficient divided by density The absorption cross section and scattering cross-section, related closely to the absorption and attenuation coefficients, respectively "Extinction" in astronomy, which is equivalent to the attenuation coefficient Other measures of radiation absorption, including penetration depth and skin effect, propagation constant, attenuation constant, phase constant, and complex wavenumber, complex refractive index and extinction coefficient, complex dielectric constant, electrical resistivity and conductivity. Related measures, including absorbance (also called "optical density") and optical depth (also called "optical thickness") All these quantities measure, at least to some ex Document 3::: Total external reflection is a phenomenon traditionally involving X-rays, but in principle any type of electromagnetic or other wave, closely related to total internal reflection. Total internal reflection describes the fact that radiation (e.g. visible light) can, at certain angles, be totally reflected from an interface between two media of different indices of refraction (see Snell's law). Total internal reflection occurs when the first medium has a larger refractive index than the second medium, for example, light that starts in water and bounces off the water-to-air interface. Total external reflection is the situation where the light starts in air and vacuum (refractive index 1), and bounces off a material with index of refraction less than 1. 
For example, in X-rays, the refractive index is frequently slightly less than 1, and therefore total external reflection can happen at a glancing angle. It is called external because the light bounces off the exterior of the material. This makes it possible to focus X-rays. Document 4::: Actinism () is the property of solar radiation that leads to the production of photochemical and photobiological effects. Actinism is derived from the Ancient Greek ἀκτίς, ἀκτῖνος ("ray, beam"). The word actinism is found, for example, in the terminology of imaging technology (esp. photography), medicine (concerning sunburn), and chemistry (concerning containers that protect from photo-degradation), and the concept of actinism is applied, for example, in chemical photography and X-ray imaging. Actinic () chemicals include silver salts used in photography and other light sensitive chemicals. In chemistry In chemical terms, actinism is the property of radiation that lets it be absorbed by a molecule and cause a photochemical reaction as a result. Albert Einstein was the first to correctly theorize that each photon would be able to cause only one molecular reaction. This distinction separates photochemical reactions from exothermic reduction reactions triggered by radiation. For general purposes, photochemistry is the commonly used vernacular rather than actinic or actino-chemistry, which are again more commonly seen used for photography or imaging. In medicine In medicine, actinic effects are generally described in terms of the dermis or outer layers of the body, such as eyes (see: Actinic conjunctivitis) and upper tissues that the sun would normally affect, rather than deeper tissues that higher-energy shorter-wavelength radiation such as x-ray and gamma might affect. Actinic is also used to describe medical conditions that are triggered by exposure to light, especially UV light (see actinic keratosis). The term actinic rays is used to refer to this phenomenon. 
In biology In biology, actinic light denotes light from solar or other sources that can cause photochemical reactions such as photosynthesis in a species. In photography Actinic light was first commonly used in early photography to distinguish light that would expose the monochrome films from light tha The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the term for matter that does not let any light pass through it, whether it absorbs light, reflects light, or does both? A. artificial B. devoid C. obscene D. opaque Answer:
sciq-5628
multiple_choice
What refers to the development and nourishment of an embryo within the mother’s body but not inside an egg?
[ "adolescence", "birth", "vivipary", "ovulation" ]
C
Relevant Documents: Document 0::: Development of the human body is the process of growth to maturity. The process begins with fertilization, where an egg released from the ovary of a female is penetrated by a sperm cell from a male. The resulting zygote develops through mitosis and cell differentiation, and the resulting embryo then implants in the uterus, where the embryo continues development through a fetal stage until birth. Further growth and development continues after birth, and includes both physical and psychological development that is influenced by genetic, hormonal, environmental and other factors. This continues throughout life: through childhood and adolescence into adulthood. Before birth Development before birth, or prenatal development, is the process in which a zygote, and later an embryo, and then a fetus develops during gestation. Prenatal development starts with fertilization and the formation of the zygote, the first stage in embryonic development which continues in fetal development until birth. Fertilization Fertilization occurs when the sperm successfully enters the ovum's membrane. The chromosomes of the sperm are passed into the egg to form a unique genome. The egg becomes a zygote and the germinal stage of embryonic development begins. The germinal stage refers to the time from fertilization, through the development of the early embryo, up until implantation. The germinal stage is over at about 10 days of gestation. The zygote contains a full complement of genetic material with all the biological characteristics of a single human being, and develops into the embryo. Embryonic development has four stages: the morula stage, the blastula stage, the gastrula stage, and the neurula stage. Prior to implantation, the embryo remains in a protein shell, the zona pellucida, and undergoes a series of rapid mitotic cell divisions called cleavage.
A week after fertilization the embryo still has not grown in size, but hatches from the zona pellucida and adheres to the lining o Document 1::: Human embryonic development, or human embryogenesis, is the development and formation of the human embryo. It is characterised by the processes of cell division and cellular differentiation of the embryo that occurs during the early stages of development. In biological terms, the development of the human body entails growth from a one-celled zygote to an adult human being. Fertilization occurs when the sperm cell successfully enters and fuses with an egg cell (ovum). The genetic material of the sperm and egg then combine to form the single cell zygote and the germinal stage of development commences. Embryonic development in the human, covers the first eight weeks of development; at the beginning of the ninth week the embryo is termed a fetus. The eight weeks has 23 stages. Human embryology is the study of this development during the first eight weeks after fertilization. The normal period of gestation (pregnancy) is about nine months or 40 weeks. The germinal stage refers to the time from fertilization through the development of the early embryo until implantation is completed in the uterus. The germinal stage takes around 10 days. During this stage, the zygote begins to divide, in a process called cleavage. A blastocyst is then formed and implants in the uterus. Embryogenesis continues with the next stage of gastrulation, when the three germ layers of the embryo form in a process called histogenesis, and the processes of neurulation and organogenesis follow. In comparison to the embryo, the fetus has more recognizable external features and a more complete set of developing organs. The entire process of embryogenesis involves coordinated spatial and temporal changes in gene expression, cell growth and cellular differentiation. A nearly identical process occurs in other species, especially among chordates. 
Germinal stage Fertilization Fertilization takes place when the spermatozoon has successfully entered the ovum and the two sets of genetic material carried b Document 2::: In developmental biology, animal embryonic development, also known as animal embryogenesis, is the developmental stage of an animal embryo. Embryonic development starts with the fertilization of an egg cell (ovum) by a sperm cell, (spermatozoon). Once fertilized, the ovum becomes a single diploid cell known as a zygote. The zygote undergoes mitotic divisions with no significant growth (a process known as cleavage) and cellular differentiation, leading to development of a multicellular embryo after passing through an organizational checkpoint during mid-embryogenesis. In mammals, the term refers chiefly to the early stages of prenatal development, whereas the terms fetus and fetal development describe later stages. The main stages of animal embryonic development are as follows: The zygote undergoes a series of cell divisions (called cleavage) to form a structure called a morula. The morula develops into a structure called a blastula through a process called blastulation. The blastula develops into a structure called a gastrula through a process called gastrulation. The gastrula then undergoes further development, including the formation of organs (organogenesis). The embryo then transforms into the next stage of development, the nature of which varies between different animal species (examples of possible next stages include a fetus and a larva). Fertilization and the zygote The egg cell is generally asymmetric, having an animal pole (future ectoderm). It is covered with protective envelopes, with different layers. The first envelope – the one in contact with the membrane of the egg – is made of glycoproteins and is known as the vitelline membrane (zona pellucida in mammals). Different taxa show different cellular and acellular envelopes englobing the vitelline membrane. 
Fertilization is the fusion of gametes to produce a new organism. In animals, the process involves a sperm fusing with an ovum, which eventually leads to the development of an embryo. Depen Document 3::: A conceptus (from Latin: concipere, to conceive) is an embryo and its appendages (adnexa), the associated membranes, placenta, and umbilical cord; the products of conception or, more broadly, "the product of conception at any point between fertilization and birth." The conceptus includes all structures that develop from the zygote, both embryonic and extraembryonic. It includes the embryo as well as the embryonic part of the placenta and its associated membranes: amnion, chorion (gestational sac), and yolk sac. Document 4::: Embryonated, unembryonated and de-embryonated are terms generally used in reference to eggs or, in botany, to seeds. The words are often used as professional jargon rather than as universally applicable terms or concepts. Examples of relevant fields in which the words are useful include reproductive biology, virology, microbiology, parasitology, entomology, and poultry husbandry. Since the words are widely used in the various disciplines, there seems to be little present prospect of replacing them with universal, definitive, and distinct terms. Meaning The terms embryonated, unembryonated and de-embryonated respectively mean "having an embryo", "not having an embryo", and "having lost an embryo", and they most often refer to eggs. In Merriam-Webster the earliest known use of the term "embryonated" dates from 1687, while Oxford gives a reference dating from 1669. Embryonate The term embryonate can be used as an adjective to mean embryonated, or as a noun to mean one containing an embryo (e.g. "We selected only the embryonates and discarded the rest"). Embryonate can also be used as an intransitive verb meaning to develop an embryo (e.g. "In 2-4 weeks after deposition in soil, they embryonate if the soil conditions are suitable"). 
De-embryonate De-embryonate refers to the removal of embryos from seeds or similar reproductive units, typically in physiological studies. As with embryonate, it can be a verb, noun, or adjective. In some contexts the term "embryonectomy" may be used. For example, loss of the embryo may result from the activity of seed predation by insects. Usage There often is confusion in applying the term to various classes of unfertilised eggs and trophic eggs, depending on the area of expertise. Virology In virology, eggs of domestic poultry are used for culturing viruses for research purposes. Viruses generally can propagate only in live cells, so only a fertilised egg with a good supply of growing embryonic tissue is useful. Practitioners The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What refers to the development and nourishment of an embryo within the mother’s body but not inside an egg? A. adolescence B. birth C. vivipary D. ovulation Answer:
sciq-6037
multiple_choice
Defined as a substance in foods and beverages that is essential to human survival, what term encompasses water, energy-yielding and body-building substances, and vitamins and minerals?
[ "nutrients", "calories", "molecules", "sustenance" ]
A
Relevant Documents: Document 0::: Human nutrition deals with the provision of essential nutrients in food that are necessary to support human life and good health. Poor nutrition is a chronic problem often linked to poverty, food security, or a poor understanding of nutritional requirements. Malnutrition and its consequences are large contributors to deaths, physical deformities, and disabilities worldwide. Good nutrition is necessary for children to grow physically and mentally, and for normal human biological development. Overview The human body contains chemical compounds such as water, carbohydrates, amino acids (found in proteins), fatty acids (found in lipids), and nucleic acids (DNA and RNA). These compounds are composed of elements such as carbon, hydrogen, oxygen, nitrogen, and phosphorus. Any study done to determine nutritional status must take into account the state of the body before and after experiments, as well as the chemical composition of the whole diet and of all the materials excreted and eliminated from the body (including urine and feces). Nutrients The seven major classes of nutrients are carbohydrates, fats, fiber, minerals, proteins, vitamins, and water. Nutrients can be grouped as either macronutrients or micronutrients (needed in small quantities). Carbohydrates, fats, and proteins are macronutrients, and provide energy. Water and fiber are macronutrients but do not provide energy. The micronutrients are minerals and vitamins. The macronutrients (excluding fiber and water) provide structural material (amino acids from which proteins are built, and lipids from which cell membranes and some signaling molecules are built), and energy. Some of the structural material can also be used to generate energy internally, and in either case it is measured in Joules or kilocalories (often called "Calories" and written with a capital 'C' to distinguish them from little 'c' calories).
Carbohydrates and proteins provide approximately 17 kJ (4 kcal) of energy per gram, while fats prov Document 1::: Animal nutrition focuses on the dietary nutrient needs of animals, primarily those in agriculture and food production, but also in zoos, aquariums, and wildlife management. Constituents of diet Macronutrients (excluding fiber and water) provide structural material (amino acids from which proteins are built, and lipids from which cell membranes and some signaling molecules are built) and energy. Some of the structural material can be used to generate energy internally, though the net energy depends on such factors as absorption and digestive effort, which vary substantially from instance to instance. Vitamins, minerals, fiber, and water do not provide energy, but are required for other reasons. A third class of dietary material, fiber (i.e., non-digestible material such as cellulose), seems also to be required, for both mechanical and biochemical reasons, though the exact reasons remain unclear. Molecules of carbohydrates and fats consist of carbon, hydrogen, and oxygen atoms. Carbohydrates range from simple monosaccharides (glucose, fructose, galactose) to complex polysaccharides (starch). Fats are triglycerides, made of assorted fatty acid monomers bound to a glycerol backbone. Some fatty acids, but not all, are essential in the diet: they cannot be synthesized in the body. Protein molecules contain nitrogen atoms in addition to carbon, oxygen, and hydrogen. The fundamental components of protein are nitrogen-containing amino acids. Essential amino acids cannot be made by the animal. Some of the amino acids are convertible (with the expenditure of energy) to glucose and can be used for energy production just as ordinary glucose. By breaking down existing protein, some glucose can be produced internally; the remaining amino acids are discarded, primarily as urea in urine. This occurs normally only during prolonged starvation.
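The energy-density figures above can be turned into a small worked example. This is a hypothetical sketch, not part of the source passage: the dictionary, the `energy_kj` helper, and the 37 kJ/g figure for fat (the standard Atwater-style value, which the truncated sentence presumably goes on to state) are all assumptions introduced here for illustration.

```python
# Approximate energy content of a food from its macronutrient masses.
# Atwater-style factors: carbohydrate and protein ~17 kJ/g (4 kcal/g),
# fat ~37 kJ/g (9 kcal/g). These are approximations, not exact values.
KJ_PER_GRAM = {"carbohydrate": 17.0, "protein": 17.0, "fat": 37.0}

def energy_kj(grams_by_macro):
    """Sum approximate energy (kJ) over macronutrient masses in grams."""
    return sum(KJ_PER_GRAM[macro] * g for macro, g in grams_by_macro.items())

# A portion with 50 g carbohydrate, 10 g protein, 5 g fat:
meal = {"carbohydrate": 50, "protein": 10, "fat": 5}
print(energy_kj(meal))  # 17*50 + 17*10 + 37*5 = 1205.0 kJ
```

Dividing the result by 4.184 gives a rough figure in kilocalories, matching the 4 kcal/g and 9 kcal/g conventions quoted in nutrition labels.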
Other dietary substances found in plant foods (phytochemicals, polyphenols) are not identified as essential nutrients but appear to impact healt Document 2::: Body composition may be analyzed in various ways. This can be done in terms of the chemical elements present, or by molecular structure e.g., water, protein, fats (or lipids), hydroxylapatite (in bones), carbohydrates (such as glycogen and glucose) and DNA. In terms of tissue type, the body may be analyzed into water, fat, connective tissue, muscle, bone, etc. In terms of cell type, the body contains hundreds of different types of cells, but notably, the largest number of cells contained in a human body (though not the largest mass of cells) are not human cells, but bacteria residing in the normal human gastrointestinal tract. Elements About 99% of the mass of the human body is made up of six elements: oxygen, carbon, hydrogen, nitrogen, calcium, and phosphorus. Only about 0.85% is composed of another five elements: potassium, sulfur, sodium, chlorine, and magnesium. All 11 are necessary for life. The remaining elements are trace elements, of which more than a dozen are thought on the basis of good evidence to be necessary for life. All of the mass of the trace elements put together (less than 10 grams for a human body) do not add up to the body mass of magnesium, the least common of the 11 non-trace elements. Other elements Not all elements which are found in the human body in trace quantities play a role in life. Some of these elements are thought to be simple common contaminants without function (examples: caesium, titanium), while many others are thought to be active toxins, depending on amount (cadmium, mercury, lead, radioactives). In humans, arsenic is toxic, and its levels in foods and dietary supplements are closely monitored to reduce or eliminate its intake. Some elements (silicon, boron, nickel, vanadium) are probably needed by mammals also, but in far smaller doses. 
Bromine is used abundantly by some (though not all) lower organisms, and opportunistically in eosinophils in humans. One study has indicated bromine to be necessary to collagen IV synthe Document 3::: Food biodiversity is defined as "the diversity of plants, animals and other organisms used for food, covering the genetic resources within species, between species and provided by ecosystems." Food biodiversity can be considered from two main perspectives: production and consumption. From a consumption perspective, food biodiversity describes the diversity of foods in human diets and their contribution to dietary diversity, cultural identity and good nutrition. Production of food biodiversity looks at the thousands of food products, such as fruits, nuts, vegetables, meat and condiments sourced from agriculture and from the wild (e.g. forests, uncultivated fields, water bodies). Food biodiversity covers the diversity between species, for example different animal and crop species, including those considered neglected and underutilized species. Food biodiversity also comprises the diversity within species, for example different varieties of fruit and vegetables, or different breeds of animals. Food diversity, diet diversity nutritional diversity, are also terms used in the new diet culture spawned by Brandon Eisler, in the study known as Nutritional Diversity. Consumption of food biodiversity Food biodiversity, nutrition, and health Promoting diversity of foods and species consumed in human diets in particular has potential co-benefits for public health as well as sustainable food systems perspective. Food biodiversity provides necessary nutrients for quality diets and is an essential part of local food systems, cultures and food security. Promoting diversity of foods and species consumed in human diets in particular has potential co-benefits for sustainable food systems. Nutritionally, diversity in food is associated with higher micronutrient adequacy of diets. 
On average, per additional species consumed, mean adequacy of vitamin A, vitamin C, folate, calcium, iron, and zinc increased by 3%. From a conservation point of view, diets based on a wide variety of Document 4::: A nutrient is a substance used by an organism to survive, grow, and reproduce. The requirement for dietary nutrient intake applies to animals, plants, fungi, and protists. Nutrients can be incorporated into cells for metabolic purposes or excreted by cells to create non-cellular structures, such as hair, scales, feathers, or exoskeletons. Some nutrients can be metabolically converted to smaller molecules in the process of releasing energy, such as for carbohydrates, lipids, proteins, and fermentation products (ethanol or vinegar), leading to end-products of water and carbon dioxide. All organisms require water. Essential nutrients for animals are the energy sources, some of the amino acids that are combined to create proteins, a subset of fatty acids, vitamins and certain minerals. Plants require more diverse minerals absorbed through roots, plus carbon dioxide and oxygen absorbed through leaves. Fungi live on dead or living organic matter and meet nutrient needs from their host. Different types of organisms have different essential nutrients. Ascorbic acid (vitamin C) is essential, meaning it must be consumed in sufficient amounts, to humans and some other animal species, but some animals and plants are able to synthesize it. Nutrients may be organic or inorganic: organic compounds include most compounds containing carbon, while all other chemicals are inorganic. Inorganic nutrients include nutrients such as iron, selenium, and zinc, while organic nutrients include, among many others, energy-providing compounds and vitamins. A classification used primarily to describe nutrient needs of animals divides nutrients into macronutrients and micronutrients. 
Consumed in relatively large amounts (grams or ounces), macronutrients (carbohydrates, fats, proteins, water) are primarily used to generate energy or to incorporate into tissues for growth and repair. Micronutrients are needed in smaller amounts (milligrams or micrograms); they have subtle biochemical and physiologi The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Defined as a substance in foods and beverages that is essential to human survival, what term encompasses water, energy-yielding and body-building substances, and vitamins and minerals? A. nutrients B. calories C. molecules D. sustenance Answer:
sciq-6100
multiple_choice
What process would be impossible without some variation in the inherited traits of organisms within a species?
[ "environmental selection", "natural selection", "characteristic selection", "darwin's selection" ]
B
Relevant Documents: Document 0::: Ecological inheritance occurs when organisms inhabit a modified environment that a previous generation created; it was first described in Odling-Smee (1988) and Odling-Smee et al. (1996) as a consequence of niche construction. Standard evolutionary theory focuses on the influence that natural selection and genetic inheritance have on biological evolution, when individuals that survive and reproduce also transmit genes to their offspring. If offspring do not live in a modified environment created by their parents, then niche construction activities of parents do not affect the selective pressures of their offspring (see orb-web spiders in Genetic inheritance vs. ecological inheritance below). However, when niche construction affects multiple generations (i.e., parents and offspring), ecological inheritance acts as an inheritance system different from genetic inheritance. Since ecological inheritance is a result of ecosystem engineering and niche construction, the fitness of several species and their subsequent generations experiences a selective pressure dependent on the modified environment they inherit. Organisms in subsequent generations will encounter ecological inheritance because they are affected by a new selective environment created by prior niche construction. On a macroevolutionary scale, ecological inheritance has been defined as, "the persistence of environmental modifications by a species over multiple generations to influence the evolution of that or other species." Ecological inheritance has also been defined as, "... the accumulation of environmental changes, such as altered soil, atmosphere or ocean states that previous generations have brought about through their niche-constructing activity, and that influence the development of descendant organisms."
Related to niche construction and ecological inheritance are factors and features of an organism and environment, respectively, where the feature of an organism is synonymous with adaptation if natural se Document 1::: Developmental systems theory (DST) is an overarching theoretical perspective on biological development, heredity, and evolution. It emphasizes the shared contributions of genes, environment, and epigenetic factors on developmental processes. DST, unlike conventional scientific theories, is not directly used to help make predictions for testing experimental results; instead, it is seen as a collection of philosophical, psychological, and scientific models of development and evolution. As a whole, these models argue the inadequacy of the modern evolutionary synthesis on the roles of genes and natural selection as the principal explanation of living structures. Developmental systems theory embraces a large range of positions that expand biological explanations of organismal development and hold modern evolutionary theory as a misconception of the nature of living processes. Overview All versions of developmental systems theory espouse the view that: All biological processes (including both evolution and development) operate by continually assembling new structures. Each such structure transcends the structures from which it arose and has its own systematic characteristics, information, functions and laws. Conversely, each such structure is ultimately irreducible to any lower (or higher) level of structure, and can be described and explained only on its own terms. Furthermore, the major processes through which life as a whole operates, including evolution, heredity and the development of particular organisms, can only be accounted for by incorporating many more layers of structure and process than the conventional concepts of ‘gene’ and ‘environment’ normally allow for. 
In other words, although it does not claim that all structures are equal, development systems theory is fundamentally opposed to reductionism of all kinds. In short, developmental systems theory intends to formulate a perspective which does not presume the causal (or ontological) priority of any p Document 2::: Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, the genetic variations affect the phenotypes (physical characteristics) of an organism. These changes in the phenotypes will be an advantage to some organisms, which will then be passed onto their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology. The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis. Subfields Evolution is the central unifying concept in biology. Biology can be divided into various ways. One way is by the level of biological organization, from molecular to cell, organism to population. 
Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what was once seen as the major divisions of life. A third way is by approaches, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolution Document 3::: An acquired characteristic is a non-heritable change in a function or structure of a living organism caused after birth by disease, injury, accident, deliberate modification, variation, repeated use, disuse, misuse, or other environmental influence. Acquired traits are synonymous with acquired characteristics. They are not passed on to offspring through reproduction. The changes that constitute acquired characteristics can have many manifestations and degrees of visibility, but they all have one thing in common. They change a facet of a living organism's function or structure after birth. For example: The muscles acquired by a bodybuilder through physical training and diet. The loss of a limb due to an injury. The miniaturization of bonsai plants through careful cultivation techniques. Acquired characteristics can be minor and temporary like bruises, blisters, or shaving body hair. Permanent but inconspicuous or invisible ones are corrective eye surgery and organ transplant or removal. Semi-permanent but inconspicuous or invisible traits are vaccination and laser hair removal. Perms, tattoos, scars, and amputations are semi-permanent and highly visible. Applying makeup, nail polish, dying one's hair, applying henna to the skin, and tooth whitening are not examples of acquired traits. They change the appearance of a facet of an organism, but do not change the structure or functionality. Inheritance of acquired characteristics was historically proposed by renowned theorists such as Hippocrates, Aristotle, and French naturalist Jean-Baptiste Lamarck. 
Conversely, this hypothesis was denounced by other renowned theorists such as Charles Darwin. Today, although Lamarckism is generally discredited, there is still debate on whether some acquired characteristics in organisms are actually inheritable. Disputes Acquired characteristics, by definition, are characteristics that are gained by an organism after birth as a result of external influences or the organism's ow Document 4::: In biology, evolution is the process of change in all forms of life over generations, and evolutionary biology is the study of how evolution occurs. Biological populations evolve through genetic changes that correspond to changes in the organisms' observable traits. Genetic changes include mutations, which are caused by damage or replication errors in organisms' DNA. As the genetic variation of a population drifts randomly over generations, natural selection gradually leads traits to become more or less common based on the relative reproductive success of organisms with those traits. The age of the Earth is about 4.5 billion years. The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago. Evolution does not attempt to explain the origin of life (covered instead by abiogenesis), but it does explain how early lifeforms evolved into the complex ecosystem that we see today. Based on the similarities between all present-day organisms, all life on Earth is assumed to have originated through common descent from a last universal ancestor from which all known species have diverged through the process of evolution. All individuals have hereditary material in the form of genes received from their parents, which they pass on to any offspring. Among offspring there are variations of genes due to the introduction of new genes via random changes called mutations or via reshuffling of existing genes during sexual reproduction. The offspring differs from the parent in minor random ways. 
If those differences are helpful, the offspring is more likely to survive and reproduce. This means that more offspring in the next generation will have that helpful difference and individuals will not have equal chances of reproductive success. In this way, traits that result in organisms being better adapted to their living conditions become more common in descendant populations. These differences accumulate resulting in changes within the population. This proce The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What process would be impossible without some variation in the inherited traits of organisms within a species? A. environmental selection B. natural selection C. characteristic selection D. darwin's selection Answer:
ai2_arc-757
multiple_choice
The concept of continental drift, continents drifting over the ocean floors, was proposed by Alfred Wegener in 1915. This concept was replaced by the theory of plate tectonics in the 1960s. Changes from the continental drift concept to the plate tectonics theory required which of the following?
[ "strong earthquakes occurring along fault zones", "data and ideas being shared among scientists", "a scientific law being passed by Congress", "teachers beginning to teach the newer theory" ]
B
Relevant Documents: Document 0::: The Vine–Matthews–Morley hypothesis, also known as the Morley–Vine–Matthews hypothesis, was the first key scientific test of the seafloor spreading theory of continental drift and plate tectonics. Its key impact was that it allowed the rates of plate motions at mid-ocean ridges to be computed. It states that the Earth's oceanic crust acts as a recorder of reversals in the geomagnetic field direction as seafloor spreading takes place. History Harry Hess proposed the seafloor spreading hypothesis in 1960 (published in 1962); the term "spreading of the seafloor" was introduced by geophysicist Robert S. Dietz in 1961. According to Hess, seafloor was created at mid-oceanic ridges by the convection of the earth's mantle, pushing and spreading the older crust away from the ridge. Geophysicist Frederick John Vine and the Canadian geologist Lawrence W. Morley independently realized that if Hess's seafloor spreading theory was correct, then the rocks surrounding the mid-oceanic ridges should show symmetric patterns of magnetization reversals using newly collected magnetic surveys. Both of Morley's letters to Nature (February 1963) and Journal of Geophysical Research (April 1963) were rejected, hence Vine and his PhD adviser at Cambridge University, Drummond Hoyle Matthews, were first to publish the theory in September 1963. Some colleagues were skeptical of the hypothesis because of the numerous assumptions made—seafloor spreading, geomagnetic reversals, and remanent magnetism—all hypotheses that were still not widely accepted. The Vine–Matthews–Morley hypothesis describes the magnetic reversals of oceanic crust. Further evidence for this hypothesis came from Allan V. Cox and colleagues (1964) when they measured the remanent magnetization of lavas from land sites. Walter C. Pitman and J. R. Heirtzler offered further evidence with a remarkably symmetric magnetic anomaly profile from the Pacific-Antarctic Ridge.
Marine magnetic anomalies The Vine–Matthews–Morley hypothesis Document 1::: The evolution of tectonophysics is closely linked to the history of the continental drift and plate tectonics hypotheses. The continental drift/Airy-Heiskanen isostasy hypothesis had many flaws and scarce data. The fixist/Pratt-Hayford isostasy, the contracting Earth and the expanding Earth concepts had many flaws as well. The idea of continents with a permanent location, the geosyncline theory, the Pratt-Hayford isostasy, the extrapolation of the age of the Earth by Lord Kelvin as a black body cooling down, the contracting Earth, the Earth as a solid and crystalline body, is one school of thought. A lithosphere creeping over the asthenosphere is a logical consequence of an Earth with internal heat by radioactivity decay, the Airy-Heiskanen isostasy, thrust faults and Niskanen's mantle viscosity determinations. Making sense of the puzzle pieces 1953, the Great Global Rift, running along the Mid-Atlantic Ridge, was discovered by Bruce Heezen (Lamont Group) (puzzle pieces: seismic-refraction and sonar surveys of the rifts). Their world ocean floor map was published in 1977; the Austrian painter Heinrich Berann worked on it. Nowadays the seafloor maps have better resolution thanks to the SEASAT, Geosat/ERM and ERS-1/ERM (European Remote-Sensing Satellite/Exact Repeat Mission) missions. World map of earthquake epicenters, mainly oceanic ones. 1954–1963: Alfred Rittmann was elected IAV President (IAV at that time) for three periods. 1956, S. K. Runcorn becomes a drifter. Statistics by Ronald Fisher; Jan Hospers' work (magnetic poles and geographical poles coincide over the last 23 Ma). Self-exciting dynamo theory of Elsasser-Bullard. S. W. Carey, plate tectonics; but he believed here in an Expanding Earth. 1958, Henry William Menard notes that most mid-ocean ridges are halfway between the two continental edges.
1959, analysis of Vanguard satellite orbit suggests "large-scale convection currents in the mantle" . Seafloor spreading December Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. 
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 3::: The cataclysmic pole shift hypothesis is a pseudo-scientific claim that there have been recent, geologically rapid shifts in the axis of rotation of Earth, causing calamities such as floods and tectonic events or relatively rapid climate changes. There is evidence of precession and changes in axial tilt, but this change is on much longer time-scales and does not involve relative motion of the spin axis with respect to the planet. However, in what is known as true polar wander, the Earth rotates with respect to a fixed spin axis. Research shows that during the last 200 million years a total true polar wander of some 30° has occurred, but that no rapid shifts in Earth's geographic axial pole were found during this period. A characteristic rate of true polar wander is 1° or less per million years. Between approximately 790 and 810 million years ago, when the supercontinent Rodinia existed, two geologically rapid phases of true polar wander may have occurred. In each of these, the magnetic poles of Earth shifted by approximately 55° due to a large shift in the crust. Definition and clarification The geographic poles are defined by the points on the surface of Earth that are intersected by the axis of rotation. The pole shift hypothesis describes a change in location of these poles with respect to the underlying surface – a phenomenon distinct from the changes in axial orientation with respect to the plane of the ecliptic that are caused by precession and nutation, and is an amplified event of a true polar wander. Geologically, a surface shift separate from a planetary shift, enabled by earth's molten core. 
Pole shift hypotheses are not connected with plate tectonics, the well-accepted geological theory that Earth's surface consists of solid plates which shift over a viscous, or semifluid asthenosphere; nor with continental drift, the corollary to plate tectonics which maintains that locations of the continents have moved slowly over the surface of Earth, resulting Document 4::: The evolution of tectonophysics is closely linked to the history of the continental drift and plate tectonics hypotheses. The continental drift/ Airy-Heiskanen isostasy hypothesis had many flaws and scarce data. The fixist/ Pratt-Hayford isostasy, the contracting Earth and the expanding Earth concepts had many flaws as well. The idea of continents with a permanent location, the geosyncline theory, the Pratt-Hayford isostasy, the extrapolation of the age of the Earth by Lord Kelvin as a black body cooling down, the contracting Earth, the Earth as a solid and crystalline body, is one school of thought. A lithosphere creeping over the asthenosphere is a logical consequence of an Earth with internal heat by radioactivity decay, the Airy-Heiskanen isostasy, thrust faults and Niskanen's mantle viscosity determinations. Introduction Christian creationism (Martin Luther) was popular until the 19th century, and the age of the Earth was thought to have been created circa 4,000 BC. There were stacks of calcareous rocks of maritime origin above sea level, and up and down motions were allowed (geosyncline hypothesis, James Hall and James D. Dana). Later on, the thrust fault concept appeared, and a contracting Earth (Eduard Suess, James D. Dana, Albert Heim) was its driving force. In 1862, the physicist William Thomson (who later became Lord Kelvin) calculated the age of Earth (as a cooling black body) at between 20 million and 400 million years. In 1895, John Perry produced an age of Earth estimate of 2 to 3 billion years old using a model of a convective mantle and thin crust. 
Finally, Arthur Holmes published The Age of the Earth, an Introduction to Geological Ideas in 1927, in which he presented a range of 1.6 to 3.0 billion years. Wegener had data supporting the assumption that the relative positions of the continents change over time. It was a mistake to state that the continents "plowed" through the sea, although it is not certain that this quote appears in the original German. He The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The concept of continental drift, continents drifting over the ocean floors, was proposed by Alfred Wegener in 1915. This concept was replaced by the theory of plate tectonics in the 1960s. Changes from the continental drift concept to the plate tectonics theory required which of the following? A. strong earthquakes occurring along fault zones B. data and ideas being shared among scientists C. a scientific law being passed by Congress D. teachers beginning to teach the newer theory Answer:
sciq-4488
multiple_choice
What energy is the energy of motion?
[ "thermal", "optimal", "kinetic", "potential" ]
C
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. 
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: This is a list of topics that are included in high school physics curricula or textbooks. Mathematical Background SI Units Scalar (physics) Euclidean vector Motion graphs and derivatives Pythagorean theorem Trigonometry Motion and forces Motion Force Linear motion Linear motion Displacement Speed Velocity Acceleration Center of mass Mass Momentum Newton's laws of motion Work (physics) Free body diagram Rotational motion Angular momentum (Introduction) Angular velocity Centrifugal force Centripetal force Circular motion Tangential velocity Torque Conservation of energy and momentum Energy Conservation of energy Elastic collision Inelastic collision Inertia Moment of inertia Momentum Kinetic energy Potential energy Rotational energy Electricity and magnetism Ampère's circuital law Capacitor Coulomb's law Diode Direct current Electric charge Electric current Alternating current Electric field Electric potential energy Electron Faraday's law of induction Ion Inductor Joule heating Lenz's law Magnetic field Ohm's law Resistor Transistor Transformer Voltage Heat Entropy First law of thermodynamics Heat Heat transfer Second law of thermodynamics Temperature Thermal energy Thermodynamic cycle Volume (thermodynamics) Work (thermodynamics) Waves Wave Longitudinal wave Transverse waves Transverse wave Standing Waves Wavelength Frequency Light Light ray Speed of light Sound Speed of sound Radio waves Harmonic oscillator Hooke's law Reflection Refraction Snell's law Refractive index Total internal reflection Diffraction Interference (wave propagation) Polarization (waves) Vibrating string Doppler effect Gravity Gravitational potential Newton's law of universal gravitation Newtonian constant of gravitation See also Outline of physics Physics education Document 2::: Advanced Placement (AP) 
Physics C: Mechanics (also known as AP Mechanics) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a one-semester calculus-based university course in mechanics. The content of Physics C: Mechanics overlaps with that of AP Physics 1, but Physics 1 is algebra-based, while Physics C is calculus-based. Physics C: Mechanics may be combined with its electricity and magnetism counterpart to form a year-long course that prepares for both exams. Course content Intended to be equivalent to an introductory college course in mechanics for physics or engineering majors, the course modules are: Kinematics Newton's laws of motion Work, energy and power Systems of particles and linear momentum Circular motion and rotation Oscillations and gravitation. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a Calculus I class. This course is often compared to AP Physics 1: Algebra Based for its similar course material involving kinematics, work, motion, forces, rotation, and oscillations. However, AP Physics 1: Algebra Based lacks concepts found in Calculus I, like derivatives or integrals. This course may be combined with AP Physics C: Electricity and Magnetism to make a unified Physics C course that prepares for both exams. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Mechanics is separate from the AP examination for AP Physics C: Electricity and Magnetism. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. 
Format The exam is typically administered on a Monday aftern Document 3::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. 
This and AP Physics C: Mechanics are the shortest AP exams, with Document 4::: Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work. Description Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments. The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics. 
Specialized branches include engineering optimization and engineering statistics. Engineering mathematics in tertiary educ The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What energy is the energy of motion? A. thermal B. optimal C. kinetic D. potential Answer:
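The kinetic-energy record above can be illustrated with a minimal worked example. The formula KE = ½·m·v² is the standard definition of the energy of motion; the masses and speeds below are arbitrary illustrative values:

```python
def kinetic_energy(mass_kg, speed_m_s):
    """Kinetic energy of a body in joules: KE = 1/2 * m * v**2."""
    return 0.5 * mass_kg * speed_m_s ** 2

# A 2 kg ball moving at 3 m/s carries 0.5 * 2 * 9 = 9 J of kinetic energy.
print(kinetic_energy(2.0, 3.0))  # → 9.0
```

Note that kinetic energy grows with the square of speed, which is why doubling the speed quadruples the energy.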
ai2_arc-525
multiple_choice
A student investigates how speed changes as a ball travels down a ramp. Measurements taken by computer every second are recorded on a data table. Which diagram will best display the data from this table?
[ "a bar graph", "a line graph", "a pie chart", "a pictograph" ]
B
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. 
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: A chart (sometimes known as a graph) is a graphical representation for data visualization, in which "the data is represented by symbols, such as bars in a bar chart, lines in a line chart, or slices in a pie chart". A chart can represent tabular numeric data, functions or some kinds of quality structure and provides different info. The term "chart" as a graphical representation of data has multiple meanings: A data chart is a type of diagram or graph, that organizes and represents a set of numerical or qualitative data. Maps that are adorned with extra information (map surround) for a specific purpose are often known as charts, such as a nautical chart or aeronautical chart, typically spread over several map sheets. Other domain-specific constructs are sometimes called charts, such as the chord chart in music notation or a record chart for album popularity. Charts are often used to ease understanding of large quantities of data and the relationships between parts of the data. Charts can usually be read more quickly than the raw data. They are used in a wide variety of fields, and can be created by hand (often on graph paper) or by computer using a charting application. Certain types of charts are more useful for presenting a given data set than others. For example, data that presents percentages in different groups (such as "satisfied, not satisfied, unsure") are often displayed in a pie chart, but maybe more easily understood when presented in a horizontal bar chart. On the other hand, data that represents numbers that change over a period of time (such as "annual revenue from 1990 to 2000") might be best shown as a line chart1 Features A chart can take a large variety of forms. 
However, there are common features that provide the chart with its ability to extract meaning from data. Typically the data in a chart is represented graphically since humans can infer meaning from pictures more quickly than from text. Thus, the text is generally used only to annota Document 2::: This is a list of graphical methods with a mathematical basis. Included are diagram techniques, chart techniques, plot techniques, and other forms of visualization. There is also a list of computer graphics and descriptive geometry topics. Simple displays Area chart Box plot Dispersion fan diagram Graph of a function Logarithmic graph paper Heatmap Bar chart Histogram Line chart Pie chart Plotting Scatterplot Sparkline Stemplot Radar chart Set theory Venn diagram Karnaugh diagram Descriptive geometry Isometric projection Orthographic projection Perspective (graphical) Engineering drawing Technical drawing Graphical projection Mohr's circle Pantograph Circuit diagram Smith chart Sankey diagram Systems analysis Binary decision diagram Control-flow graph Functional flow block diagram Information flow diagram IDEF N2 chart Sankey diagram State diagram System context diagram Data-flow diagram Cartography Map projection Orthographic projection (cartography) Robinson projection Stereographic projection Dymaxion map Topographic map Craig retroazimuthal projection Hammer retroazimuthal projection Biological sciences Cladogram Punnett square Systems Biology Graphical Notation Physical sciences Free body diagram Greninger chart Phase diagram Wavenumber-frequency diagram Bode plot Nyquist plot Dalitz plot Feynman diagram Carnot Plot Business methods Flowchart Workflow Gantt chart Growth-share matrix (often called BCG chart) Work breakdown structure Control chart Ishikawa diagram Pareto chart (often used to prioritise outputs of an Ishikawa diagram) Conceptual analysis Mind mapping Concept mapping Conceptual graph Entity-relationship diagram Tag cloud, also known as word cloud 
Statistics Autocorrelation plot Bar chart Biplot Box plot Bullet graph Chernoff faces Control chart Fan chart Forest plot Funnel plot Galbraith plot Histogram Mosaic plot Multidimensional scaling np-chart p-chart Pie chart Probability plot Normal probability plot Poincaré plot Probability plot Document 3::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. 
The goal of knowledge space theory is to provide a language by which exams can communicate what the student can do and what the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
In 1912, Schürch published what could be considered Gantt charts while discussing a construction project. Charts of the type published by Schürch appear to have been in common use in Germany at the time; however, the prior development leading to Schürch's work is unclear. Unlike later Gantt charts, Schürch's charts did not display interdependencies, leaving them to be inferred by the reader. These w The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. A student investigates how speed changes as a ball travels down a ramp. Measurements taken by computer every second are recorded on a data table. Which diagram will best display the data from this table? A. a bar graph B. a line graph C. a pie chart D. a pictograph Answer:
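The knowledge-space excerpt in this record describes families of feasible knowledge states. A defining property of a knowledge space is closure under union: if two knowledge states are each feasible, so is their union. This can be checked mechanically; the sketch below is illustrative, with a toy domain of arithmetic skills rather than any real curriculum:

```python
from itertools import combinations

def is_union_closed(states):
    """Check the closure property of a knowledge space: the union of
    any two feasible knowledge states must itself be feasible."""
    family = {frozenset(s) for s in states}
    return all(a | b in family for a, b in combinations(family, 2))

# Toy domain {add, sub, mul}: a learner may know nothing, addition alone,
# addition + subtraction, or all three -- a chain, hence union-closed.
states = [set(), {"add"}, {"add", "sub"}, {"add", "sub", "mul"}]
print(is_union_closed(states))  # → True
```

A family such as `[set(), {"add"}, {"sub"}]` would fail the check, since the union `{"add", "sub"}` is not itself listed as a feasible state.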
sciq-5285
multiple_choice
Amino acids are joined together to form a chain at what molecular structure?
[ "ribosomes", "chloroplasts", "chromosomes", "DNA" ]
A
Relavent Documents: Document 0::: Biomolecular structure is the intricate folded, three-dimensional shape that is formed by a molecule of protein, DNA, or RNA, and that is important to its function. The structure of these molecules may be considered at any of several length scales ranging from the level of individual atoms to the relationships among entire protein subunits. This useful distinction among scales is often expressed as a decomposition of molecular structure into four levels: primary, secondary, tertiary, and quaternary. The scaffold for this multiscale organization of the molecule arises at the secondary level, where the fundamental structural elements are the molecule's various hydrogen bonds. This leads to several recognizable domains of protein structure and nucleic acid structure, including such secondary-structure features as alpha helixes and beta sheets for proteins, and hairpin loops, bulges, and internal loops for nucleic acids. The terms primary, secondary, tertiary, and quaternary structure were introduced by Kaj Ulrik Linderstrøm-Lang in his 1951 Lane Medical Lectures at Stanford University. Primary structure The primary structure of a biopolymer is the exact specification of its atomic composition and the chemical bonds connecting those atoms (including stereochemistry). For a typical unbranched, un-crosslinked biopolymer (such as a molecule of a typical intracellular protein, or of DNA or RNA), the primary structure is equivalent to specifying the sequence of its monomeric subunits, such as amino acids or nucleotides. The primary structure of a protein is reported starting from the amino N-terminus to the carboxyl C-terminus, while the primary structure of DNA or RNA molecule is known as the nucleic acid sequence reported from the 5' end to the 3' end. The nucleic acid sequence refers to the exact sequence of nucleotides that comprise the whole molecule. 
Often, the primary structure encodes sequence motifs that are of functional importance. Some examples of such motif Document 1::: This is a list of topics in molecular biology. See also index of biochemistry articles. Document 2::: A nucleic acid sequence is a succession of bases within the nucleotides forming alleles within a DNA (using GACT) or RNA (GACU) molecule. This succession is denoted by a series of a set of five different letters that indicate the order of the nucleotides. By convention, sequences are usually presented from the 5' end to the 3' end. For DNA, with its double helix, there are two possible directions for the notated sequence; of these two, the sense strand is used. Because nucleic acids are normally linear (unbranched) polymers, specifying the sequence is equivalent to defining the covalent structure of the entire molecule. For this reason, the nucleic acid sequence is also termed the primary structure. The sequence represents biological information. Biological deoxyribonucleic acid represents the information which directs the functions of an organism. Nucleic acids also have a secondary structure and tertiary structure. Primary structure is sometimes mistakenly referred to as "primary sequence". However there is no parallel concept of secondary or tertiary sequence. Nucleotides Nucleic acids consist of a chain of linked units called nucleotides. Each nucleotide consists of three subunits: a phosphate group and a sugar (ribose in the case of RNA, deoxyribose in DNA) make up the backbone of the nucleic acid strand, and attached to the sugar is one of a set of nucleobases. The nucleobases are important in base pairing of strands to form higher-level secondary and tertiary structures such as the famed double helix. The possible letters are A, C, G, and T, representing the four nucleotide bases of a DNA strand – adenine, cytosine, guanine, thymine – covalently linked to a phosphodiester backbone. 
In the typical case, the sequences are printed abutting one another without gaps, as in the sequence AAAGTCTGAC, read left to right in the 5' to 3' direction. With regards to transcription, a sequence is on the coding strand if it has the same order as the transcribed RNA. Document 3::: Topoisomers or topological isomers are molecules with the same chemical formula and stereochemical bond connectivities but different topologies. Examples of molecules for which there exist topoisomers include DNA, which can form knots, and catenanes. Each topoisomer of a given DNA molecule possesses a different linking number associated with it. DNA topoisomers can be interchanged by enzymes called topoisomerases. Using a topoisomerase along with an intercalator, topoisomers with different linking number may be separated on an agarose gel via gel electrophoresis. See also Mechanically-interlocked molecular architectures Catenane Rotaxanes Molecular knot Molecular Borromean rings Document 4::: The sequence hypothesis was first formally proposed in the review "On Protein Synthesis" by Francis Crick in 1958. It states that the sequence of bases in the genetic material (DNA or RNA) determines the sequence of amino acids for which that segment of nucleic acid codes, and this amino acid sequence determines the three-dimensional structure into which the protein folds. The three-dimensional structure of a protein is required for a protein to be functional. This hypothesis then lays the essential link between information stored and inherited in nucleic acids to the chemical processes which enable life to exist. Or, as Crick put it in 1958: This description is further amplified in the article and, in discussing how a protein folds up into its three-dimensional structure, Crick suggested that "the folding is simply a function of the order of the amino acids" in the protein. 
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Amino acids are joined together to form a chain at what molecular structure? A. ribosomes B. chloroplasts C. chromosomes D. DNA Answer:
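The sequence-hypothesis excerpt above (Document 4) states that the order of bases in a nucleic acid determines the order of amino acids in the protein assembled at the ribosome. That mapping can be sketched as a lookup over codon triplets; the table below is a deliberately tiny four-entry fragment for illustration, not the full 64-codon genetic code:

```python
# Partial codon table (illustrative subset of the standard genetic code).
CODON_TABLE = {"ATG": "Met", "AAA": "Lys", "GAT": "Asp", "TAA": "STOP"}

def translate(dna):
    """Read a coding-strand DNA sequence 5'->3' in codon triplets and
    return the encoded amino-acid chain, stopping at a stop codon."""
    chain = []
    for i in range(0, len(dna) - 2, 3):
        residue = CODON_TABLE[dna[i:i + 3]]
        if residue == "STOP":
            break
        chain.append(residue)
    return chain

print(translate("ATGAAAGATTAA"))  # → ['Met', 'Lys', 'Asp']
```

This is exactly the sense in which the primary structure of DNA encodes the primary structure of a protein: change one base and, in general, a different residue is appended to the chain.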
sciq-9066
multiple_choice
In arthropods, coxal glands and Malpighian tubules perform what role?
[ "excretion", "ingestion", "photosynthesis", "reproduction" ]
A
Relavent Documents: Document 0::: Bacillary band is a specialized row of longitudinal cells of some nematodes (Trichuris and Capillaria), consisting of glandular and nonglandular cells, formed by the hypodermis. The glandular cells opens up to the exterior through cuticular pores. The function of bacillary bands is unknown, their ultrastructure suggests that the gland cells may have a role in osmotic or ion regulation, and the nongland cells may function in cuticle formation and food storage. Document 1::: In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from same type cells to act together in a function. Tissues of different types combine to form an organ which has a specific function. The intestinal wall for example is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system. An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs. The number of organs in any organism depends on the definition used. 
By one widely adopted definition, 79 organs have been identified in the human body. Animals Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. The sam Document 2::: Arthropods are covered with a tough, resilient integument or exoskeleton of chitin. Generally the exoskeleton will have thickened areas in which the chitin is reinforced or stiffened by materials such as minerals or hardened proteins. This happens in parts of the body where there is a need for rigidity or elasticity. Typically the mineral crystals, mainly calcium carbonate, are deposited among the chitin and protein molecules in a process called biomineralization. The crystals and fibres interpenetrate and reinforce each other, the minerals supplying the hardness and resistance to compression, while the chitin supplies the tensile strength. Biomineralization occurs mainly in crustaceans. In insects and arachnids, the main reinforcing materials are various proteins hardened by linking the fibres in processes called sclerotisation and the hardened proteins are called sclerotin. The dorsal tergum, ventral sternum, and the lateral pleura form the hardened plates or sclerites of a typical body segment. In either case, in contrast to the carapace of a tortoise or the cranium of a vertebrate, the exoskeleton has little ability to grow or change its form once it has matured. Except in special cases, whenever the animal needs to grow, it moults, shedding the old skin after growing a new skin from beneath. 
Microscopic structure A typical arthropod exoskeleton is a multi-layered structure with four functional regions: epicuticle, procuticle, epidermis and basement membrane. Of these, the epicuticle is a multi-layered external barrier that, especially in terrestrial arthropods, acts as a barrier against desiccation. The strength of the exoskeleton is provided by the underlying procuticle, which is in turn secreted by the epidermis. Arthropod cuticle is a biological composite material, consisting of two main portions: fibrous chains of alpha-chitin within a matrix of silk-like and globular proteins, of which the best-known is the rubbery protein called resilin. The rel Document 3::: The protocerebrum is the first segment of the panarthropod brain. Recent studies suggest that it comprises two regions. Region associated with the expression of six3 six3 is a transcription factor that marks the anteriormost part of the developing body in a whole host of Metazoa. In the panarthropod brain, the anteriormost (rostralmost) part of the germband expresses six3. This region is described as medial, and corresponds to the annelid prostomium. In arthropods, it contains the pars intercerebralis and pars lateralis. six3 is associated with the euarthropod labrum and the onychophoran frontal appendages (antennae). Region associated with the expression of orthodenticle The other region expresses homologues of orthodenticle, Otx or otd. This region is more caudal and lateral, and bears the eyes. Orthodenticle is associated with the protocerebral bridge, part of the central complex, traditionally a marker of the prosocerebrum. In the annelid brain, Otx expression characterises the peristomium, but also creeps forwards into the regions of the prostomium that bear the larval eyes. Names of regions Inconsistent use of the terms archicerebrum and the prosocerebrum makes them confusing. 
The regions were defined by Siewing (1963): the archicerebrum as containing the ocular lobes and the mushroom bodies (= corpora pedunculata), and the prosocerebrum as comprising the central complex. The archicerebrum has traditionally been equated with the anteriormost, 'non-segmental' part of the protocerebrum, equivalent to the acron in older terminology. The prosocerebrum is then equivalent to the 'segmental' part of the protocerebrum, bordered by segment polarity genes such as engrailed, and (on one interpretation) bearing modified segmental appendages (= camera-type eyes). But Urbach and Technau (2003) complicate the matter by seeing the prosocerebrum (central complex) + labrum as the anteriormost region. Strausfeld (2016) identifies the anteriormost part of the b Document 4::: The epipharyngeal groove is a ciliated groove along the dorsal side of the inside of the pharynx in some plankton-feeding early chordates, such as Amphioxus. It helps to carry a stream of mucus, with plankton stuck in it, through the pharynx into the gut to be digested. The subnotochordal rod or hypochord is a transient structure that appears ventral to the notochord in the heads of embryos of some vertebrates. Its appearance is stimulated by a chemical secreted by the notochord. The subnotochordal rod helps to stimulate development of the dorsal aorta. There is an opinion that these two structures are homologous. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. In arthropods, coxal glands and Malpighian tubules perform what role? A. excretion B. ingestion C. photosynthesis D. reproduction Answer:
ai2_arc-164
multiple_choice
Compared to the Sun, a red star most likely has a greater
[ "volume.", "rate of rotation.", "surface temperature.", "number of orbiting planets." ]
A
Relevant Documents: Document 0::: The theorized habitability of red dwarf systems is determined by a large number of factors. Modern evidence indicates that planets in red dwarf systems are unlikely to be habitable, due to their low stellar flux, high probability of tidal locking and thus likely lack of magnetospheres and atmospheres, small circumstellar habitable zones and the high stellar variation experienced by planets of red dwarf stars, impeding their planetary habitability. However, the ubiquity and longevity of red dwarfs could provide ample opportunity to realize any small possibility of habitability. A major impediment to life developing in these systems is the intense tidal heating caused by the proximity of planets to their host red dwarfs. Other tidal effects reduce the probability of life around red dwarfs, such as the extreme temperature differences created by one side of habitable-zone planets permanently facing the star and the other perpetually turned away, and the lack of planetary axial tilts. Still, a planetary atmosphere may redistribute the heat, making temperatures more uniform. Non-tidal factors further reduce the prospects for life in red-dwarf systems, such as extreme stellar variation, spectral energy distributions shifted to the infrared relative to the Sun (though a planetary magnetic field could protect from these flares), and small circumstellar habitable zones due to low light output. There are, however, a few factors that could increase the likelihood of life on red dwarf planets. Intense cloud formation on the star-facing side of a tidally locked planet may reduce overall thermal flux and drastically reduce equilibrium temperature differences between the two sides of the planet. In addition, the sheer number of red dwarfs statistically increases the probability that there might exist habitable planets orbiting some of them.
Red dwarfs account for about 85% of stars in the Milky Way and the vast majority of stars in spiral and elliptical galaxies. There are expected t Document 1::: In astrophysics, the mass–luminosity relation is an equation giving the relationship between a star's mass and its luminosity, first noted by Jakob Karl Ernst Halm. The relationship is represented by the equation: L/L⊙ = (M/M⊙)^a, where L⊙ and M⊙ are the luminosity and mass of the Sun and 1 < a < 6. The value a = 3.5 is commonly used for main-sequence stars. This equation and the usual value of a = 3.5 apply only to main-sequence stars of intermediate mass and do not apply to red giants or white dwarfs. As a star approaches the Eddington luminosity, a = 1. In summary, the relations for stars with different ranges of mass are, to a good approximation, as the following: L/L⊙ ≈ 0.23(M/M⊙)^2.3 for M < 0.43M⊙; L/L⊙ = (M/M⊙)^4 for 0.43M⊙ < M < 2M⊙; L/L⊙ ≈ 1.4(M/M⊙)^3.5 for 2M⊙ < M < 55M⊙; and L/L⊙ ≈ 32000(M/M⊙) for M > 55M⊙. For stars with masses less than 0.43M⊙, convection is the sole energy transport process, so the relation changes significantly. For stars with masses M > 55M⊙ the relationship flattens out and becomes L ∝ M, but in fact those stars don't last because they are unstable and quickly lose matter by intense solar winds. It can be shown this change is due to an increase in radiation pressure in massive stars. These equations are determined empirically by determining the mass of stars in binary systems to which the distance is known via standard parallax measurements or other techniques. After enough stars are plotted, they will form a line on a logarithmic plot, and the slope of the line gives the proper value of a. Another form, valid for K-type main-sequence stars, that avoids the discontinuity in the exponent has been given by Cuntz & Wang; it reads: with (M in M⊙). This relation is based on data by Mann and collaborators, who used moderate-resolution spectra of nearby late-K and M dwarfs with known parallaxes and interferometrically determined radii to refine their effective temperatures and luminosities.
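The mass–luminosity power law described in Document 1 can be sketched in a few lines of code. This is an illustrative sketch, not part of the source passage; the function name and sample masses are my own, and the single exponent a = 3.5 is the commonly quoted main-sequence approximation rather than a fit to any dataset.

```python
# Sketch of the main-sequence mass-luminosity relation
# L/Lsun = (M/Msun)^a, with the commonly used exponent a = 3.5.
# This single power law is a rough approximation valid only for
# intermediate-mass main-sequence stars, as the passage notes.

def luminosity_ratio(mass_ratio: float, a: float = 3.5) -> float:
    """Estimate L/Lsun from M/Msun using a single power law."""
    return mass_ratio ** a

# A 0.2-solar-mass red dwarf is far less luminous than the Sun:
red_dwarf = luminosity_ratio(0.2)  # roughly 0.004 Lsun
sun = luminosity_ratio(1.0)        # exactly 1.0 by construction
```

On a log-log plot, luminosity versus mass under this relation is a straight line of slope a, which is how the exponent is read off from binary-star data as the passage describes.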
Those stars have also been used as a calibration sample for Kepler candidate objects. Besides avoiding the discontinuity in the exponent at M = 0.43M⊙, the relation also recovers a = 4.0 for M ≃ 0.85M⊙. The mass/lu Document 2::: Red supergiants (RSGs) are stars with a supergiant luminosity class (Yerkes class I) of spectral type K or M. They are the largest stars in the universe in terms of volume, although they are not the most massive or luminous. Betelgeuse and Antares A are the brightest and best known red supergiants (RSGs), indeed the only first magnitude red supergiant stars. Classification Stars are classified as supergiants on the basis of their spectral luminosity class. This system uses certain diagnostic spectral lines to estimate the surface gravity of a star, hence determining its size relative to its mass. Larger stars are more luminous at a given temperature and can now be grouped into bands of differing luminosity. The luminosity differences between stars are most apparent at low temperatures, where giant stars are much brighter than main-sequence stars. Supergiants have the lowest surface gravities and hence are the largest and brightest at a particular temperature. The Yerkes or Morgan-Keenan (MK) classification system is almost universal. It groups stars into five main luminosity groups designated by roman numerals: I supergiant; II bright giant; III giant; IV subgiant; V dwarf (main sequence). Specific to supergiants, the luminosity class is further divided into normal supergiants of class Ib and brightest supergiants of class Ia. The intermediate class Iab is also used. Exceptionally bright, low surface gravity, stars with strong indications of mass loss may be designated by luminosity class 0 (zero) although this is rarely seen. More often the designation Ia-0 will be used, and more commonly still Ia+. 
These hypergiant spectral classifications are very rarely applied to red supergiants, although the term red hypergiant is sometimes used for the most extended and unstable red supergiants like VY Canis Majoris and NML Cygni. The "red" part of "red supergiant" refers to the cool temperature. Red supergiants are the coolest supergiants, M-type, and at le Document 3::: A color–color diagram is a means of comparing the colors of an astronomical object at different wavelengths. Astronomers typically observe at narrow bands around certain wavelengths, and objects observed will have different brightnesses in each band. The difference in brightness between two bands is referred to as color. On color–color diagrams, the color defined by two wavelength bands is plotted on the horizontal axis, and the color defined by another brightness difference will be plotted on the vertical axis. Background Although stars are not perfect blackbodies, to first order the spectra of light emitted by stars conforms closely to a black-body radiation curve, also referred to sometimes as a thermal radiation curve. The overall shape of a black-body curve is uniquely determined by its temperature, and the wavelength of peak intensity is inversely proportional to temperature, a relation known as Wien's Displacement Law. Thus, observation of a stellar spectrum allows determination of its effective temperature. Obtaining complete spectra for stars through spectrometry is much more involved than simple photometry in a few bands. Thus by comparing the magnitude of the star in multiple different color indices, the effective temperature of the star can still be determined, as magnitude differences between each color will be unique for that temperature. As such, color-color diagrams can be used as a means of representing the stellar population, much like a Hertzsprung–Russell diagram, and stars of different spectral classes will inhabit different parts of the diagram. 
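Document 3's point that peak emission wavelength is inversely proportional to temperature (Wien's displacement law) can be illustrated numerically. This is a hedged sketch, not from the source: the function name is mine, and the temperatures are round illustrative values for a Sun-like star and a cool red star.

```python
# Wien's displacement law: lambda_peak = b / T, where
# b ~ 2.898e-3 m*K is Wien's displacement constant.
# Cooler stars therefore peak at longer (redder) wavelengths.

WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_nm(temperature_k: float) -> float:
    """Peak blackbody emission wavelength in nanometres at T kelvin."""
    return WIEN_B / temperature_k * 1e9

# A Sun-like star (~5800 K) peaks near 500 nm (visible),
# while a cool red star (~3000 K) peaks near 966 nm (infrared).
```

This is why a red star's color alone constrains its effective temperature, which is the basis of the color–color diagrams the passage describes.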
This feature leads to applications within various wavelength bands. In the stellar locus, stars tend to align in a more or less straight feature. If stars were perfect black bodies, the stellar locus would indeed be a pure straight line. The deviations from the straight line are due to the absorption and emission lines in the stellar spectra. These deviations can be more or less evident depending
Many of the coolest, lowest mass M dwarfs are expected to be brown dwarfs, not true stars, and so those would be excluded from any definition of red dwarf. Stellar models indicate that red dwarfs less than are fully convective. Hence, the helium produced by the thermonuclear fusion of hydrogen is constantly remixed throughout the star, avoiding helium buildup at the core, thereby prolonging the period of fusion. Low-mass red dwarfs therefore develop very slowly, maintaining a constant luminosity and spectral type for trillions of years, The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Compared to the Sun, a red star most likely has a greater A. volume. B. rate of rotation. C. surface temperature. D. number of orbiting planets. Answer:
sciq-15
multiple_choice
Boron only occurs naturally in compounds with what element?
[ "oxygen", "helium", "carbon", "nitrogen" ]
A
Relevant Documents: Document 0::: Metals, and specifically rare-earth elements, form numerous chemical complexes with boron. Their crystal structure and chemical bonding depend strongly on the metal element M and on its atomic ratio to boron. When the B/M ratio exceeds 12, boron atoms form B12 icosahedra which are linked into a three-dimensional boron framework, and the metal atoms reside in the voids of this framework. Those icosahedra are basic structural units of most allotropes of boron and boron-rich rare-earth borides. In such borides, metal atoms donate electrons to the boron polyhedra, and thus these compounds are regarded as electron-deficient solids. The crystal structures of many boron-rich borides can be attributed to certain types including MgAlB14, YB66, REB41Si1.2, B4C and other, more complex types such as RExB12C0.33Si3.0. Some of these formulas, for example B4C, YB66 and MgAlB14, historically reflect the idealistic structures, whereas the experimentally determined composition is nonstoichiometric and corresponds to fractional indexes. Boron-rich borides are usually characterized by large and complex unit cells, which can contain more than 1500 atomic sites and feature extended structures shaped as "tubes" and large modular polyhedra ("superpolyhedra"). Many of those sites have partial occupancy, meaning that the probability to find them occupied with a certain atom is smaller than one and thus that only some of them are filled with atoms. Scandium is distinguished among the rare-earth elements in that it forms numerous borides with uncommon structure types; this property of scandium is attributed to its relatively small atomic and ionic radii. Crystals of the specific rare-earth boride YB66 are used as X-ray monochromators for selecting X-rays with certain energies (in the 1–2 keV range) out of synchrotron radiation.
Other rare-earth borides may find application as thermoelectric materials, owing to their low thermal conductivity; the latter originates from their complex, "amorphous-l Document 1::: A borate is any of a range of boron oxyanions, anions containing boron and oxygen, such as orthoborate , metaborate , or tetraborate ; or any salt of such anions, such as sodium metaborate, and borax . The name also refers to esters of such anions, such as trimethyl borate . Natural occurrence Borate ions occur, alone or with other anions, in many borate and borosilicate minerals such as borax, boracite, ulexite (boronatrocalcite) and colemanite. Borates also occur in seawater, where they make an important contribution to the absorption of low frequency sound in seawater. Borates also occur in plants, including almost all fruits. Anions The main borate anions are: tetrahydroxyborate , found in sodium tetrahydroxyborate . orthoborate , found in trisodium orthoborate , found in the calcium yttrium borosilicate oxyapatite perborate , as in sodium perborate metaborate or its cyclic trimer , found in sodium metaborate diborate , found in magnesium diborate (suanite), triborate , found in calcium aluminium triborate (johachidolite), tetraborate , found in anhydrous borax tetrahydroxytetraborate , found in borax "decahydrate" tetraborate(6-) found in lithium tetraborate(6-) pentaborate or , found in sodium pentaborate octaborate found in disodium octaborate Preparation In 1905, Burgess and Holt observed that fusing mixtures of boric oxide and sodium carbonate yielded on cooling two crystalline compounds with definite compositions, consistent with anhydrous borax (which can be written ) and sodium octaborate (which can be written ). Document 2::: Borirane is a heterocyclic organic compound with the formula C2H4BH. This colourless, flammable gas is the simplest borirane, a three-membered ring consisting of two carbon and one boron atom. 
It can be viewed as a structural analog of aziridine, with boron replacing the nitrogen atom of aziridine. Borirane is isomeric with ethylideneborane. This compound has five isomers. Document 3::: Zinc borate is an inorganic compound, a borate of zinc. It is a white crystalline or amorphous powder insoluble in water. Its toxicity is low. Its melting point is 980 °C. Variants Several variants of zinc borate exist, differing by the zinc/boron ratio and the water content: Zinc borate Firebrake ZB (2ZnO·3 B2O3·3.5H2O), CAS number 138265-88-0 Zinc borate Firebrake 500 (2ZnO·3 B2O3), CAS number 12767-90-7 Zinc borate Firebrake 415 (4ZnO·B2O3·H2O), CAS number 149749-62-2 ZB-467 (4ZnO·6B2O3·7H2O), CAS number 1332-07-6 ZB-223 (2ZnO·2B2O3·3H2O), CAS number 1332-07-6 The hydrated variants lose water between 290–415 °C. Uses Zinc borate is primarily used as a flame retardant in plastics and cellulose fibers, paper, rubbers and textiles. It is also used in paints, adhesives, and pigments. As a flame retardant, it can replace antimony(III) oxide as a synergist in both halogen-based and halogen-free systems. It is an anti-dripping and char-promoting agent, and suppresses the afterglow. In electrical insulator plastics it suppresses arcing and tracking. In halogen-containing systems, zinc borate is used together with antimony trioxide and alumina trihydrate. It catalyzes formation of char and creates a protective layer of glass. Zinc catalyzes the release of halogens by forming zinc halides and zinc oxyhalides. In halogen-free system, zinc borate can be used together with alumina trihydrate, magnesium hydroxide, red phosphorus, or ammonium polyphosphate. During burning the plastics, a porous borate ceramics is formed that protects the underlying layers. In presence of silica, borosilicate glass can be formed at plastic burning temperatures. 
Document 4::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. 
Other methods include The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Boron only occurs naturally in compounds with what element? A. oxygen B. helium C. carbon D. nitrogen Answer:
scienceQA-1595
multiple_choice
What do these two changes have in common? crushing a mineral into powder mixing lettuce and salad dressing
[ "Both are only physical changes.", "Both are caused by heating.", "Both are chemical changes.", "Both are caused by cooling." ]
A
Step 1: Think about each change. Crushing a mineral into powder is a physical change. The mineral breaks into tiny pieces. But it is still made of the same type of matter. Mixing lettuce and salad dressing is a physical change. Together, the lettuce and dressing make a mixture. But making this mixture does not form a different type of matter. Step 2: Look at each answer choice. Both are only physical changes. Both changes are physical changes. No new matter is created. Both are chemical changes. Both changes are physical changes. They are not chemical changes. Both are caused by heating. Neither change is caused by heating. Both are caused by cooling. Neither change is caused by cooling.
Relevant Documents: Document 0::: Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but cannot usually be used to separate compounds into chemical elements or simpler compounds. Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change, in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general, a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate. A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density. An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered, which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge. Many physical changes also involve the rearrangement of atoms, most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change. Examples Heating and cooling Many elements and some compounds change from solids to liquids and from liquids to gases when heated, and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism Ferromagnetic materials can become magnetic. The process is reve Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: In physics, a dynamical system is said to be mixing if the phase space of the system becomes strongly intertwined, according to at least one of several mathematical definitions. For example, a measure-preserving transformation T is said to be strong mixing if lim_{n→∞} μ(T^n A ∩ B) = μ(A)μ(B) whenever A and B are any measurable sets and μ is the associated measure. Other definitions are possible, including weak mixing and topological mixing. The mathematical definition of mixing is meant to capture the notion of physical mixing. A canonical example is the Cuba libre: suppose one is adding rum (the set A) to a glass of cola. After stirring the glass, the bottom half of the glass (the set B) will contain rum, and it will be in equal proportion as it is elsewhere in the glass. The mixing is uniform: no matter which region B one looks at, some of A will be in that region. A far more detailed, but still informal, description of mixing can be found in the article on mixing (mathematics). Every mixing transformation is ergodic, but there are ergodic transformations which are not mixing. Physical mixing The mixing of gases or liquids is a complex physical process, governed by a convective diffusion equation that may involve non-Fickian diffusion as in spinodal decomposition. The convective portion of the governing equation contains fluid motion terms that are governed by the Navier–Stokes equations. When fluid properties such as viscosity depend on composition, the governing equations may be coupled. There may also be temperature effects. It is not clear that fluid mixing processes are mixing in the mathematical sense. Small rigid objects (such as rocks) are sometimes mixed in a rotating drum or tumbler.
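The strong-mixing condition from Document 2 can be checked numerically on a classic textbook example, the doubling map T(x) = 2x mod 1 on [0, 1) with Lebesgue measure. This is an illustrative sketch, not from the source passage; the function names and the choice of sets A = B = [0, 0.5) are my own.

```python
# Monte Carlo check of strong mixing for the doubling map
# T(x) = 2x mod 1 on [0, 1) with Lebesgue measure.
# For A = B = [0, 0.5), strong mixing predicts
#   mu(T^-n A intersect B) -> mu(A) * mu(B) = 0.25 as n grows.
import random

def doubling_iterate(x: float, n: int) -> float:
    """Apply T(x) = 2x mod 1 a total of n times."""
    for _ in range(n):
        x = (2 * x) % 1.0
    return x

def estimate_mixing(n: int, samples: int = 200_000, seed: int = 0) -> float:
    """Estimate mu({x in B : T^n(x) in A}) = mu(T^-n A intersect B)
    for A = B = [0, 0.5), by uniform sampling on [0, 1)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        x = rng.random()
        if x < 0.5 and doubling_iterate(x, n) < 0.5:
            hits += 1
    return hits / samples

# For this map and these dyadic sets, the estimate sits near
# mu(A) * mu(B) = 0.25 up to Monte Carlo noise.
```

The estimate hovering around 0.25 rather than around μ(B) = 0.5 shows the sets decorrelating, which is exactly what the limit in the definition expresses.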
The 1969 Selective Service draft lottery was carried out by mixing plastic capsules which contained a slip of paper (marked with a day of the year). Document 3::: In chemistry, a mixture is a material made up of two or more different chemical substances which are not chemically bonded. A mixture is the physical combination of two or more substances in which the identities are retained and are mixed in the form of solutions, suspensions and colloids. Mixtures are one product of mechanically blending or mixing chemical substances such as elements and compounds, without chemical bonding or other chemical change, so that each ingredient substance retains its own chemical properties and makeup. Despite the fact that there are no chemical changes to its constituents, the physical properties of a mixture, such as its melting point, may differ from those of the components. Some mixtures can be separated into their components by using physical (mechanical or thermal) means. Azeotropes are one kind of mixture that usually poses considerable difficulties regarding the separation processes required to obtain their constituents (physical or chemical processes, or even a blend of them). Characteristics of mixtures All mixtures can be characterized as being separable by mechanical means (e.g. purification, distillation, electrolysis, chromatography, heat, filtration, gravitational sorting, centrifugation). Mixtures differ from chemical compounds in the following ways: the substances in a mixture can be separated using physical methods such as filtration, freezing, and distillation. there is little or no energy change when a mixture forms (see Enthalpy of mixing). The substances in a mixture keep their separate properties. In the example of sand and water, neither one of the two substances changed in any way when they are mixed. Although the sand is in the water it still keeps the same properties that it had when it was outside the water.
mixtures have variable compositions, while compounds have a fixed, definite formula. when mixed, individual substances keep their properties in a mixture, while if they form a compound their properties Document 4::: In chemistry, solubility is the ability of a substance, the solute, to form a solution with another substance, the solvent. Insolubility is the opposite property, the inability of the solute to form such a solution. The extent of the solubility of a substance in a specific solvent is generally measured as the concentration of the solute in a saturated solution, one in which no more solute can be dissolved. At this point, the two substances are said to be at the solubility equilibrium. For some solutes and solvents, there may be no such limit, in which case the two substances are said to be "miscible in all proportions" (or just "miscible"). The solute can be a solid, a liquid, or a gas, while the solvent is usually solid or liquid. Both may be pure substances, or may themselves be solutions. Gases are always miscible in all proportions, except in very extreme situations, and a solid or liquid can be "dissolved" in a gas only by passing into the gaseous state first. The solubility mainly depends on the composition of solute and solvent (including their pH and the presence of other dissolved substances) as well as on temperature and pressure. The dependency can often be explained in terms of interactions between the particles (atoms, molecules, or ions) of the two substances, and of thermodynamic concepts such as enthalpy and entropy. Under certain conditions, the concentration of the solute can exceed its usual solubility limit. The result is a supersaturated solution, which is metastable and will rapidly exclude the excess solute if a suitable nucleation site appears. 
The concept of solubility does not apply when there is an irreversible chemical reaction between the two substances, such as the reaction of calcium hydroxide with hydrochloric acid; even though one might say, informally, that one "dissolved" the other. The solubility is also not the same as the rate of solution, which is how fast a solid solute dissolves in a liquid solvent. This property de The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do these two changes have in common? crushing a mineral into powder mixing lettuce and salad dressing A. Both are only physical changes. B. Both are caused by heating. C. Both are chemical changes. D. Both are caused by cooling. Answer:
sciq-4018
multiple_choice
Overproduction of offspring, combined with limited resources, results in what?
[ "concentration", "competition", "continuation", "contention" ]
B
Relevant Documents: Document 0::: The Bateson Lecture is an annual genetics lecture held as a part of the John Innes Symposium since 1972, in honour of the first Director of the John Innes Centre, William Bateson. Past Lecturers Source: John Innes Centre 1951 Sir Ronald Fisher - "Statistical methods in Genetics" 1953 Julian Huxley - "Polymorphic variation: a problem in genetical natural history" 1955 Sidney C. Harland - "Plant breeding: present position and future perspective" 1957 J.B.S. Haldane - "The theory of evolution before and after Bateson" 1959 Kenneth Mather - "Genetics Pure and Applied" 1972 William Hayes - "Molecular genetics in retrospect" 1974 Guido Pontecorvo - "Alternatives to sex: genetics by means of somatic cells" 1976 Max F. Perutz - "Mechanism of respiratory haemoglobin" 1979 J. Heslop-Harrison - "The forgotten generation: some thoughts on the genetics and physiology of Angiosperm Gametophytes" 1982 Sydney Brenner - "Molecular genetics in prospect" 1984 W.W. Franke - "The cytoskeleton - the insoluble architectural framework of the cell" 1986 Arthur Kornberg - "Enzyme systems initiating replication at the origin of the E. coli chromosome" 1988 Gottfried Schatz - "Interaction between mitochondria and the nucleus" 1990 Christiane Nusslein-Volhard - "Axis determination in the Drosophila embryo" 1992 Frank Stahl - "Genetic recombination: thinking about it in phage and fungi" 1994 Ira Herskowitz - "Violins and orchestras: what a unicellular organism can do" 1996 R.J.P.
Williams - "An Introduction to Protein Machines" 1999 Eugene Nester - "DNA and Protein Transfer from Bacteria to Eukaryotes - the Agrobacterium story" 2001 David Botstein - "Extracting biological information from DNA Microarray Data" 2002 Elliot Meyerowitz 2003 Thomas Steitz - "The Macromolecular machines of gene expression" 2008 Sean Carroll - "Endless flies most beautiful: the role of cis-regulatory sequences in the evolution of animal form" 2009 Sir Paul Nurse - "Genetic transmission through Document 1::: Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women. The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development. Current status of girls and women in STEM education Overall trends in STEM education Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. 
Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle. Learning achievement in STEM education Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and Document 2::: The "Vicar of Bray" hypothesis (or Fisher-Muller Model) attempts to explain why sexual reproduction might have advantages over asexual reproduction. Reproduction is the process by which organisms give rise to offspring. Asexual reproduction involves a single parent and results in offspring that are genetically identical to each other and to the parent. In contrast to asexual reproduction, sexual reproduction involves two parents. Both the parents produce gametes through meiosis, a special type of cell division that reduces the chromosome number by half. During an early stage of meiosis, before the chromosomes are separated in the two daughter cells, the chromosomes undergo genetic recombination. This allows them to exchange some of their genetic information. Therefore, the gametes from a single organism are all genetically different from each other. The process in which the two gametes from the two parents unite is called fertilization. Half of the genetic information from both parents is combined. This results in offspring that are genetically different from each other and from the parents. In short, sexual reproduction allows a continuous rearrangement of genes. Therefore, the offspring of a population of sexually reproducing individuals will show a more varied selection of phenotypes. 
Due to faster attainment of favorable genetic combinations, sexually reproducing populations evolve more rapidly in response to environmental changes. Under the Vicar of Bray hypothesis, sex benefits a population as a whole, but not individuals within it, making it a case of group selection. Disadvantage of sexual reproduction Sexual reproduction often takes a lot of effort. Finding a mate can sometimes be an expensive, risky and time consuming process. Courtship, copulation and taking care of the new born offspring may also take up a lot of time and energy. From this point of view, asexual reproduction may seem a lot easier and more efficient. But another important thing to co Document 3::: In genetics, underdominance, also known as homozygote advantage, heterozygote disadvantage, or negative overdominance," is the opposite of overdominance. It is the selection against the heterozygote, causing disruptive selection and divergent genotypes. Underdominance exists in situations where the heterozygotic genotype is inferior in fitness to either the dominant or recessive homozygotic genotype. Compared to examples of overdominance in actual populations, underdominance is considered more unstable and may lead to the fixation of either allele. An example of stable underdominance may occur in individuals who are heterozygotic for polymorphisms that would make them better suited for one of two niches. Consider a situation in which a population is completely homozygotic for an "A" allele, allowing exploitation of a particular resource. Eventually, a polymorphic "a" allele may be introduced into the population, resulting in an individual who is capable of exploiting a different resource. This would result in an "aa" homozygotic invasion of the population due to nonexistent competition of the unexploited resource. The frequency of "aa" individuals would increase until the abundance of the "a" resource begins to decline. 
Eventually, the "AA" and "aa" genotypes would reach equilibrium with each other, with "Aa" heterozygotic individuals potentially experiencing a reduced fitness compared to those individuals who are homozygotic for utilization of either resource. This example of underdominance is stable because any shift in equilibrium would result in selection for the rare allele due to increased resource abundance. This compensatory selection would ultimately return the dimorphic system to underdominant equilibrium. Incidence in butterfly populations An example of stable underdominance can be found in the African butterfly species Pseudacraea eurytus, which utilizes Batesian mimicry to escape predation. This species possesses two alleles which each confer an appe Document 4::: Genetic viability is the ability of the genes present to allow a cell, organism or population to survive and reproduce. The term is generally used to mean the chance or ability of a population to avoid the problems of inbreeding. Less commonly genetic viability can also be used in respect to a single cell or on an individual level. Inbreeding depletes heterozygosity of the genome, meaning there is a greater chance of identical alleles at a locus. When these alleles are non-beneficial, homozygosity could cause problems for genetic viability. These problems could include effects on the individual fitness (higher mortality, slower growth, more frequent developmental defects, reduced mating ability, lower fecundity, greater susceptibility to disease, lowered ability to withstand stress, reduced intra- and inter-specific competitive ability) or effects on the entire population fitness (depressed population growth rate, reduced regrowth ability, reduced ability to adapt to environmental change). See Inbreeding depression. When a population of plants or animals loses their genetic viability, their chance of going extinct increases. 
Necessary conditions To be genetically viable, a population of plants or animals requires a certain amount of genetic diversity and a certain population size. For long-term genetic viability, the population size should consist of enough breeding pairs to maintain genetic diversity. The precise effective population size can be calculated using a minimum viable population analysis.  Higher genetic diversity and a larger population size will decrease the negative effects of genetic drift and inbreeding in a population. When adequate measures have been met, the genetic viability of a population will increase. Causes for decrease The main cause of a decrease in genetic viability is loss of habitat. This loss can occur because of, for example urbanization or deforestation causing habitat fragmentation. Natural events like earthquakes, floods The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Overproduction of offspring, combined with limited resources, results in what? A. concentration B. competition C. continuation D. contention Answer:
sciq-10462
multiple_choice
What are unique about prokaryotic cells' organelles?
[ "no cell walls", "no epidermis", "not membrane-bound", "only membrane-bound" ]
C
Relevant Documents: Document 0::: Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure. General characteristics There are two types of cells: prokaryotes and eukaryotes. Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles. Prokaryotes Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement). Eukaryotes Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur.
The cytoskeleton is made of fibers that support the str Document 1::: Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, double stranded macromolecule that carries the hereditary information of the cell and found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence. Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism. Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry. See also Cell (biology) Cell biology Biomolecule Organelle Tissue (biology) External links https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm Document 2::: The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'. Cells can acquire specified function and carry out various tasks within the cell such as replication, DNA repair, protein synthesis, and motility. 
Cells are capable of specialization and mobility within the cell. Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms. The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology. Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago. Discovery With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i Document 3::: Cellular compartments in cell biology comprise all of the closed parts within the cytosol of a eukaryotic cell, usually surrounded by a single or double lipid layer membrane. These compartments are often, but not always, defined as membrane-bound organelles. The formation of cellular compartments is called compartmentalization. 
Both organelles, the mitochondria and chloroplasts (in photosynthetic organisms), are compartments that are believed to be of endosymbiotic origin. Other compartments such as peroxisomes, lysosomes, the endoplasmic reticulum, the cell nucleus or the Golgi apparatus are not of endosymbiotic origin. Smaller elements like vesicles, and sometimes even microtubules can also be counted as compartments. It was thought that compartmentalization is not found in prokaryotic cells., but the discovery of carboxysomes and many other metabolosomes revealed that prokaryotic cells are capable of making compartmentalized structures, albeit these are in most cases not surrounded by a lipid bilayer, but of pure proteinaceous built. Types In general there are 4 main cellular compartments, they are: The nuclear compartment comprising the nucleus The intercisternal space which comprises the space between the membranes of the endoplasmic reticulum (which is continuous with the nuclear envelope) Organelles (the mitochondrion in all eukaryotes and the plastid in phototrophic eukaryotes) The cytosol Function Compartments have three main roles. One is to establish physical boundaries for biological processes that enables the cell to carry out different metabolic activities at the same time. This may include keeping certain biomolecules within a region, or keeping other molecules outside. Within the membrane-bound compartments, different intracellular pH, different enzyme systems, and other differences are isolated from other organelles and cytosol. With mitochondria, the cytosol has an oxidizing environment which converts NADH to NAD+. With these cases, the Document 4::: Cytochemistry is the branch of cell biology dealing with the detection of cell constituents by means of biochemical analysis and visualization techniques. This is the study of the localization of cellular components through the use of staining methods. 
The term is also used to describe a process of identification of the biochemical content of cells. Cytochemistry is a science of localizing chemical components of cells and cell organelles on thin histological sections by using several techniques like enzyme localization, micro-incineration, micro-spectrophotometry, radioautography, cryo-electron microscopy, X-ray microanalysis by energy-dispersive X-ray spectroscopy, immunohistochemistry and cytochemistry, etc. Freeze Fracture Enzyme Cytochemistry Freeze fracture enzyme cytochemistry was initially mentioned in the study of Pinto de Silva in 1987. It is a technique that allows the introduction of cytochemistry into a freeze fracture cell membrane. Immunocytochemistry is used in this technique to label and visualize the cell membrane's molecules. This technique could be useful in analyzing the ultrastructure of cell membranes. By combining immunocytochemistry with the freeze fracture enzyme technique, researchers can identify and better understand the structure and distribution of a cell membrane. Origin Jean Brachet's research in Brussels, which demonstrated the localization and relative abundance of RNA and DNA in the cells of both animals and plants, opened the door to the research of cytochemistry. The work by Moller and Holter in 1976 about endocytosis, which discussed the relationship between a cell's structure and function, established the need for cytochemical research. Aims Cytochemical research aims to study individual cells that may contain several cell types within a tissue. It takes a nondestructive approach to study the localization of the cell. By leaving the cell components intact, researchers are able to study the intact cell activ The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are unique about prokaryotic cells' organelles? A. no cell walls B. no epidermis C. not membrane-bound D.
only membrane-bound Answer:
sciq-7628
multiple_choice
What type of orbit do the planets make in the solar system?
[ "elliptical", "conical", "vertical", "figure eight" ]
A
Relevant Documents: Document 0::: This is a list of most likely gravitationally rounded objects of the Solar System, which are objects that have a rounded, ellipsoidal shape due to their own gravity (but are not necessarily in hydrostatic equilibrium). Apart from the Sun itself, these objects qualify as planets according to common geophysical definitions of that term. The sizes of these objects range over three orders of magnitude in radius, from planetary-mass objects like dwarf planets and some moons to the planets and the Sun. This list does not include small Solar System bodies, but it does include a sample of possible planetary-mass objects whose shapes have yet to be determined. The Sun's orbital characteristics are listed in relation to the Galactic Center, while all other objects are listed in order of their distance from the Sun. Star The Sun is a G-type main-sequence star. It contains almost 99.9% of all the mass in the Solar System. Planets In 2006, the International Astronomical Union (IAU) defined a planet as a body in orbit around the Sun that was large enough to have achieved hydrostatic equilibrium and to have "cleared the neighbourhood around its orbit". The practical meaning of "cleared the neighborhood" is that a planet is comparatively massive enough for its gravitation to control the orbits of all objects in its vicinity. In practice, the term "hydrostatic equilibrium" is interpreted loosely. Mercury is round but not actually in hydrostatic equilibrium, but it is universally regarded as a planet nonetheless. According to the IAU's explicit count, there are eight planets in the Solar System; four terrestrial planets (Mercury, Venus, Earth, and Mars) and four giant planets, which can be divided further into two gas giants (Jupiter and Saturn) and two ice giants (Uranus and Neptune). When excluding the Sun, the four giant planets account for more than 99% of the mass of the Solar System.
Dwarf planets Dwarf planets are bodies orbiting the Sun that are massive and warm eno Document 1::: This article is a list of notable unsolved problems in astronomy. Some of these problems are theoretical, meaning that existing theories may be incapable of explaining certain observed phenomena or experimental results. Others are experimental, meaning that experiments necessary to test proposed theory or investigate a phenomenon in greater detail have not yet been performed. Some pertain to unique events or occurrences that have not repeated themselves and whose causes remain unclear. Planetary astronomy Our solar system Orbiting bodies and rotation: Are there any non-dwarf planets beyond Neptune? Why do extreme trans-Neptunian objects have elongated orbits? Rotation rate of Saturn: Why does the magnetosphere of Saturn rotate at a rate close to that at which the planet's clouds rotate? What is the rotation rate of Saturn's deep interior? Satellite geomorphology: What is the origin of the chain of high mountains that closely follows the equator of Saturn's moon, Iapetus? Are the mountains the remnant of hot and fast-rotating young Iapetus? Are the mountains the result of material (either from the rings of Saturn or its own ring) that over time collected upon the surface? Extra-solar How common are Solar System-like planetary systems? Some observed planetary systems contain Super-Earths and Hot Jupiters that orbit very close to their stars. Systems with Jupiter-like planets in Jupiter-like orbits appear to be rare. There are several possibilities why Jupiter-like orbits are rare, including that data is lacking or the grand tack hypothesis. Stellar astronomy and astrophysics Solar cycle: How does the Sun generate its periodically reversing large-scale magnetic field? How do other Sol-like stars generate their magnetic fields, and what are the similarities and differences between stellar activity cycles and that of the Sun? 
What caused the Maunder Minimum and other grand minima, and how does the solar cycle recover from a minimum state? Coronal heat Document 2::: The interstellar space opera epic Star Wars uses science and technology in its settings and storylines. The series has showcased many technological concepts, both in the movies and in the expanded universe of novels, comics and other forms of media. The Star Wars movies' primary objective is to build upon drama, philosophy, political science and less on scientific knowledge. Many of the on-screen technologies created or borrowed for the Star Wars universe were used mainly as plot devices. The iconic status that Star Wars has gained in popular culture and science fiction allows it to be used as an accessible introduction to real scientific concepts. Many of the features or technologies used in the Star Wars universe are not yet considered possible. Despite this, their concepts are still probable. Tatooine's twin stars In the past, scientists thought that planets would be unlikely to form around binary stars. However, recent simulations indicate that planets are just as likely to form around binary star systems as single-star systems. Of the 3457 exoplanets currently known, 146 actually orbit binary star systems (and 39 orbit multiple star systems with three or more stars). Specifically, they orbit what are known as "wide" binary star systems where the two stars are fairly far apart (several AU). Tatooine appears to be of the other type — a "close" binary, where the stars are very close, and the planets orbit their common center of mass. The first observationally confirmed binary — Kepler-16b — is a close binary. Exoplanet researchers' simulations indicate that planets form frequently around close binaries, though gravitational effects from the dual star system tend to make them very difficult to find with current Doppler and transit methods of planetary searches. 
In studies looking for dusty disks—where planet formation is likely—around binary stars, such disks were found in wide or narrow binaries, or those whose stars are more than 50 or less than 3 AU apart, r Document 3::: Orbits Astrodynamics In orbital mechanics, a transfer orbit is an intermediate elliptical orbit that is used to move a spacecraft in an orbital maneuver from one circular, or largely circular, orbit to another. There are several types of transfer orbits, which vary in their energy efficiency and speed of transfer. These include: Hohmann transfer orbit, an elliptical orbit used to transfer a spacecraft between two circular orbits of different altitudes in the same plane Bi-elliptic transfer, a slower method of transfer, but one that may be more efficient than a Hohmann transfer orbit Geostationary transfer orbit or geosynchronous transfer orbit is usually also a Hohmann transfer orbit Lunar transfer orbit is an orbit that touches Low Earth orbit and a lunar orbit. Document 4::: This page describes exoplanet orbital and physical parameters. Orbital parameters Most known extrasolar planet candidates have been discovered using indirect methods and therefore only some of their physical and orbital parameters can be determined. For example, out of the six independent parameters that define an orbit, the radial-velocity method can determine four: semi-major axis, eccentricity, longitude of periastron, and time of periastron. Two parameters remain unknown: inclination and longitude of the ascending node. Distance from star and orbital period There are exoplanets that are much closer to their parent star than any planet in the Solar System is to the Sun, and there are also exoplanets that are much further from their star. Mercury, the closest planet to the Sun at 0.4 astronomical units (AU), takes 88 days for an orbit, but the smallest known orbits of exoplanets have orbital periods of only a few hours, see Ultra-short period planet. 
The Kepler-11 system has five of its planets in smaller orbits than Mercury's. Neptune is 30 AU from the Sun and takes 165 years to orbit it, but there are exoplanets that are thousands of AU from their star and take tens of thousands of years to orbit, e.g. GU Piscium b. The radial-velocity and transit methods are most sensitive to planets with small orbits. The earliest discoveries such as 51 Peg b were gas giants with orbits of a few days. These "hot Jupiters" likely formed further out and migrated inwards. The direct imaging method is most sensitive to planets with large orbits, and has discovered some planets that have planet–star separations of hundreds of AU. However, protoplanetary disks are usually only around 100 AU in radius, and core accretion models predict giant planet formation to be within 10 AU, where the planets can coalesce quickly enough before the disk evaporates. Very-long-period giant planets may have been rogue planets that were captured, or formed close-in and gravitationally scattered The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of orbit do the planets make in the solar system? A. elliptical B. conical C. vertical D. figure eight Answer:
sciq-8313
multiple_choice
Where does the process of digestion start?
[ "small intestine", "stomach", "esophagus", "mouth" ]
D
Relavent Documents: Document 0::: Digestion is the breakdown of large insoluble food compounds into small water-soluble components so that they can be absorbed into the blood plasma. In certain organisms, these smaller substances are absorbed through the small intestine into the blood stream. Digestion is a form of catabolism that is often divided into two processes based on how food is broken down: mechanical and chemical digestion. The term mechanical digestion refers to the physical breakdown of large pieces of food into smaller pieces which can subsequently be accessed by digestive enzymes. Mechanical digestion takes place in the mouth through mastication and in the small intestine through segmentation contractions. In chemical digestion, enzymes break down food into the small compounds that the body can use. In the human digestive system, food enters the mouth and mechanical digestion of the food starts by the action of mastication (chewing), a form of mechanical digestion, and the wetting contact of saliva. Saliva, a liquid secreted by the salivary glands, contains salivary amylase, an enzyme which starts the digestion of starch in the food; the saliva also contains mucus, which lubricates the food, and hydrogen carbonate, which provides the ideal conditions of pH (alkaline) for amylase to work, and electrolytes (Na+, K+, Cl−, HCO−3). About 30% of starch is hydrolyzed into disaccharide in the oral cavity (mouth). After undergoing mastication and starch digestion, the food will be in the form of a small, round slurry mass called a bolus. It will then travel down the esophagus and into the stomach by the action of peristalsis. Gastric juice in the stomach starts protein digestion. Gastric juice mainly contains hydrochloric acid and pepsin. In infants and toddlers, gastric juice also contains rennin to digest milk proteins. As the first two chemicals may damage the stomach wall, mucus and bicarbonates are secreted by the stomach. 
They provide a slimy layer that acts as a shield against the damag Document 1::: The gastrointestinal wall of the gastrointestinal tract is made up of four layers of specialised tissue. From the inner cavity of the gut (the lumen) outwards, these are: Mucosa Submucosa Muscular layer Serosa or adventitia The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the lumen of the tract and comes into direct contact with digested food (chyme). The mucosa itself is made up of three layers: the epithelium, where most digestive, absorptive and secretory processes occur; the lamina propria, a layer of connective tissue, and the muscularis mucosae, a thin layer of smooth muscle. The submucosa contains nerves including the submucous plexus (also called Meissner's plexus), blood vessels and elastic fibres with collagen, that stretches with increased capacity but maintains the shape of the intestine. The muscular layer surrounds the submucosa. It comprises layers of smooth muscle in longitudinal and circular orientation that also helps with continued bowel movements (peristalsis) and the movement of digested material out of and along the gut. In between the two layers of muscle lies the myenteric plexus (also called Auerbach's plexus). The serosa/adventitia are the final layers. These are made up of loose connective tissue and coated in mucus so as to prevent any friction damage from the intestine rubbing against other tissue. The serosa is present if the tissue is within the peritoneum, and the adventitia if the tissue is retroperitoneal. Structure When viewed under the microscope, the gastrointestinal wall has a consistent general form, but with certain parts differing along its course. Mucosa The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the cavity (lumen) of the tract and comes into direct contact with digested food (chyme). The mucosa is made up of three layers: The epithelium is the innermost layer. 
It is where most digestive, absorptive and secretory processes occur. The lamina propr Document 2::: The large intestine, also known as the large bowel, is the last part of the gastrointestinal tract and of the digestive system in tetrapods. Water is absorbed here and the remaining waste material is stored in the rectum as feces before being removed by defecation. The colon is the longest portion of the large intestine, and the terms are often used interchangeably but most sources define the large intestine as the combination of the cecum, colon, rectum, and anal canal. Some other sources exclude the anal canal. In humans, the large intestine begins in the right iliac region of the pelvis, just at or below the waist, where it is joined to the end of the small intestine at the cecum, via the ileocecal valve. It then continues as the colon ascending the abdomen, across the width of the abdominal cavity as the transverse colon, and then descending to the rectum and its endpoint at the anal canal. Overall, in humans, the large intestine is about long, which is about one-fifth of the whole length of the human gastrointestinal tract. Structure The colon of the large intestine is the last part of the digestive system. It has a segmented appearance due to a series of saccules called haustra. It extracts water and salt from solid wastes before they are eliminated from the body and is the site in which the fermentation of unabsorbed material by the gut microbiota occurs. Unlike the small intestine, the colon does not play a major role in absorption of foods and nutrients. About 1.5 litres or 45 ounces of water arrives in the colon each day. The colon is the longest part of the large intestine and its average length in the adult human is 65 inches or 166 cm (range of 80 to 313 cm) for males, and 61 inches or 155 cm (range of 80 to 214 cm) for females. 
Sections In mammals, the large intestine consists of the cecum (including the appendix), colon (the longest part), rectum, and anal canal. The four sections of the colon are: the ascending colon, transverse colon, desce Document 3::: Hindgut fermentation is a digestive process seen in monogastric herbivores, animals with a simple, single-chambered stomach. Cellulose is digested with the aid of symbiotic bacteria. The microbial fermentation occurs in the digestive organs that follow the small intestine: the large intestine and cecum. Examples of hindgut fermenters include proboscideans and large odd-toed ungulates such as horses and rhinos, as well as small animals such as rodents, rabbits and koalas. In contrast, foregut fermentation is the form of cellulose digestion seen in ruminants such as cattle which have a four-chambered stomach, as well as in sloths, macropodids, some monkeys, and one bird, the hoatzin. Cecum Hindgut fermenters generally have a cecum and large intestine that are much larger and more complex than those of a foregut or midgut fermenter. Research on small cecum fermenters such as flying squirrels, rabbits and lemurs has revealed these mammals to have a GI tract about 10-13 times the length of their body. This is due to the high intake of fiber and other hard to digest compounds that are characteristic to the diet of monogastric herbivores. Unlike in foregut fermenters, the cecum is located after the stomach and small intestine in monogastric animals, which limits the amount of further digestion or absorption that can occur after the food is fermented. Large intestine In smaller hindgut fermenters of the order Lagomorpha (rabbits, hares, and pikas), cecotropes formed in the cecum are passed through the large intestine and subsequently reingested to allow another opportunity to absorb nutrients. 
Cecotropes are surrounded by a layer of mucus which protects them from stomach acid but which does not inhibit nutrient absorption in the small intestine. Coprophagy is also practiced by some rodents, such as the capybara, guinea pig and related species, and by the marsupial common ringtail possum. This process is also beneficial in allowing for restoration of the microflora pop Document 4::: The Joan Mott Prize Lecture is a prize lecture awarded annually by The Physiological Society in honour of Joan Mott. Laureates Laureates of the award have included: - Intestinal absorption of sugars and peptides: from textbook to surprises See also Physiological Society Annual Review Prize Lecture The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Where does the process of digestion start? A. small intestine B. stomach C. esophagus D. mouth Answer:
sciq-953
multiple_choice
Intensity is defined to be the power per unit area carried by a what?
[ "wave", "shift", "filament", "wire" ]
A
Relavent Documents: Document 0::: In physics, the intensity or flux of radiant energy is the power transferred per unit area, where the area is measured on the plane perpendicular to the direction of propagation of the energy. In the SI system, it has units watts per square metre (W/m2), or kg⋅s−3 in base units. Intensity is used most frequently with waves such as acoustic waves (sound) or electromagnetic waves such as light or radio waves, in which case the average power transfer over one period of the wave is used. Intensity can be applied to other circumstances where energy is transferred. For example, one could calculate the intensity of the kinetic energy carried by drops of water from a garden sprinkler. The word "intensity" as used here is not synonymous with "strength", "amplitude", "magnitude", or "level", as it sometimes is in colloquial speech. Intensity can be found by taking the energy density (energy per unit volume) at a point in space and multiplying it by the velocity at which the energy is moving. The resulting vector has the units of power divided by area (i.e., surface power density). The intensity of a wave is proportional to the square of its amplitude. For example, the intensity of an electromagnetic wave is proportional to the square of the wave's electric field amplitude. Mathematical description If a point source is radiating energy in all directions (producing a spherical wave), and no energy is absorbed or scattered by the medium, then the intensity decreases in proportion to the distance from the object squared. This is an example of the inverse-square law. Applying the law of conservation of energy, if the net power emanating is constant, where is the net power radiated; is the intensity vector as a function of position; the magnitude is the intensity as a function of position; is a differential element of a closed surface that contains the source. 
If one integrates a uniform intensity, , over a surface that is perpendicular to the intensity vector, for insta Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. 
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: In signal processing, the energy of a continuous-time signal x(t) is defined as the area under the squared magnitude of the considered signal i.e., mathematically Unit of will be (unit of signal)2. And the energy of a discrete-time signal x(n) is defined mathematically as Relationship to energy in physics Energy in this context is not, strictly speaking, the same as the conventional notion of energy in physics and the other sciences. The two concepts are, however, closely related, and it is possible to convert from one to the other: where Z represents the magnitude, in appropriate units of measure, of the load driven by the signal. For example, if x(t) represents the potential (in volts) of an electrical signal propagating across a transmission line, then Z would represent the characteristic impedance (in ohms) of the transmission line. The units of measure for the signal energy would appear as volt2·seconds, which is not dimensionally correct for energy in the sense of the physical sciences. After dividing by Z, however, the dimensions of E would become volt2·seconds per ohm, which is equivalent to joules, the SI unit for energy as defined in the physical sciences. Spectral energy density Similarly, the spectral energy density of signal x(t) is where X(f) is the Fourier transform of x(t). For example, if x(t) represents the magnitude of the electric field component (in volts per meter) of an optical signal propagating through free space, then the dimensions of X(f) would become volt·seconds per meter and would represent the signal's spectral energy density (in volts2·second2 per meter2) as a function of frequency f (in hertz). Again, these units of measure are not dimensionally correct in the true sense of energy density as defined in physics. 
Dividing by Zo, the characteristic impedance of free space (in ohms), the dimensions become joule-seconds per meter2 or, equivalently, joules per meter2 per hertz, which is dimensionally correct in SI Document 3::: In mathematics, physics, and engineering, spatial frequency is a characteristic of any structure that is periodic across position in space. The spatial frequency is a measure of how often sinusoidal components (as determined by the Fourier transform) of the structure repeat per unit of distance. The SI unit of spatial frequency is the reciprocal metre (m-1), although cycles per meter (c/m) is also common. In image-processing applications, spatial frequency is often expressed in units of cycles per millimeter (c/mm) or also line pairs per millimeter (LP/mm). In wave propagation, the spatial frequency is also known as wavenumber. Ordinary wavenumber is defined as the reciprocal of wavelength and is commonly denoted by or sometimes : Angular wavenumber , expressed in radian per metre (rad/m), is related to ordinary wavenumber and wavelength by Visual perception In the study of visual perception, sinusoidal gratings are frequently used to probe the capabilities of the visual system, such as contrast sensitivity. In these stimuli, spatial frequency is expressed as the number of cycles per degree of visual angle. Sine-wave gratings also differ from one another in amplitude (the magnitude of difference in intensity between light and dark stripes), orientation, and phase. Spatial-frequency theory The spatial-frequency theory refers to the theory that the visual cortex operates on a code of spatial frequency, not on the code of straight edges and lines hypothesised by Hubel and Wiesel on the basis of early experiments on V1 neurons in the cat. 
In support of this theory is the experimental observation that the visual cortex neurons respond even more robustly to sine-wave gratings that are placed at specific angles in their receptive fields than they do to edges or bars. Most neurons in the primary visual cortex respond best when a sine-wave grating of a particular frequency is presented at a particular angle in a particular location in the visual field. (However, a Document 4::: The transmission curve or transmission characteristic is the mathematical function or graph that describes the transmission fraction of an optical or electronic filter as a function of frequency or wavelength. It is an instance of a transfer function but, unlike the case of, for example, an amplifier, output never exceeds input (maximum transmission is 100%). The term is often used in commerce, science, and technology to characterise filters. The term has also long been used in fields such as geophysics and astronomy to characterise the properties of regions through which radiation passes, such as the ionosphere. See also Electronic filter — examples of transmission characteristics of electronic filters The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Intensity is defined to be the power per unit area carried by a what? A. wave B. shift C. filament D. wire Answer:
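The inverse-square behaviour described in Document 0 (a point source radiating power P isotropically produces intensity I = P / (4πr²) on a sphere of radius r) can be sketched numerically. The function name and the 100 W source are assumptions for illustration only:

```python
import math

def intensity(power_watts: float, r_m: float) -> float:
    """Intensity of an isotropic point source: I = P / (4 * pi * r^2), in W/m^2."""
    return power_watts / (4 * math.pi * r_m ** 2)

# A hypothetical 100 W isotropic source:
i_near = intensity(100.0, 1.0)
i_far = intensity(100.0, 2.0)
print(i_near / i_far)  # doubling the distance quarters the intensity -> 4.0
```

This is the conservation-of-energy argument in the passage made concrete: the same net power crosses every enclosing sphere, so intensity must fall as the surface area 4πr² grows.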
sciq-4152
multiple_choice
What is the only mechanism that consistently causes adaptive evolution?
[ "neutral selection", "genetic drift", "natural selection", "artificial selection" ]
C
Relavent Documents: Document 0::: Adaptationism (also known as functionalism) is the Darwinian view that many physical and psychological traits of organisms are evolved adaptations. Pan-adaptationism is the strong form of this, deriving from the early 20th century modern synthesis, that all traits are adaptations, a view now shared by only a few biologists. The "adaptationist program" was heavily criticized by Stephen Jay Gould and Richard Lewontin in their 1979 paper "The Spandrels of San Marco and the Panglossian Paradigm". According to Gould and Lewontin, evolutionary biologists had a habit of proposing adaptive explanations for any trait by default without considering non-adaptive alternatives, and often by conflating products of adaptation with the process of natural selection. One formal alternative to adaptationist explanations for traits in organisms is the neutral theory of molecular evolution, which proposes that features in organisms can arise through neutral transitions and become fixed in a population by chance (genetic drift). Constructive neutral evolution (CNE) is another paradigm which proposes a means by which complex systems emerge through neutral transitions, and CNE has been used to help understand the origins of a wide variety of features from the spliceosome of eukaryotes to the interdependency and simplification widespread in microbial communities. For many, neutral evolution is seen as the null hypothesis when attempting to explain the origins of a complex trait, so that adaptive scenarios for the origins of traits undergo a more rigorous demonstration prior to their acceptance. Introduction Criteria to identify a trait as an adaptation Adaptationism is an approach to studying the evolution of form and function. It attempts to frame the existence and persistence of traits, assuming that each of them arose independently and improved the reproductive success of the organism's ancestors. 
A trait is an adaptation if it fulfils the following criteria: The trait is a variat Document 1::: Constructive neutral evolution (CNE) is a theory that seeks to explain how complex systems can evolve through neutral transitions and spread through a population by chance fixation (genetic drift). Constructive neutral evolution is a competitor for both adaptationist explanations for the emergence of complex traits and hypotheses positing that a complex trait emerged as a response to a deleterious development in an organism. Constructive neutral evolution often leads to irreversible or "irremediable" complexity and produces systems which, instead of being finely adapted for performing a task, represent an excess complexity that has been described with terms such as "runaway bureaucracy" or even a "Rube Goldberg machine". The groundworks for the concept of CNE were laid by two papers in the 1990s, although first explicitly proposed by Arlin Stoltzfus in 1999. The first proposals for the role CNE was in the evolutionary origins of complex macromolecular machines such as the spliceosome, RNA editing machinery, supernumerary ribosomal proteins, chaperones, and more. Since then and as an emerging trend of studies in molecular evolution, CNE has been applied to broader features of biology and evolutionary history including some models of eukaryogenesis, the emergence of complex interdependence in microbial communities, and de novo formation of functional elements from non-functional transcripts of junk DNA. Several approaches propose a combination of neutral and adaptive contributions in the evolutionary origins of various traits. Many evolutionary biologists posit that CNE must be the null hypothesis when explaining the emergence of complex systems to avoid assuming that a trait arose for an adaptive benefit. A trait may have arisen neutrally, even if later co-opted for another function. 
This approach stresses the need for rigorous demonstrations of adaptive explanations when describing the emergence of traits. This avoids the "adaptationist fallacy" which assumes that Document 2::: Tinbergen's four questions, named after 20th century biologist Nikolaas Tinbergen, are complementary categories of explanations for animal behaviour. These are also commonly referred to as levels of analysis. It suggests that an integrative understanding of behaviour must include ultimate (evolutionary) explanations, in particular: behavioural adaptive functions phylogenetic history; and the proximate explanations underlying physiological mechanisms ontogenetic/developmental history. Four categories of questions and explanations When asked about the purpose of sight in humans and animals, even elementary-school children can answer that animals have vision to help them find food and avoid danger (function/adaptation). Biologists have three additional explanations: sight is caused by a particular series of evolutionary steps (phylogeny), the mechanics of the eye (mechanism/causation), and even the process of an individual's development (ontogeny). This schema constitutes a basic framework of the overlapping behavioural fields of ethology, behavioural ecology, comparative psychology, sociobiology, evolutionary psychology, and anthropology. Julian Huxley identified the first three questions. Niko Tinbergen gave only the fourth question, as Huxley's questions failed to distinguish between survival value and evolutionary history; Tinbergen's fourth question helped resolve this problem. Evolutionary (ultimate) explanations First question: Function (adaptation) Darwin's theory of evolution by natural selection is the only scientific explanation for why an animal's behaviour is usually well adapted for survival and reproduction in its environment. 
However, claiming that a particular mechanism is well suited to the present environment is different from claiming that this mechanism was selected for in the past due to its history of being adaptive. The literature conceptualizes the relationship between function and evolution in two ways. On the one hand, function Document 3::: Andreas Wagner (born 26 January 1967) is an Austrian/US evolutionary biologist and professor at the University of Zürich, Switzerland. He is known for his work on the role of robustness and innovation in biological evolution. Wagner is professor and chairman at the Department of Evolutionary Biology and Environmental Studies at the University of Zürich. Biography Wagner studied biology at the University of Vienna. He received his Ph.D. at Yale University, Department of Biology in 1995. He also holds a M. Phil. from Yale. From 1995 to 1996 he was a fellow at the Institute for Advanced Study Berlin, Germany. From 1998 to 2002 he was assistant professor at the University of New Mexico, Department of Biology and from 2002 to 2012 associate professor (with tenure) at the University of New Mexico, Department of Biology. He was appointed professor at the University of Zürich, Institute of Biochemistry in 2006. In 2011, he joined the Department of Evolutionary Biology and Environmental Studies at the University of Zürich. Since 2016, he is chairman of this department. Since 1999, he is also external professor at the Santa Fe Institute, New Mexico, USA. Scientific contribution Wagner's work revolves around the robustness of biological systems, and about their ability to innovate, that is, to create novel organisms and traits that help them survive and reproduce. Robustness is the ability of a biological system to withstand perturbations, such as DNA mutations and environmental change. 
Early in his career Wagner developed a widely used mathematical model for gene regulatory circuits, (Wagner's gene network model) and used this model to demonstrate that natural selection can increase the robustness of such circuits to DNA mutations. Experimental work in Wagner's Zürich laboratory showed that proteins can evolve robustness to perturbations. One source of robustness to mutations are redundant duplicate genes. Natural selection can maintain their redundancy and the ensuing ro Document 4::: Adaptive type – in evolutionary biology – is any population or taxon which have the potential for a particular or total occupation of given free of underutilized home habitats or position in the general economy of nature. In evolutionary sense, the emergence of new adaptive type is usually a result of adaptive radiation certain groups of organisms in which they arise categories that can effectively exploit temporary, or new conditions of the environment. Such evolutive units with its distinctive – morphological and anatomical, physiological and other characteristics, i.e. genetic and adjustments (feature) have a predisposition for an occupation certain home habitats or position in the general nature economy. Simply, the adaptive type is one group organisms whose general biological properties represent a key to open the entrance to the observed adaptive zone in the observed natural ecological complex. Adaptive types are spatially and temporally specific. Since the frames of general biological properties these types of substantially genetic are defined between, in effect the emergence of new adaptive types of the corresponding change in population genetic structure and eternal contradiction between the need for optimal adapted well the conditions of living environment, while maintaining genetic variation for survival in a possible new circumstances. For example, the specific place in the economy of nature existed millions of years before the appearance of human type. 
However, only when the evolution of primates (order Primates) reached a level able to occupy that position was it opened, and then (in the living world) it spread with unprecedented acceleration. Culture, in the broadest sense, is the key adaptation of the adaptive type Homo sapiens for the occupation of its existing adaptive zone through work, also in the broadest sense of the term. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the only mechanism that consistently causes adaptive evolution? A. neutral selection B. genetic drift C. natural selection D. artificial selection Answer:
sciq-4626
multiple_choice
What is the term for the measure of the force of gravity pulling down on an object?
[ "mass", "pressure", "density", "weight" ]
D
Relavent Documents: Document 0::: In common usage, the mass of an object is often referred to as its weight, though these are in fact different concepts and quantities. Nevertheless, one object will always weigh more than another with less mass if both are subject to the same gravity (i.e. the same gravitational field strength). In scientific contexts, mass is the amount of "matter" in an object (though "matter" may be difficult to define), but weight is the force exerted on an object's matter by gravity. At the Earth's surface, an object whose mass is exactly one kilogram weighs approximately 9.81 newtons, the product of its mass and the gravitational field strength there. The object's weight is less on Mars, where gravity is weaker; more on Saturn, where gravity is stronger; and very small in space, far from significant sources of gravity, but it always has the same mass. Material objects at the surface of the Earth have weight despite such sometimes being difficult to measure. An object floating freely on water, for example, does not appear to have weight since it is buoyed by the water. But its weight can be measured if it is added to water in a container which is entirely supported by and weighed on a scale. Thus, the "weightless object" floating in water actually transfers its weight to the bottom of the container (where the pressure increases). Similarly, a balloon has mass but may appear to have no weight or even negative weight, due to buoyancy in air. However the weight of the balloon and the gas inside it has merely been transferred to a large area of the Earth's surface, making the weight difficult to measure. The weight of a flying airplane is similarly distributed to the ground, but does not disappear. If the airplane is in level flight, the same weight-force is distributed to the surface of the Earth as when the plane was on the runway, but spread over a larger area. 
A better scientific definition of mass is its description as being a measure of inertia, which is the tendency of an Document 1::: The gravity of Earth, denoted by , is the net acceleration that is imparted to objects due to the combined effect of gravitation (from mass distribution within Earth) and the centrifugal force (from the Earth's rotation). It is a vector quantity, whose direction coincides with a plumb bob and strength or magnitude is given by the norm . In SI units this acceleration is expressed in metres per second squared (in symbols, m/s2 or m·s−2) or equivalently in newtons per kilogram (N/kg or N·kg−1). Near Earth's surface, the acceleration due to gravity, accurate to 2 significant figures, is . This means that, ignoring the effects of air resistance, the speed of an object falling freely will increase by about per second every second. This quantity is sometimes referred to informally as little (in contrast, the gravitational constant is referred to as big ). The precise strength of Earth's gravity varies with location. The agreed upon value for is by definition. This quantity is denoted variously as , (though this sometimes means the normal gravity at the equator, ), , or simply (which is also used for the variable local value). The weight of an object on Earth's surface is the downwards force on that object, given by Newton's second law of motion, or (). Gravitational acceleration contributes to the total gravity acceleration, but other factors, such as the rotation of Earth, also contribute, and, therefore, affect the weight of the object. Gravity does not normally include the gravitational pull of the Moon and Sun, which are accounted for in terms of tidal effects. Variation in magnitude A non-rotating perfect sphere of uniform mass density, or whose density varies solely with distance from the centre (spherical symmetry), would produce a gravitational field of uniform magnitude at all points on its surface. 
The Earth is rotating and is also not spherically symmetric; rather, it is slightly flatter at the poles while bulging at the Equator: an oblate spheroid. Document 2::: Gravimetry is the measurement of the strength of a gravitational field. Gravimetry may be used when either the magnitude of a gravitational field or the properties of matter responsible for its creation are of interest. Units of measurement Gravity is usually measured in units of acceleration. In the SI system of units, the standard unit of acceleration is 1 metre per second squared (abbreviated as m/s2). Other units include the cgs gal (sometimes known as a galileo, in either case with symbol Gal), which equals 1 centimetre per second squared, and the g (gn), equal to 9.80665 m/s2. The value of the gn is defined approximately equal to the acceleration due to gravity at the Earth's surface (although the value of g varies by location). Gravimeters An instrument used to measure gravity is known as a gravimeter. For a small body, general relativity predicts gravitational effects indistinguishable from the effects of acceleration by the equivalence principle. Thus, gravimeters can be regarded as special-purpose accelerometers. Many weighing scales may be regarded as simple gravimeters. In one common form, a spring is used to counteract the force of gravity pulling on an object. The change in length of the spring may be calibrated to the force required to balance the gravitational pull. The resulting measurement may be made in units of force (such as the newton), but is more commonly made in units of gals or cm/s2. Researchers use more sophisticated gravimeters when precise measurements are needed. When measuring the Earth's gravitational field, measurements are made to the precision of microgals to find density variations in the rocks making up the Earth. 
Several types of gravimeters exist for making these measurements, including some that are essentially refined versions of the spring scale described above. These measurements are used to define gravity anomalies. Besides precision, stability is also an important property of a gravimeter, as it allows the monitor Document 3::: Specific force (SF) is a mass-specific quantity defined as the quotient of force per unit mass. It is a physical quantity of kind acceleration, with dimension of length per time squared and units of metre per second squared (m·s−2). It is normally applied to forces other than gravity, to emulate the relationship between gravitational acceleration and gravitational force. It can also be called mass-specific weight (weight per unit mass), as the weight of an object is equal to the magnitude of the gravity force acting on it. The g-force is an instance of specific force measured in units of the standard gravity (g) instead of m/s², i.e., in multiples of g (e.g., "3 g"). Type of acceleration The (mass-)specific force is not a coordinate acceleration, but rather a proper acceleration, which is the acceleration relative to free-fall. Forces, specific forces, and proper accelerations are the same in all reference frames, but coordinate accelerations are frame-dependent. For free bodies, the specific force is the cause of, and a measure of, the body's proper acceleration. The acceleration of an object free falling towards the earth depends on the reference frame (it disappears in the free-fall frame, also called the inertial frame), but any g-force "acceleration" will be present in all frames. This specific force is zero for freely-falling objects, since gravity acting alone does not produce g-forces or specific forces. Accelerometers on the surface of the Earth measure a constant 9.8 m/s^2 even when they are not accelerating (that is, when they do not undergo coordinate acceleration). 
This is because accelerometers measure the proper acceleration produced by the g-force exerted by the ground (gravity acting alone never produces g-force or specific force). Accelerometers measure specific force (proper acceleration), which is the acceleration relative to free-fall, not the "standard" acceleration that is relative to a coordinate system. Hydraulics In open channel hydr Document 4::: Surface force denoted fs is the force that acts across an internal or external surface element in a material body. Normal forces and shear forces between objects are types of surface force. All cohesive forces and contact forces between objects are considered as surface forces. Surface force can be decomposed into two perpendicular components: normal forces and shear forces. A normal force acts normally over an area and a shear force acts tangentially over an area. Equations for surface force Surface force due to pressure: f = p·A, where f = force, p = pressure, and A = area on which a uniform pressure acts Examples Pressure related surface force Since pressure is force per unit area, a given pressure acting over a given area will produce a surface force equal to pressure times area. See also Body force Contact force The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the term for the measure of the force of gravity pulling down on an object? A. mass B. pressure C. density D. weight Answer:
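The mass-versus-weight distinction in the passages above (weight W = m·g, with mass constant and g varying by location) can be checked numerically. A minimal sketch; the Mars and Saturn surface-gravity values are approximate textbook figures assumed here, not taken from the passages:

```python
# Weight W = m * g: the mass stays fixed, the weight scales with local gravity.
# Surface gravity values in m/s^2; Mars and Saturn figures are approximate.
GRAVITY = {
    "Earth": 9.81,
    "Mars": 3.72,    # weaker gravity -> lower weight
    "Saturn": 10.44, # stronger gravity -> higher weight
}

def weight_newtons(mass_kg: float, body: str) -> float:
    """Weight (in newtons) of a mass at the surface of the given body."""
    return mass_kg * GRAVITY[body]

mass = 1.0  # kg, the same everywhere
for body in GRAVITY:
    print(f"{mass} kg weighs {weight_newtons(mass, body):.2f} N on {body}")
```

A 1 kg mass thus weighs about 9.81 N on Earth but only about 3.7 N on Mars, while its mass is unchanged.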
scienceQA-12218
multiple_choice
What do these two changes have in common? water evaporating from a lake baking cookies
[ "Both are caused by cooling.", "Both are chemical changes.", "Both are only physical changes.", "Both are caused by heating." ]
D
Step 1: Think about each change. Water evaporating from a lake is a change of state. So, it is a physical change. The liquid changes into a gas, but a different type of matter is not formed. Baking cookies is a chemical change. The type of matter in the cookie dough changes when it is baked. The cookie dough turns into cookies! Step 2: Look at each answer choice. Both are only physical changes. Water evaporating is a physical change. But baking cookies is not. Both are chemical changes. Baking cookies is a chemical change. But water evaporating from a lake is not. Both are caused by heating. Both changes are caused by heating. Both are caused by cooling. Neither change is caused by cooling.
Relevant Documents: Document 0::: Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds. Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate. A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density. An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge. Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change. Examples Heating and cooling Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation. 
Magnetism Ferro-magnetic materials can become magnetic. The process is reve Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, or (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. 
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: At equilibrium, the relationship between water content and equilibrium relative humidity of a material can be displayed graphically by a curve, the so-called moisture sorption isotherm. For each humidity value, a sorption isotherm indicates the corresponding water content value at a given, constant temperature. If the composition or quality of the material changes, then its sorption behaviour also changes. Because of the complexity of sorption process the isotherms cannot be determined explicitly by calculation, but must be recorded experimentally for each product. The relationship between water content and water activity (aw) is complex. An increase in aw is usually accompanied by an increase in water content, but in a non-linear fashion. This relationship between water activity and moisture content at a given temperature is called the moisture sorption isotherm. These curves are determined experimentally and constitute the fingerprint of a food system. BET theory (Brunauer-Emmett-Teller) provides a calculation to describe the physical adsorption of gas molecules on a solid surface. Because of the complexity of the process, these calculations are only moderately successful; however, Stephen Brunauer was able to classify sorption isotherms into five generalized shapes as shown in Figure 2. He found that Type II and Type III isotherms require highly porous materials or desiccants, with first monolayer adsorption, followed by multilayer adsorption and finally leading to capillary condensation, explaining these materials high moisture capacity at high relative humidity. Care must be used in extracting data from isotherms, as the representation for each axis may vary in its designation. 
Brunauer provided the vertical axis as moles of gas adsorbed divided by the moles of the dry material, and on the horizontal axis he used the ratio of partial pressure of the gas just over the sample, divided by its partial pressure at saturation. More modern isotherms showing the Document 3::: Thermofluids is a branch of science and engineering encompassing four intersecting fields: Heat transfer Thermodynamics Fluid mechanics Combustion The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids". Heat transfer Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer. Sections include : Energy transfer by heat, work and mass Laws of thermodynamics Entropy Refrigeration Techniques Properties and nature of pure substances Applications Engineering : Predicting and analysing the performance of machines Thermodynamics Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems. Fluid mechanics Fluid Mechanics the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. 
Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance. Sections include: Flu Document 4::: The Stefan flow, occasionally called Stefan's flow, is a transport phenomenon concerning the movement of a chemical species by a flowing fluid (typically in the gas phase) that is induced to flow by the production or removal of the species at an interface. Any process that adds the species of interest to or removes it from the flowing fluid may cause the Stefan flow, but the most common processes include evaporation, condensation, chemical reaction, sublimation, ablation, adsorption, absorption, and desorption. It was named after the Slovenian physicist, mathematician, and poet Josef Stefan for his early work on calculating evaporation rates. The Stefan flow is distinct from diffusion as described by Fick's law, but diffusion almost always also occurs in multi-species systems that are experiencing the Stefan flow. In systems undergoing one of the species addition or removal processes mentioned previously, the addition or removal generates a mean flow in the flowing fluid as the fluid next to the interface is displaced by the production or removal of additional fluid by the processes occurring at the interface. The transport of the species by this mean flow is the Stefan flow. When concentration gradients of the species are also present, diffusion transports the species relative to the mean flow. The total transport rate of the species is then given by a summation of the Stefan flow and diffusive contributions. An example of the Stefan flow occurs when a droplet of liquid evaporates in air. In this case, the vapor/air mixture surrounding the droplet is the flowing fluid, and liquid/vapor boundary of the droplet is the interface. 
As heat is absorbed by the droplet from the environment, some of the liquid evaporates into vapor at the surface of the droplet, and flows away from the droplet as it is displaced by additional vapor evaporating from the droplet. This process causes the flowing medium to move away from the droplet at some mean speed that is dependent on The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do these two changes have in common? water evaporating from a lake baking cookies A. Both are caused by cooling. B. Both are chemical changes. C. Both are only physical changes. D. Both are caused by heating. Answer:
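The conceptual question quoted in the passage above (what happens to the temperature of an ideal gas during adiabatic expansion) has a quick numerical check: for a reversible adiabatic process, T2 = T1·(p2/p1)^(R/cp), so expansion to lower pressure lowers the temperature. A sketch assuming dry air with the meteorological exponent R/cp ≈ 0.286:

```python
# Reversible adiabatic temperature change of an ideal gas:
#   T2 = T1 * (p2 / p1) ** (R / cp)
# For dry air, R / cp ~= 0.286 (the exponent commonly used in meteorology).
R_OVER_CP = 0.286

def adiabatic_temperature(t1_kelvin: float, p1: float, p2: float) -> float:
    """Temperature after a reversible adiabatic change from pressure p1 to p2."""
    return t1_kelvin * (p2 / p1) ** R_OVER_CP

t1 = 300.0                                      # K, parcel at 1000 hPa
t2 = adiabatic_temperature(t1, 1000.0, 500.0)   # expand to 500 hPa
print(f"{t1:.1f} K -> {t2:.1f} K after adiabatic expansion")  # temperature drops
```

Expanding from 1000 hPa to 500 hPa cools the parcel by roughly 50 K, confirming choice "decreases".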
ai2_arc-866
multiple_choice
Because water can hold a large amount of heat, which effect do oceans have on nearby land areas?
[ "They prevent rapid extreme temperature changes.", "They form high-pressure areas that cause magma currents.", "They provide the energy that triggers volcanic events.", "They lower the freezing point of fresh water." ]
A
Relevant Documents: Document 0::: The borders of the oceans are the limits of Earth's oceanic waters. The definition and number of oceans can vary depending on the adopted criteria. The principal divisions (in descending order of area) of the five oceans are the Pacific Ocean, Atlantic Ocean, Indian Ocean, Southern (Antarctic) Ocean, and Arctic Ocean. Smaller regions of the oceans are called seas, gulfs, bays, straits, and other terms. Geologically, an ocean is an area of oceanic crust covered by water. See also the list of seas article for the seas included in each ocean area. Overview Though generally described as several separate oceans, the world's oceanic waters constitute one global, interconnected body of salt water sometimes referred to as the World Ocean or Global Ocean. This concept of a continuous body of water with relatively free interchange among its parts is of fundamental importance to oceanography. The major oceanic divisions are defined in part by the continents, various archipelagos, and other criteria. The principal divisions (in descending order of area) are the: Pacific Ocean, Atlantic Ocean, Indian Ocean, Southern (Antarctic) Ocean, and Arctic Ocean. Smaller regions of the oceans are called seas, gulfs, bays, straits, and other terms. Geologically, an ocean is an area of oceanic crust covered by water. Oceanic crust is the thin layer of solidified volcanic basalt that covers the Earth's mantle. Continental crust is thicker but less dense. From this perspective, the Earth has three oceans: the World Ocean, the Caspian Sea, and the Black Sea. The latter two were formed by the collision of Cimmeria with Laurasia. The Mediterranean Sea is at times a discrete ocean because tectonic plate movement has repeatedly broken its connection to the World Ocean through the Strait of Gibraltar. 
The Black Sea is connected to the Mediterranean through the Bosporus, but the Bosporus is a natural canal cut through continental rock some 7,000 years ago, rather than a piece of oceanic sea floo Document 1::: Between 1901 and 2018, the average global sea level rose by , or an average of 1–2 mm per year. This rate accelerated to 4.62 mm/yr for the decade 2013–2022. Climate change due to human activities is the main cause. Between 1993 and 2018, thermal expansion of water accounted for 42% of sea level rise. Melting temperate glaciers accounted for 21%, with Greenland accounting for 15% and Antarctica 8%. Sea level rise lags changes in the Earth's temperature. So sea level rise will continue to accelerate between now and 2050 in response to warming that is already happening. What happens after that will depend on what happens with human greenhouse gas emissions. Sea level rise may slow down between 2050 and 2100 if there are deep cuts in emissions. It could then reach a little over from now by 2100. With high emissions it may accelerate. It could rise by or even by then. In the long run, sea level rise would amount to over the next 2000 years if warming amounts to . It would be if warming peaks at . Rising seas ultimately impact every coastal and island population on Earth. This can be through flooding, higher storm surges, king tides, and tsunamis. These have many knock-on effects. They lead to loss of coastal ecosystems like mangroves. Crop production falls because of salinization of irrigation water and damage to ports disrupts sea trade. The sea level rise projected by 2050 will expose places currently inhabited by tens of millions of people to annual flooding. Without a sharp reduction in greenhouse gas emissions, this may increase to hundreds of millions in the latter decades of the century. Areas not directly exposed to rising sea levels could be affected by large scale migrations and economic disruption. 
At the same time, local factors like tidal range or land subsidence, as well as the varying resilience and adaptive capacity of individual ecosystems, sectors, and countries will greatly affect the severity of impacts. For instance, sea level rise along the Document 2::: Oceanic freshwater fluxes are defined as the transport of non saline water between the oceans and the other components of the Earth's system (the lands, the atmosphere and the cryosphere). These fluxes have an impact on the local ocean properties (on sea surface salinity, temperature and elevation), as well as on the large scale circulation patterns (such as the thermohaline circulation). Introduction Freshwater fluxes in general describe how freshwater is transported between and stored in the earth's systems: oceans, land, the atmosphere and the cryosphere. While the total amount of water on Earth has remained virtually constant over human timescales, the relative distribution of that total mass between the four reservoirs has been influenced by past climate states, such as glacial cycles. Since the oceans account for 71% of the Earth's surface area, 86% of evaporation (E) and 78% of precipitation (P) occur over the ocean, the oceanic freshwater fluxes represent a large part of the world's freshwater fluxes. There are five major freshwater fluxes into and out of the ocean, namely: Precipitation Evaporation Riverine discharge Ice freezing or melting (Sea ice freezing or melting, ice shelf melting, iceberg melting) Groundwater discharge whereby the 1., 3. and 5. are all inputs, adding freshwater to the ocean, while 2. is an output, i.e. a negative freshwater flux and 4. can be either a freshwater loss (freezing) or gain (melting). The quantity and the spatial distribution of those fluxes determine the ocean salinity (the salt concentration of the ocean water). 
A positive freshwater flux leads to mixing of water with low to zero salinity with the salty ocean water, resulting in a decrease of the water salinity. This is for example the case in regions where precipitation is greater than evaporation. On the contrary, if evaporation gets greater than precipitation, the ocean salinity increases, since only water (H2O) evaporates, but not the ions (e.g. Na+, Cl−) Document 3::: The potential temperature of a parcel of fluid at pressure P is the temperature that the parcel would attain if adiabatically brought to a standard reference pressure P0, usually 1,000 hPa. The potential temperature is denoted θ and, for a gas well-approximated as ideal, is given by θ = T (P0/P)^(R/cp), where T is the current absolute temperature (in K) of the parcel, R is the gas constant of air, and cp is the specific heat capacity at a constant pressure. R/cp = 0.286 for air (meteorology). The reference point for potential temperature in the ocean is usually at the ocean's surface which has a water pressure of 0 dbar. The potential temperature in the ocean doesn't account for the varying heat capacities of seawater, therefore it is not a conservative measure of heat content. Graphical representation of potential temperature will always be less than the actual temperature line in a temperature vs depth graph. Contexts The concept of potential temperature applies to any stratified fluid. It is most frequently used in the atmospheric sciences and oceanography. The reason that it is used in both fields is that changes in pressure can result in warmer fluid residing under colder fluid – examples being dropping air temperature with altitude and increasing water temperature with depth in very deep ocean trenches and within the ocean mixed layer. When the potential temperature is used instead, these apparently unstable conditions vanish as a parcel of fluid is invariant along its isolines. 
In the oceans, the potential temperature referenced to the surface will be slightly less than the in-situ temperature (the temperature that a water volume has at the specific depth that the instrument measured it in) since the expansion due to reduction in pressure leads to cooling. The numeric difference between the in situ and potential temperature is almost always less than 1.5 degrees Celsius. However, it's important to use potential temperature when comparing temperatures of water from very different depths. Comments Pot Document 4::: Physical oceanography is the study of physical conditions and physical processes within the ocean, especially the motions and physical properties of ocean waters. Physical oceanography is one of several sub-domains into which oceanography is divided. Others include biological, chemical and geological oceanography. Physical oceanography may be subdivided into descriptive and dynamical physical oceanography. Descriptive physical oceanography seeks to research the ocean through observations and complex numerical models, which describe the fluid motions as precisely as possible. Dynamical physical oceanography focuses primarily upon the processes that govern the motion of fluids with emphasis upon theoretical research and numerical models. These are part of the large field of Geophysical Fluid Dynamics (GFD) that is shared together with meteorology. GFD is a sub field of Fluid dynamics describing flows occurring on spatial and temporal scales that are greatly influenced by the Coriolis force. Physical setting Roughly 97% of the planet's water is in its oceans, and the oceans are the source of the vast majority of water vapor that condenses in the atmosphere and falls as rain or snow on the continents. The tremendous heat capacity of the oceans moderates the planet's climate, and its absorption of various gases affects the composition of the atmosphere. 
The ocean's influence extends even to the composition of volcanic rocks through seafloor metamorphism, as well as to that of volcanic gases and magmas created at subduction zones. From sea level, the oceans are far deeper than the continents are tall; examination of the Earth's hypsographic curve shows that the average elevation of Earth's landmasses is only , while the ocean's average depth is . Though this apparent discrepancy is great, for both land and sea, the respective extremes such as mountains and trenches are rare. Temperature, salinity and density Because the vast majority of the world ocean's volume The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Because water can hold a large amount of heat, which effect do oceans have on nearby land areas? A. They prevent rapid extreme temperature changes. B. They form high-pressure areas that cause magma currents. C. They provide the energy that triggers volcanic events. D. They lower the freezing point of fresh water. Answer:
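The moderating effect described above ("the tremendous heat capacity of the oceans moderates the planet's climate") follows from ΔT = Q/(m·c): for the same heat input, water warms far less than rock or soil. A toy sketch; the specific-heat values are standard textbook figures assumed here, not taken from the passages:

```python
# Temperature change for a given heat input: dT = Q / (m * c).
# Water's high specific heat gives a much smaller swing than dry rock/soil,
# which is why coastal areas avoid rapid extreme temperature changes.
SPECIFIC_HEAT = {  # J/(kg*K), typical textbook values
    "water": 4186.0,
    "rock": 800.0,
}

def temperature_change(heat_joules: float, mass_kg: float, material: str) -> float:
    """Temperature rise (K) of a mass of material absorbing the given heat."""
    return heat_joules / (mass_kg * SPECIFIC_HEAT[material])

q = 1.0e6  # J of solar heating into 100 kg of each material
for material in SPECIFIC_HEAT:
    print(f"{material}: dT = {temperature_change(q, 100.0, material):.2f} K")
```

With these figures the rock warms roughly five times as much as the water for the same heat input, illustrating answer A.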
sciq-11441
multiple_choice
In which phase do the sister chromatids separate?
[ "latent phase", "passivation", "prophase", "anaphase" ]
D
Relevant Documents: Document 0::: Chromosome segregation is the process in eukaryotes by which two sister chromatids formed as a consequence of DNA replication, or paired homologous chromosomes, separate from each other and migrate to opposite poles of the nucleus. This segregation process occurs during both mitosis and meiosis. Chromosome segregation also occurs in prokaryotes. However, in contrast to eukaryotic chromosome segregation, replication and segregation are not temporally separated. Instead segregation occurs progressively following replication. Mitotic chromatid segregation During mitosis chromosome segregation occurs routinely as a step in cell division (see mitosis diagram). As indicated in the mitosis diagram, mitosis is preceded by a round of DNA replication, so that each chromosome forms two copies called chromatids. These chromatids separate to opposite poles, a process facilitated by a protein complex referred to as cohesin. Upon proper segregation, a complete set of chromatids ends up in each of two nuclei, and when cell division is completed, each DNA copy previously referred to as a chromatid is now called a chromosome. Meiotic chromosome and chromatid segregation Chromosome segregation occurs at two separate stages during meiosis called anaphase I and anaphase II (see meiosis diagram). In a diploid cell there are two sets of homologous chromosomes of different parental origin (e.g. a paternal and a maternal set). During the phase of meiosis labeled “interphase s” in the meiosis diagram there is a round of DNA replication, so that each of the chromosomes initially present is now composed of two copies called chromatids. These chromosomes (paired chromatids) then pair with the homologous chromosome (also paired chromatids) present in the same nucleus (see prophase I in the meiosis diagram). The process of alignment of paired homologous chromosomes is called synapsis (see Synapsis). During synapsis, genetic recombination usually occurs. 
Some of the recombination even Document 1::: A kinetochore (, ) is a disc-shaped protein structure associated with duplicated chromatids in eukaryotic cells where the spindle fibers attach during cell division to pull sister chromatids apart. The kinetochore assembles on the centromere and links the chromosome to microtubule polymers from the mitotic spindle during mitosis and meiosis. The term kinetochore was first used in a footnote in a 1934 Cytology book by Lester W. Sharp and commonly accepted in 1936. Sharp's footnote reads: "The convenient term kinetochore (= movement place) has been suggested to the author by J. A. Moore", likely referring to John Alexander Moore who had joined Columbia University as a freshman in 1932. Monocentric organisms, including vertebrates, fungi, and most plants, have a single centromeric region on each chromosome which assembles a single, localized kinetochore. Holocentric organisms, such as nematodes and some plants, assemble a kinetochore along the entire length of a chromosome. Kinetochores start, control, and supervise the striking movements of chromosomes during cell division. During mitosis, which occurs after the amount of DNA is doubled in each chromosome (while maintaining the same number of chromosomes) in S phase, two sister chromatids are held together by a centromere. Each chromatid has its own kinetochore, which face in opposite directions and attach to opposite poles of the mitotic spindle apparatus. Following the transition from metaphase to anaphase, the sister chromatids separate from each other, and the individual kinetochores on each chromatid drive their movement to the spindle poles that will define the two new daughter cells. The kinetochore is therefore essential for the chromosome segregation that is classically associated with mitosis and meiosis. 
Structure of Kinetochore The kinetochore contains two regions: an inner kinetochore, which is tightly associated with the centromere DNA and assembled in a specialized form of chromatin that persists t Document 2::: The tetrad is the four spores produced after meiosis of a yeast or other Ascomycota, Chlamydomonas or other alga, or a plant. After parent haploids mate, they produce diploids. Under appropriate environmental conditions, diploids sporulate and undergo meiosis. The meiotic products, spores, remain packaged in the parental cell body to produce the tetrad. Genetic typification If the two parents have a mutation in two different genes, the tetrad can segregate these genes as the parental ditype (PD), the non-parental ditype (NPD) or as the tetratype (TT). Parental ditype is a tetrad type containing two different genotypes, both of which are parental. A spore arrangement in ascomycetes that contains only the two non-recombinant-type ascospores. Non-parental ditype (NPD) is a spore that contains only the two recombinant-type ascospores (assuming two segregating loci). A tetrad type containing two different genotypes, both of which are recombinant. Tetratype is a tetrad containing four different genotypes, two parental and two recombinant. A spore arrangement in ascomycetes that consists of two parental and two recombinant spores indicates a single crossover between two linked loci. Linkage analysis The ratio between the different segregation types arising after the sporulation is a measure of the linkage between the two genes. Tetrad dissection Tetrad dissection has become a powerful tool of yeast geneticists, and is used in conjunction with the many established procedures utilizing the versatility of yeasts as model organisms. Use of modern microscopy and micromanipulation techniques allows the four haploid spores of a yeast tetrad to be separated and germinated individually to form isolated spore colonies. 
Uses Tetrad analysis can be used to confirm whether a phenotype is caused by a specific mutation, construction of strains, and for investigating gene interaction. Since the frequency of tetrad segregation types is influenced by the recombination frequency for Document 3::: Sister chromatid cohesion refers to the process by which sister chromatids are paired and held together during certain phases of the cell cycle. Establishment of sister chromatid cohesion is the process by which chromatin-associated cohesin protein becomes competent to physically bind together the sister chromatids. In general, cohesion is established during S phase as DNA is replicated, and is lost when chromosomes segregate during mitosis and meiosis. Some studies have suggested that cohesion aids in aligning the kinetochores during mitosis by forcing the kinetochores to face opposite cell poles. Cohesin loading Cohesin first associates with the chromosomes during G1 phase. The cohesin ring is composed of two SMC (structural maintenance of chromosomes) proteins and two additional Scc proteins. Cohesin may originally interact with chromosomes via the ATPase domains of the SMC proteins. In yeast, the loading of cohesin on the chromosomes depends on proteins Scc2 and Scc4. Cohesin interacts with the chromatin at specific loci. High levels of cohesin binding are observed at the centromere. Cohesin is also loaded at cohesin attachment regions (CARs) along the length of the chromosomes. CARs are approximately 500-800 base pair regions spaced at approximately 9 kilobase intervals along the chromosomes. In yeast, CARs tend to be rich in adenine-thymine base pairs. CARs are independent of origins of replication. Establishment of cohesion Establishment of cohesion refers to the process by which chromatin-associated cohesin becomes cohesion-competent. Chromatin association of cohesin is not sufficient for cohesion. 
Cohesin must undergo subsequent modification ("establishment") to be capable of physically holding the sister chromosomes together. Though cohesin can associate with chromatin earlier in the cell cycle, cohesion is established during S phase. Early data suggesting that S phase is crucial to cohesion was based on the fact that after S phase, sister chromatids Document 4::: Nondisjunction is the failure of homologous chromosomes or sister chromatids to separate properly during cell division (mitosis/meiosis). There are three forms of nondisjunction: failure of a pair of homologous chromosomes to separate in meiosis I, failure of sister chromatids to separate during meiosis II, and failure of sister chromatids to separate during mitosis. Nondisjunction results in daughter cells with abnormal chromosome numbers (aneuploidy). Calvin Bridges and Thomas Hunt Morgan are credited with discovering nondisjunction in Drosophila melanogaster sex chromosomes in the spring of 1910, while working in the Zoological Laboratory of Columbia University. Types In general, nondisjunction can occur in any form of cell division that involves ordered distribution of chromosomal material. Higher animals have three distinct forms of such cell divisions: Meiosis I and meiosis II are specialized forms of cell division occurring during generation of gametes (eggs and sperm) for sexual reproduction, mitosis is the form of cell division used by all other cells of the body. Meiosis II Ovulated eggs become arrested in metaphase II until fertilization triggers the second meiotic division. Similar to the segregation events of mitosis, the pairs of sister chromatids resulting from the separation of bivalents in meiosis I are further separated in anaphase of meiosis II. In oocytes, one sister chromatid is segregated into the second polar body, while the other stays inside the egg. 
During spermatogenesis, each meiotic division is symmetric such that each primary spermatocyte gives rise to 2 secondary spermatocytes after meiosis I, and eventually 4 spermatids after meiosis II. Meiosis II-nondisjunction may also result in aneuploidy syndromes, but only to a much smaller extent than do segregation failures in meiosis I. Mitosis Division of somatic cells through mitosis is preceded by replication of the genetic material in S phase. As a result, each chromosome consists The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. In which phase do the sister chromatids separate? A. latent phase B. passivation C. prophase D. anaphase Answer:
sciq-5150
multiple_choice
What is the name of the major sex hormone in females?
[ "glucose", "estrogen", "testosterone", "insulin" ]
B
Relevant Documents: Document 0::: The following is a list of hormones found in Homo sapiens. Spelling is not uniform for many hormones. For example, current North American and international usage uses estrogen and gonadotropin, while British usage retains the Greek digraph in oestrogen and favours the earlier spelling gonadotrophin. Hormone listing Steroid Document 1::: Reproductive biology includes both sexual and asexual reproduction. Reproductive biology includes a wide number of fields: Reproductive systems Endocrinology Sexual development (Puberty) Sexual maturity Reproduction Fertility Human reproductive biology Endocrinology Human reproductive biology is primarily controlled through hormones, which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands, and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands. Reproductive systems Internal and external organs are included in the reproductive system. There are two reproductive systems including the male and female, which contain different organs from one another. These systems work together in order to produce offspring. Female reproductive system The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth. These structures include: Ovaries Oviducts Uterus Vagina Mammary Glands Estrogen is one of the sexual reproductive hormones that aid in the sexual reproductive system of the female. Male reproductive system The male reproductive system includes testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia. Testosterone, an androgen, although present in both males and females, is relatively more abundant in males.
Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system However, the enzyme aromatase is present in testes and capable of synthesizing estrogens from androgens. Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract. Animal Reproductive Biology Animal reproduction oc Document 2::: Membrane estrogen receptors (mERs) are a group of receptors which bind estrogen. Unlike the estrogen receptor (ER), a nuclear receptor which mediates its effects via genomic mechanisms, mERs are cell surface receptors which rapidly alter cell signaling via modulation of intracellular signaling cascades. Putative mERs include membrane-associated ERα (mERα) and ERβ (mERβ), GPER (GPR30), GPRC6A, ER-X, ERx and Gq-mER. The mERs have been reviewed. See also Membrane steroid receptor Document 3::: The Vandenbergh effect is a phenomenon reported by J.G. Vandenbergh et al. in 1975, in which an early induction of the first estrous cycle in prepubertal female mice occurs as a result of exposure to the pheromone-laden urine of a sexually mature (dominant) male mouse. Physiologically, the exposure to male urine induces the release of GnRH, which provokes the first estrus. The Vandenbergh effect has also been seen with exposure to adult female mice. When an immature female mouse is exposed to the urine of mature female mouse, estrus is delayed in the prepubertal female. In this situation, GnRH is inhibited and therefore delays puberty in the juvenile female mouse. The Vandenbergh effect is caused by pheromones found in a male's urine. The male does not have to be present for this effect to take place; the urine alone is sufficient. These pheromones are detected by the vomeronasal organ in the septum of the female's nose. 
This occurs because the female body will only take the step to begin puberty if there are available mates around. She will not waste energy on puberty if there is no possibility of finding a mate. In addition to GnRH, exogenous estradiol has recently implicated as having a role in the Vandenbergh effect. Utilizing tritium-labeled estradiol implanted in male mice, researchers have been able to trace the pathways the estradiol takes once transmitted to a female. The estradiol was found in a multitude of regions within the females and appeared to enter her circulation nasally and through the skin. Their findings suggested that some aspects of the Vandenbergh effect as well as the Bruce effect may be related to exogenous estradiol from males. Additional studies have looked into the validity of estradiol's role in the Vandenbergh effect by means of exogenous estradiol placed in castrated rats. Castrated males were injected with either a control (oil) or estradiol in the oil vehicle. As expected, urinary androgens in the castrated males were below no Document 4::: The gonadotropin receptors are a group of receptors that bind a group of pituitary hormones called gonadotropins. They include the: Follicle-stimulating hormone receptor (FSHR) - binds follicle-stimulating hormone (FSH) Luteinizing hormone receptor (LHR) - binds luteinizing hormone (LH) and human chorionic gonadotropin (hCG) See also GnRH receptor Sex hormone receptor G protein-coupled receptors Gonadotropin-releasing hormone and gonadotropins Signal transduction Cell signaling The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the name of the major sex hormone in females? A. glucose B. estrogen C. testosterone D. insulin Answer:
sciq-2274
multiple_choice
What type of habitat do ferns need to grow?
[ "cold", "moist", "dry", "elevated" ]
B
Relevant Documents: Document 0::: Ceratopteris richardii is a fern species belonging to the genus Ceratopteris, one of only two genera of the subfamily Parkerioideae of the family Pteridaceae. It is one of several genera of ferns adapted to an aquatic existence. C. richardii was previously regarded as being part of the species Ceratopteris thalictroides. "C-Fern" This particular species is of special scientific interest because a patented strain, called "C-Fern", was developed as a scientific aid and teaching tool in biology in 1995. The use of "C-Fern" is facilitated by the fact that it grows readily in a cell-culture dish on agar media, reaching sexual maturity within 2–3 weeks of spore inoculation, with motile sperm cells being visible at this time. Over the course of about 6 weeks germination, sex determination and development of gametophytes, fertilization, embryogenesis, organogenesis, and sporophyte growth can all be observed, allowing an incredibly comprehensive study of the life cycle of homosporous ferns in a relatively short time period. In addition, due to the small size of the plant many specimens can be observed growing simultaneously, allowing for larger sample sizes in research studies. Following the culture of "C-Fern" in dishes it can be transplanted to a dirt substrate, where it can be further allowed to grow and future generations can be used for subsequent studies. Monilophytes are generally studied far less than other groups of plants and a full genome sequence is not yet available, however due to the development of "C-Fern" research into fern biology has been more prevalent and C. richardii has been used as a model organism to study vascular plant cell walls, alternation of generations (and associated mutations), genetics, population dynamics, and the effects of mitotic disrupter herbicides, among other topics.
Despite being genetically identical the inoculated spores can give rise to both hermaphrodites and male gametophytes, depending on the secretion of antheridiogen; Document 1::: A Hygrophyte (Greek hygros = wet + phyton = plant) is a plant living above ground that is adapted to the conditions of abundant moisture pads of surrounding air. These plants inhabit mainly wet and dark forests and islands darkened swamp and very humid and floody meadows. Within the group of all types of terrestrial plants, they are at least resistant to drought. According to the environmental attributes are a group of plants between categories hydrophytes (aquatic plants) and mesophytes (plants in moderate environmental conditions) Plants living in the or moist habitats typically lack xeromorphic features. Examples of hygrophyte's genera Adoxa; Agrostis; Bidens; Caltha; Cardamine; Carex; Catabrosa; Chelidonium; Circea; Cyperus; Drosera; Equisetum; Galium; Glyceria; Hymenophyllum; Juncus; Lythrum: Oxalis, etc. See also Hydrophyte Mesophyte Xerophyte Document 2::: The tree ferns are arborescent (tree-like) ferns that grow with a trunk elevating the fronds above ground level, making them trees. Many extant tree ferns are members of the order Cyatheales, to which belong the families Cyatheaceae (scaly tree ferns), Dicksoniaceae, Metaxyaceae, and Cibotiaceae. It is estimated that Cyatheales originated in the early Jurassic, and is the third group of ferns known to have given rise to tree-like forms. The others are the extinct Tempskya of uncertain position, and Osmundales where the extinct Guaireaceae and some members of Osmundaceae also grew into trees. In addition there were the Psaroniaceae and Tietea in the Marattiales, which is the sister group to most living ferns including Cyatheales. Other ferns which are also tree ferns, are Leptopteris and Todea in the family Osmundaceae, which can achieve short trunks under a metre tall. 
Fern species with short trunks in the genera Blechnum, Cystodium and Sadleria from the order Polypodiales, and smaller members of Cyatheales like Calochlaena, Cnemedaria, Culcita (mountains only tree fern), Lophosoria and Thyrsopteris are also considered tree ferns. Range Tree ferns are found growing in tropical and subtropical areas worldwide, as well as cool to temperate rainforests in Australia, New Zealand and neighbouring regions (e.g. Lord Howe Island, etc.). Like all ferns, tree ferns reproduce by means of spores formed on the undersides of the fronds. Description The fronds of tree ferns are usually very large and multiple-pinnate. Their trunk is actually a vertical and modified rhizome, and woody tissue is absent. To add strength, there are deposits of lignin in the cell walls and the lower part of the stem is reinforced with thick, interlocking mats of tiny roots. If the crown of Dicksonia antarctica (the most common species in gardens) is damaged, it will inevitably die because that is where all the new growth occurs. But other clump-forming tree fern species, such as D. squarrosa and D Document 3::: The ferns (Polypodiopsida or Polypodiophyta) are a group of vascular plants (plants with xylem and phloem) that reproduce via spores and have neither seeds nor flowers. They differ from mosses by being vascular, i.e., having specialized tissues that conduct water and nutrients and in having life cycles in which the branched sporophyte is the dominant phase. Ferns have complex leaves called megaphylls, that are more complex than the microphylls of clubmosses. Most ferns are leptosporangiate ferns. They produce coiled fiddleheads that uncoil and expand into fronds. The group includes about 10,560 known extant species. 
Ferns are defined here in the broad sense, being all of the Polypodiopsida, comprising both the leptosporangiate (Polypodiidae) and eusporangiate ferns, the latter group including horsetails, whisk ferns, marattioid ferns, and ophioglossoid ferns. Ferns first appear in the fossil record about 360 million years ago in the late Devonian period, but Polypodiales, the group that makes up 80% of living fern diversity, did not appear and diversify until the Cretaceous, contemporaneous with the rise of flowering plants that came to dominate the world's flora. Ferns are not of major economic importance, but some are used for food, medicine, as biofertilizer, as ornamental plants, and for remediating contaminated soil. They have been the subject of research for their ability to remove some chemical pollutants from the atmosphere. Some fern species, such as bracken (Pteridium aquilinum) and water fern (Azolla filiculoides), are significant weeds worldwide. Some fern genera, such as Azolla, can fix nitrogen and make a significant input to the nitrogen nutrition of rice paddies. They also play certain roles in folklore. Description Sporophyte Extant ferns are herbaceous perennials and most lack woody growth. When woody growth is present, it is found in the stem. Their foliage may be deciduous or evergreen, and some are semi-evergreen depending on the climate. Document 4::: A fernery is a specialized garden for the cultivation and display of ferns. In many countries, ferneries are indoors or at least sheltered or kept in a shadehouse to provide a moist environment, filtered light and protection from frost and other extremes; on the other hand, some ferns native to arid regions require protection from rain and humid conditions, and grow best in full sun. In mild climates, ferneries are often outside and have an array of different species that grow under similar conditions. In 1855, parts of England were gripped by 'pteridomania' (the fern craze). 
This term was coined by Charles Kingsley, clergyman, naturalist (and later author of The Water Babies). It involved both British and exotic varieties being collected and displayed; many associated structures were constructed and paraphernalia was used to maintain the collections. In 1859, the Fernery at Tatton Park Gardens beside Tatton Hall had been built to a design by George Stokes, Joseph Paxton's assistant and son-in-law, to the west of the conservatory to house tree ferns from New Zealand and a collection of other ferns. The Fernery was also seen in the TV miniseries Brideshead Revisited. In 1874, the fernery in Benmore Botanic Garden (part of the Royal Botanic Garden Edinburgh) was built by James Duncan (a plant collector and sugar refiner). This was a large and expensive project since the fernery was based in a heated conservatory. In 1992, it was listed Historic Scotland for its architectural and botanical value and has been described by the Royal Commission on the Ancient and Historical Monuments of Scotland as "extremely rare and unique in its design". In 1903, Hever Castle in Kent was acquired and restored by the American millionaire William Waldorf Astor who used it as a family residence. He added the Italian Garden (including a fernery) to display his collection of statuary and ornaments. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of habitat do ferns need to grow? A. cold B. moist C. dry D. elevated Answer:
sciq-660
multiple_choice
Bacterial STIs can be cured with what?
[ "pesticides", "antioxidants", "antibiotics", "antiviral drugs" ]
C
Relevant Documents: Document 0::: MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States. Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students; the remainder require a subscription. The service is suspended with the message: "Please check back with us in 2017". External links MicrobeLibrary Microbiology Document 1::: The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work. History It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council. Function Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres. STEM ambassadors To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell. Funding STEMNET used to receive funding from the Department for Education and Skills.
Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments. See also The WISE Campaign Engineering and Physical Sciences Research Council National Centre for Excellence in Teaching Mathematics Association for Science Education Glossary of areas of mathematics Glossary of astronomy Glossary of biology Glossary of chemistry Glossary of engineering Glossary of physics Document 2::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. 
The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New Document 3::: Sexually transmitted infections (STIs), also referred to as sexually transmitted diseases (STDs), are infections that are commonly spread by sexual activity, especially vaginal intercourse, anal sex and oral sex. The most prevalent STIs may be carried by a significant fraction of the human population. Document 4::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. 
Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Bacterial stis can be cured with what? A. pesticides B. antioxidants C. antibiotics D. antiviral drugs Answer:
sciq-6774
multiple_choice
What kinds of waves are composed of various oscillating electric and magnetic fields?
[ "elastic", "tidal", "electromagnetic", "seismic" ]
C
Relevant Documents: Document 0::: In physics, a transverse wave is a wave that oscillates perpendicularly to the direction of the wave's advance. In contrast, a longitudinal wave travels in the direction of its oscillations. All waves move energy from place to place without transporting the matter in the transmission medium if there is one. Electromagnetic waves are transverse without requiring a medium. The designation “transverse” indicates the direction of the wave is perpendicular to the displacement of the particles of the medium through which it passes, or in the case of EM waves, the oscillation is perpendicular to the direction of the wave. A simple example is given by the waves that can be created on a horizontal length of string by anchoring one end and moving the other end up and down. Another example is the waves that are created on the membrane of a drum. The waves propagate in directions that are parallel to the membrane plane, but each point in the membrane itself gets displaced up and down, perpendicular to that plane. Light is another example of a transverse wave, where the oscillations are the electric and magnetic fields, which point at right angles to the ideal light rays that describe the direction of propagation. Transverse waves commonly occur in elastic solids due to the shear stress generated; the oscillations in this case are the displacement of the solid particles away from their relaxed position, in directions perpendicular to the propagation of the wave. These displacements correspond to a local shear deformation of the material. Hence a transverse wave of this nature is called a shear wave. Since fluids cannot resist shear forces while at rest, propagation of transverse waves inside the bulk of fluids is not possible. In seismology, shear waves are also called secondary waves or S-waves. Transverse waves are contrasted with longitudinal waves, where the oscillations occur in the direction of the wave.
The standard example of a longitudinal wave is a sound wave or " Document 1::: In physics, a mechanical wave is a wave that is an oscillation of matter, and therefore transfers energy through a medium. While waves can move over long distances, the movement of the medium of transmission—the material—is limited. Therefore, the oscillating material does not move far from its initial equilibrium position. Mechanical waves can be produced only in media which possess elasticity and inertia. There are three types of mechanical waves: transverse waves, longitudinal waves, and surface waves. Some of the most common examples of mechanical waves are water waves, sound waves, and seismic waves. Like all waves, mechanical waves transport energy. This energy propagates in the same direction as the wave. A wave requires an initial energy input; once this initial energy is added, the wave travels through the medium until all its energy is transferred. In contrast, electromagnetic waves require no medium, but can still travel through one. Transverse wave A transverse wave is the form of a wave in which particles of medium vibrate about their mean position perpendicular to the direction of the motion of the wave. To see an example, move an end of a Slinky (whose other end is fixed) to the left-and-right of the Slinky, as opposed to to-and-fro. Light also has properties of a transverse wave, although it is an electromagnetic wave. Longitudinal wave Longitudinal waves cause the medium to vibrate parallel to the direction of the wave. It consists of multiple compressions and rarefactions. The rarefaction is the farthest distance apart in the longitudinal wave and the compression is the closest distance together. The speed of the longitudinal wave is increased in higher index of refraction, due to the closer proximity of the atoms in the medium that is being compressed. Sound is a longitudinal wave. 
Surface waves This type of wave travels along the surface or interface between two media. An example of a surface wave would be waves in a pool, or in an ocean Document 2::: This is a list of wave topics. 0–9 21 cm line A Abbe prism Absorption spectroscopy Absorption spectrum Absorption wavemeter Acoustic wave Acoustic wave equation Acoustics Acousto-optic effect Acousto-optic modulator Acousto-optics Airy disc Airy wave theory Alfvén wave Alpha waves Amphidromic point Amplitude Amplitude modulation Animal echolocation Antarctic Circumpolar Wave Antiphase Aquamarine Power Arrayed waveguide grating Artificial wave Atmospheric diffraction Atmospheric wave Atmospheric waveguide Atom laser Atomic clock Atomic mirror Audience wave Autowave Averaged Lagrangian B Babinet's principle Backward wave oscillator Bandwidth-limited pulse beat Berry phase Bessel beam Beta wave Black hole Blazar Bloch's theorem Blueshift Boussinesq approximation (water waves) Bow wave Bragg diffraction Bragg's law Breaking wave Bremsstrahlung, Electromagnetic radiation Brillouin scattering Bullet bow shockwave Burgers' equation Business cycle C Capillary wave Carrier wave Cherenkov radiation Chirp Ernst Chladni Circular polarization Clapotis Closed waveguide Cnoidal wave Coherence (physics) Coherence length Coherence time Cold wave Collimated light Collimator Compton effect Comparison of analog and digital recording Computation of radiowave attenuation in the atmosphere Continuous phase modulation Continuous wave Convective heat transfer Coriolis frequency Coronal mass ejection Cosmic microwave background radiation Coulomb wave function Cutoff frequency Cutoff wavelength Cymatics D Damped wave Decollimation Delta wave Dielectric waveguide Diffraction Direction finding Dispersion (optics) Dispersion (water waves) Dispersion relation Dominant wavelength Doppler effect Doppler radar Douglas Sea Scale Draupner wave Droplet-shaped wave Duhamel's principle E E-skip Earthquake Echo (phenomenon) Echo 
sounding Echolocation (animal) Echolocation (human) Eddy (fluid dynamics) Edge wave Eikonal equation Ekman layer Ekman spiral Ekman transport El Niño–Southern Oscillation El Document 3::: Inertial waves, also known as inertial oscillations, are a type of mechanical wave possible in rotating fluids. Unlike surface gravity waves commonly seen at the beach or in the bathtub, inertial waves flow through the interior of the fluid, not at the surface. Like any other kind of wave, an inertial wave is caused by a restoring force and characterized by its wavelength and frequency. Because the restoring force for inertial waves is the Coriolis force, their wavelengths and frequencies are related in a peculiar way. Inertial waves are transverse. Most commonly they are observed in atmospheres, oceans, lakes, and laboratory experiments. Rossby waves, geostrophic currents, and geostrophic winds are examples of inertial waves. Inertial waves are also likely to exist in the molten core of the rotating Earth. Restoring force Inertial waves are restored to equilibrium by the Coriolis force, a result of rotation. To be precise, the Coriolis force arises (along with the centrifugal force) in a rotating frame to account for the fact that such a frame is always accelerating. Inertial waves, therefore, cannot exist without rotation. More complicated than tension on a string, the Coriolis force acts at a 90° angle to the direction of motion, and its strength depends on the rotation rate of the fluid. These two properties lead to the peculiar characteristics of inertial waves. Characteristics Inertial waves are possible only when a fluid is rotating, and exist in the bulk of the fluid, not at its surface. Like light waves, inertial waves are transverse, which means that their vibrations occur perpendicular to the direction of wave travel. 
One peculiar geometrical characteristic of inertial waves is that their phase velocity, which describes the movement of the crests and troughs of the wave, is perpendicular to their group velocity, which is a measure of the propagation of energy. Whereas a sound wave or an electromagnetic wave of any frequency is possible, inertial wa Document 4::: A wavenumber–frequency diagram is a plot displaying the relationship between the wavenumber (spatial frequency) and the frequency (temporal frequency) of certain phenomena. Usually frequencies are placed on the vertical axis, while wavenumbers are placed on the horizontal axis. In the atmospheric sciences, these plots are a common way to visualize atmospheric waves. In the geosciences, especially seismic data analysis, these plots are also called f–k plots, in which energy density within a given time interval is contoured on a frequency-versus-wavenumber basis. They are used to examine the direction and apparent velocity of seismic waves and in velocity filter design. Origins In general, the relationship between wavelength λ, frequency ν, and the phase velocity vp of a sinusoidal wave is: vp = νλ. Using the wavenumber (k = 2π/λ) and angular frequency (ω = 2πν) notation, the previous equation can be rewritten as vp = ω/k. On the other hand, the group velocity is equal to the slope of the wavenumber–frequency diagram: vg = dω/dk. Analyzing such relationships in detail often yields information on the physical properties of the medium, such as density, composition, etc. See also Dispersion relation The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What kinds of waves are composed of various oscillating electric and magnetic fields? A. elastic B. tidal C. electromagnetic D. seismic Answer:
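The wavenumber–frequency passage above relates the phase velocity to ω/k and the group velocity to the slope dω/dk of the diagram. A minimal numerical sketch follows; the deep-water dispersion relation ω = √(gk) is an illustrative assumption, not taken from the source:

```python
import math

def phase_velocity(omega, k):
    # v_p = omega / k: speed at which individual crests and troughs move
    return omega / k

def group_velocity(dispersion, k, dk=1e-6):
    # v_g = d(omega)/dk: slope of the wavenumber-frequency diagram,
    # approximated here with a central finite difference
    return (dispersion(k + dk) - dispersion(k - dk)) / (2 * dk)

# Illustrative dispersion relation: deep-water gravity waves, omega = sqrt(g*k)
g = 9.81
deep_water = lambda k: math.sqrt(g * k)

k = 0.5  # wavenumber in rad/m
vp = phase_velocity(deep_water(k), k)
vg = group_velocity(deep_water, k)
# For this particular dispersion relation, v_g works out to half of v_p,
# a standard check that the slope and the ratio omega/k differ in general.
```

Any other dispersion relation can be dropped in for `deep_water`; for a non-dispersive medium (ω = ck) the two velocities coincide.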
sciq-2527
multiple_choice
What is the measure of change in velocity of a moving object?
[ "acceleration", "kinetic energy", "vibration", "transmission" ]
A
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature increases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Velocity is the speed in combination with the direction of motion of an object. Velocity is a fundamental concept in kinematics, the branch of classical mechanics that describes the motion of bodies. Velocity is a physical vector quantity: both magnitude and direction are needed to define it. The scalar absolute value (magnitude) of velocity is called speed, being a coherent derived unit whose quantity is measured in the SI (metric system) as metres per second (m/s or m⋅s−1). For example, "5 metres per second" is a scalar, whereas "5 metres per second east" is a vector. If there is a change in speed, direction or both, then the object is said to be undergoing an acceleration. Constant velocity vs acceleration To have a constant velocity, an object must have a constant speed in a constant direction. Constant direction constrains the object to motion in a straight path; thus, a constant velocity means motion in a straight line at a constant speed. For example, a car moving at a constant 20 kilometres per hour in a circular path has a constant speed, but does not have a constant velocity because its direction changes. Hence, the car is considered to be undergoing an acceleration. Difference between speed and velocity While the terms speed and velocity are often colloquially used interchangeably to connote how fast an object is moving, in scientific terms they are different. Speed, the scalar magnitude of a velocity vector, denotes only how fast an object is moving, while velocity indicates both an object's speed and direction. Equation of motion Average velocity Velocity is defined as the rate of change of position with respect to time, which may also be referred to as the instantaneous velocity to emphasize the distinction from the average velocity.
In some applications the average velocity of an object might be needed, that is to say, the constant velocity that would provide the same resultant displacement as a variable velocity in the same time interval, , over some Document 2::: Advanced Placement (AP) Physics C: Mechanics (also known as AP Mechanics) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a one-semester calculus-based university course in mechanics. The content of Physics C: Mechanics overlaps with that of AP Physics 1, but Physics 1 is algebra-based, while Physics C is calculus-based. Physics C: Mechanics may be combined with its electricity and magnetism counterpart to form a year-long course that prepares for both exams. Course content Intended to be equivalent to an introductory college course in mechanics for physics or engineering majors, the course modules are: Kinematics Newton's laws of motion Work, energy and power Systems of particles and linear momentum Circular motion and rotation Oscillations and gravitation. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a Calculus I class. This course is often compared to AP Physics 1: Algebra Based for its similar course material involving kinematics, work, motion, forces, rotation, and oscillations. However, AP Physics 1: Algebra Based lacks concepts found in Calculus I, like derivatives or integrals. This course may be combined with AP Physics C: Electricity and Magnetism to make a unified Physics C course that prepares for both exams. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. 
Registration The AP examination for AP Physics C: Mechanics is separate from the AP examination for AP Physics C: Electricity and Magnetism. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday aftern Document 3::: This is a list of topics that are included in high school physics curricula or textbooks. Mathematical Background SI Units Scalar (physics) Euclidean vector Motion graphs and derivatives Pythagorean theorem Trigonometry Motion and forces Motion Force Linear motion Linear motion Displacement Speed Velocity Acceleration Center of mass Mass Momentum Newton's laws of motion Work (physics) Free body diagram Rotational motion Angular momentum (Introduction) Angular velocity Centrifugal force Centripetal force Circular motion Tangential velocity Torque Conservation of energy and momentum Energy Conservation of energy Elastic collision Inelastic collision Inertia Moment of inertia Momentum Kinetic energy Potential energy Rotational energy Electricity and magnetism Ampère's circuital law Capacitor Coulomb's law Diode Direct current Electric charge Electric current Alternating current Electric field Electric potential energy Electron Faraday's law of induction Ion Inductor Joule heating Lenz's law Magnetic field Ohm's law Resistor Transistor Transformer Voltage Heat Entropy First law of thermodynamics Heat Heat transfer Second law of thermodynamics Temperature Thermal energy Thermodynamic cycle Volume (thermodynamics) Work (thermodynamics) Waves Wave Longitudinal wave Transverse waves Transverse wave Standing Waves Wavelength Frequency Light Light ray Speed of light Sound Speed of sound Radio waves Harmonic oscillator Hooke's law Reflection Refraction Snell's law Refractive index Total internal reflection Diffraction Interference (wave propagation) Polarization (waves) Vibrating string Doppler effect Gravity Gravitational potential Newton's 
law of universal gravitation Newtonian constant of gravitation See also Outline of physics Physics education Document 4::: Linear motion, also called rectilinear motion, is one-dimensional motion along a straight line, and can therefore be described mathematically using only one spatial dimension. The linear motion can be of two types: uniform linear motion, with constant velocity (zero acceleration); and non-uniform linear motion, with variable velocity (non-zero acceleration). The motion of a particle (a point-like object) along a line can be described by its position , which varies with (time). An example of linear motion is an athlete running a 100-meter dash along a straight track. Linear motion is the most basic of all motion. According to Newton's first law of motion, objects that do not experience any net force will continue to move in a straight line with a constant velocity until they are subjected to a net force. Under everyday circumstances, external forces such as gravity and friction can cause an object to change the direction of its motion, so that its motion cannot be described as linear. One may compare linear motion to general motion. In general motion, a particle's position and velocity are described by vectors, which have a magnitude and direction. In linear motion, the directions of all the vectors describing the system are equal and constant which means the objects move along the same axis and do not change direction. The analysis of such systems may therefore be simplified by neglecting the direction components of the vectors involved and dealing only with the magnitude. Background Displacement The motion in which all the particles of a body move through the same distance in the same time is called translatory motion. There are two types of translatory motions: rectilinear motion; curvilinear motion. Since linear motion is a motion in a single dimension, the distance traveled by an object in particular direction is the same as displacement. 
The SI unit of displacement is the metre. If x₁ is the initial position of an object and x₂ is the final position, then mat The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the measure of change in velocity of a moving object? A. acceleration B. kinetic energy C. vibration D. transmission Answer:
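The definitions in the velocity passages above (average velocity as displacement over elapsed time; acceleration as the rate of change of velocity) can be sketched as a minimal example. The function names and numbers are illustrative, not from the source:

```python
def average_velocity(x_start, x_end, t_start, t_end):
    # Average velocity: resultant displacement divided by the time interval
    return (x_end - x_start) / (t_end - t_start)

def average_acceleration(v_start, v_end, t_start, t_end):
    # Acceleration: the change in velocity per unit time
    return (v_end - v_start) / (t_end - t_start)

# An athlete runs a 100-metre dash along a straight track in 10 s:
v_avg = average_velocity(0.0, 100.0, 0.0, 10.0)   # 10.0 m/s
# The athlete's velocity rises from 0 to 10 m/s over the first 4 s:
a_avg = average_acceleration(0.0, 10.0, 0.0, 4.0)  # 2.5 m/s^2
```

Both quantities are signed: a negative result means the displacement or the velocity change points in the negative direction along the chosen axis.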
sciq-8475
multiple_choice
What's the name for an atom that has gained or lost an electron?
[ "isotope", "neutrino", "an ion", "an photon" ]
C
Relavent Documents: Document 0::: An atom is a particle that consists of a nucleus of protons and neutrons surrounded by an electromagnetically-bound cloud of electrons. The atom is the basic particle of the chemical elements, and the chemical elements are distinguished from each other by the number of protons that are in their atoms. For example, any atom that contains 11 protons is sodium, and any atom that contains 29 protons is copper. The number of neutrons defines the isotope of the element. Atoms are extremely small, typically around 100 picometers across. A human hair is about a million carbon atoms wide. This is smaller than the shortest wavelength of visible light, which means humans cannot see atoms with conventional microscopes. Atoms are so small that accurately predicting their behavior using classical physics is not possible due to quantum effects. More than 99.94% of an atom's mass is in the nucleus. Each proton has a positive electric charge, while each electron has a negative charge, and the neutrons, if any are present, have no electric charge. If the numbers of protons and electrons are equal, as they normally are, then the atom is electrically neutral. If an atom has more electrons than protons, then it has an overall negative charge, and is called a negative ion (or anion). Conversely, if it has more protons than electrons, it has a positive charge, and is called a positive ion (or cation). The electrons of an atom are attracted to the protons in an atomic nucleus by the electromagnetic force. The protons and neutrons in the nucleus are attracted to each other by the nuclear force. This force is usually stronger than the electromagnetic force that repels the positively charged protons from one another. Under certain circumstances, the repelling electromagnetic force becomes stronger than the nuclear force. In this case, the nucleus splits and leaves behind different elements. This is a form of nuclear decay. 
Atoms can attach to one or more other atoms by chemical bonds to Document 1::: Atomic physics is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus. Atomic physics typically refers to the study of atomic structure and the interaction between atoms. It is primarily concerned with the way in which electrons are arranged around the nucleus and the processes by which these arrangements change. This comprises ions, neutral atoms and, unless otherwise stated, it can be assumed that the term atom includes ions. The term atomic physics can be associated with nuclear power and nuclear weapons, due to the synonymous use of atomic and nuclear in standard English. Physicists distinguish between atomic physics—which deals with the atom as a system consisting of a nucleus and electrons—and nuclear physics, which studies nuclear reactions and special properties of atomic nuclei. As with many scientific fields, strict delineation can be highly contrived and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified. Isolated atoms Atomic physics primarily considers atoms in isolation. Atomic models will consist of a single nucleus that may be surrounded by one or more bound electrons. It is not concerned with the formation of molecules (although much of the physics is identical), nor does it examine atoms in a solid state as condensed matter. It is concerned with processes such as ionization and excitation by photons or collisions with atomic particles. While modelling atoms in isolation may not seem realistic, if one considers atoms in a gas or plasma then the time-scales for atom-atom interactions are huge in comparison to the atomic processes that are generally considered. This means that the individual atoms can be treated as if each were in isolation, as the vast majority of the time they are. 
By this consideration, atomic physics provides the underlying theory in plasma physics and atmospheric physics, even though Document 2::: Isotopes are distinct nuclear species (or nuclides, as technical term) of the same chemical element. They have the same atomic number (number of protons in their nuclei) and position in the periodic table (and hence belong to the same chemical element), but differ in nucleon numbers (mass numbers) due to different numbers of neutrons in their nuclei. While all isotopes of a given element have almost the same chemical properties, they have different atomic masses and physical properties. The term isotope is formed from the Greek roots isos (ἴσος "equal") and topos (τόπος "place"), meaning "the same place"; thus, the meaning behind the name is that different isotopes of a single element occupy the same position on the periodic table. It was coined by Scottish doctor and writer Margaret Todd in 1913 in a suggestion to the British chemist Frederick Soddy. The number of protons within the atom's nucleus is called its atomic number and is equal to the number of electrons in the neutral (non-ionized) atom. Each atomic number identifies a specific element, but not the isotope; an atom of a given element may have a wide range in its number of neutrons. The number of nucleons (both protons and neutrons) in the nucleus is the atom's mass number, and each isotope of a given element has a different mass number. For example, carbon-12, carbon-13, and carbon-14 are three isotopes of the element carbon with mass numbers 12, 13, and 14, respectively. The atomic number of carbon is 6, which means that every carbon atom has 6 protons so that the neutron numbers of these isotopes are 6, 7, and 8 respectively. Isotope vs. nuclide A nuclide is a species of an atom with a specific number of protons and neutrons in the nucleus, for example, carbon-13 with 6 protons and 7 neutrons. 
The nuclide concept (referring to individual nuclear species) emphasizes nuclear properties over chemical properties, whereas the isotope concept (grouping all atoms of each element) emphasizes chemical over Document 3::: Secondary electrons are electrons generated as ionization products. They are called 'secondary' because they are generated by other radiation (the primary radiation). This radiation can be in the form of ions, electrons, or photons with sufficiently high energy, i.e. exceeding the ionization potential. Photoelectrons can be considered an example of secondary electrons where the primary radiation are photons; in some discussions photoelectrons with higher energy (>50 eV) are still considered "primary" while the electrons freed by the photoelectrons are "secondary". Applications Secondary electrons are also the main means of viewing images in the scanning electron microscope (SEM). The range of secondary electrons depends on the energy. Plotting the inelastic mean free path as a function of energy often shows characteristics of the "universal curve" familiar to electron spectroscopists and surface analysts. This distance is on the order of a few nanometers in metals and tens of nanometers in insulators. This small distance allows such fine resolution to be achieved in the SEM. For SiO2, for a primary electron energy of 100 eV, the secondary electron range is up to 20 nm from the point of incidence. See also Delta ray Everhart-Thornley detector Document 4::: Ionization (or ionisation) is the process by which an atom or a molecule acquires a negative or positive charge by gaining or losing electrons, often in conjunction with other chemical changes. The resulting electrically charged atom or molecule is called an ion. Ionization can result from the loss of an electron after collisions with subatomic particles, collisions with other atoms, molecules and ions, or through the interaction with electromagnetic radiation. 
Heterolytic bond cleavage and heterolytic substitution reactions can result in the formation of ion pairs. Ionization can occur through radioactive decay by the internal conversion process, in which an excited nucleus transfers its energy to one of the inner-shell electrons causing it to be ejected. Uses Everyday examples of gas ionization are such as within a fluorescent lamp or other electrical discharge lamps. It is also used in radiation detectors such as the Geiger-Müller counter or the ionization chamber. The ionization process is widely used in a variety of equipment in fundamental science (e.g., mass spectrometry) and in industry (e.g., radiation therapy). It is also widely used for air purification, though studies have shown harmful effects of this application. Production of ions Negatively charged ions are produced when a free electron collides with an atom and is subsequently trapped inside the electric potential barrier, releasing any excess energy. The process is known as electron capture ionization. Positively charged ions are produced by transferring an amount of energy to a bound electron in a collision with charged particles (e.g. ions, electrons or positrons) or with photons. The threshold amount of the required energy is known as ionization potential. The study of such collisions is of fundamental importance with regard to the few-body problem, which is one of the major unsolved problems in physics. Kinematically complete experiments, i.e. experiments in which the complete momentum vect The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What's the name for an atom that has gained or lost an electron? A. isotope B. neutrino C. an ion D. an photon Answer:
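The sign convention in the atom passage above (more protons than electrons gives a positive ion or cation; more electrons than protons gives a negative ion or anion) can be sketched as follows. The helper name is mine, not from the source:

```python
def classify(protons, electrons):
    # Net charge in units of the elementary charge, using the convention
    # from the passage: protons carry +1, electrons carry -1.
    charge = protons - electrons
    if charge > 0:
        return "cation"   # positive ion: lost one or more electrons
    if charge < 0:
        return "anion"    # negative ion: gained one or more electrons
    return "neutral atom"

# Sodium (11 protons) that has lost one electron is a positive ion:
kind = classify(11, 10)  # "cation"
```

The same function classifies chlorine that has gained an electron, classify(17, 18), as an anion, and an ordinary carbon atom, classify(6, 6), as neutral.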
sciq-9404
multiple_choice
What are known as the building blocks of proteins?
[ "organism acids", "protein acids", "amino acids", "bases" ]
C
Relavent Documents: Document 0::: This is a list of topics in molecular biology. See also index of biochemistry articles. Document 1::: The Institute of Biophysics, Chinese Academy of Sciences, based in Beijing, China, focuses on biophysically oriented basic research in the life sciences. It was established by Bei Shizhang in 1958, from the former Beijing Experimental Biology Institute founded in 1957. Xu Tao is the current Director. The main research focus of the Institute is on the fields of protein science and brain & cognitive sciences. The Institute has two National Key Laboratories—"The National Laboratory of Biomacromolecules" and "The State Laboratory of Brain and Cognitive Sciences". The establishment of the National Laboratory of Protein Science was given approval by China's Ministry of Science and Technology (MOST) in December 2006. Research in the field of protein science emphasizes the following areas: 3D-structure and function of proteins, bio-membranes and membrane proteins, protein translation and folding, protein interaction networks, the molecular basis of infection and immunity, the molecular basis of sensation and cognition, protein and peptide drugs, and new techniques and methods in protein science research. Research areas in the brain and cognitive sciences include neural processes and mechanisms in complex cognition, expression of visual perception and attention, neural mechanisms of perceptional information processing, and dysfunction in brain cognition. The Institute has received National Natural Science Foundation, '973', '863', 'Knowledge Innovation Program', and other major research grants, supporting outstanding research in a range of areas. The achievements of the Institute in terms of awards, publications, patents, and applied research maintain the Institute at the highest level nationally, and it has worldwide recognition for research in the life sciences. 
Among other connections, since 2008 it has hosted an intensive course in macromolecular crystallography as a resource closely modeled on the course at Cold Spring Harbor Laboratory on Long Island, USA, and invol Document 2::: Biochemistry is the study of the chemical processes in living organisms. It deals with the structure and function of cellular components such as proteins, carbohydrates, lipids, nucleic acids and other biomolecules. Articles related to biochemistry include: 0–9 2-amino-5-phosphonovalerate - 3' end - 5' end Document 3::: MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States. Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students, the remainder require a subscription. the service is suspended with the message to: "Please check back with us in 2017". External links MicrobeLibrary Microbiology Document 4::: The Department of Biochemistry of Oxford University is located in the Science Area in Oxford, England. It is one of the largest biochemistry departments in Europe. The Biochemistry Department is part of the University of Oxford's Medical Sciences Division, the largest of the university's four academic divisions, which has been ranked first in the world for biomedicine. History The Department of Biochemistry at Oxford University began as the physiological chemistry section of the Physiology Department, and acquired its own separate department and building in the 1920s. In 1920, Benjamin Moore was elected to the position of the Whitley Professor of Biochemistry, the newly established Chair of Biochemistry at Oxford University. 
He was followed by Rudolph Peters in 1923, and an endowment of £75,000 was soon granted by the Rockefeller Foundation for the construction of a new departmental building, purchase of its equipment, and its maintenance. The Biochemistry Department building opened in 1927. In 1954, Hans Krebs was appointed the Whitley Chair of Biochemistry, and his appointment brought greater prominence to the department. He brought with him the Medical Research Council unit established to conduct research on cell metabolism. In 1955, a second professorship in the department, the Iveagh Chair of Microbiology, was established with funding from Guinness and the sub-department of Microbiology created, with Donald Woods its first holder. The eight-storey Hans Krebs Building was constructed in 1964 with funds from the Rockefeller Foundation. Krebs was succeeded by Rodney Porter in 1967. Genetics was brought into the Biochemistry Department when Walter Bodmer was appointed the first Professor of Genetics in 1970. The Laboratory of Molecular Biophysics, first established in the Zoology Department with support from Krebs and also linked to the Physical Chemistry Laboratory of the Chemistry Department, became part of the Biochemistry Department. It moved into the The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are known as the building blocks of proteins? A. organism acids B. protein acids C. amino acids D. bases Answer:
sciq-3625
multiple_choice
What is used to measure current through a resistor?
[ "thermometers", "microscopes", "spectrographs", "ammeters" ]
D
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature increases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Electrical measurements are the methods, devices and calculations used to measure electrical quantities. Measurement of electrical quantities may be done to measure electrical parameters of a system. Using transducers, physical properties such as temperature, pressure, flow, force, and many others can be converted into electrical signals, which can then be conveniently measured and recorded. High-precision laboratory measurements of electrical quantities are used in experiments to determine fundamental physical properties such as the charge of the electron or the speed of light, and in the definition of the units for electrical measurements, with precision in some cases on the order of a few parts per million. Less precise measurements are required every day in industrial practice. Electrical measurements are a branch of the science of metrology. 
Measurable independent and semi-independent electrical quantities comprise: Voltage Electric current Electrical resistance and electrical conductance Electrical reactance and susceptance Magnetic flux Electrical charge by the means of electrometer Partial discharge measurement Magnetic field by the means of Hall sensor Electric field Electrical power by the means of electricity meter S-matrix by the means of network analyzer (electrical) Electrical power spectrum by the means of spectrum analyzer Measurable dependent electrical quantities comprise: Inductance Capacitance Electrical impedance defined as vector sum of electrical resistance and electrical reactance Electrical admittance, the reciprocal of electrical impedance Phase between current and voltage and related power factor Electrical spectral density Electrical phase noise Electrical amplitude noise Transconductance Transimpedance Electrical power gain Voltage gain Current gain Frequency Propagation delay Document 2::: In electronics, the Carey Foster bridge is a bridge circuit used to measure medium resistances, or to measure small differences between two large resistances. It was invented by Carey Foster as a variant on the Wheatstone bridge. He first described it in his 1872 paper "On a Modified Form of Wheatstone's Bridge, and Methods of Measuring Small Resistances" (Telegraph Engineer's Journal, 1872–1873, 1, 196). Use In the adjacent diagram, X and Y are resistances to be compared. P and Q are nearly equal resistances, forming the other half of the bridge. The bridge wire EF has a jockey contact D placed along it and is slid until the galvanometer G measures zero. The thick-bordered areas are thick copper busbars of very low resistance, to limit the influence on the measurement. Place a known resistance in position Y. Place the unknown resistance in position X. Adjust the contact D along the bridge wire EF so as to null the galvanometer. This position (as a percentage of distance from E to F) is . 
Swap X and Y. Adjust D to the new null point. This position is . If the resistance of the wire per percentage is , then the resistance difference is the resistance of the length of bridge wire between and : To measure a low unknown resistance X, replace Y with a copper busbar that can be assumed to be of zero resistance. In practical use, when the bridge is unbalanced, the galvanometer is shunted with a low resistance to avoid burning it out. It is only used at full sensitivity when the anticipated measurement is close to the null point. To measure σ To measure the unit resistance of the bridge wire EF, put a known resistance (e.g., a standard 1 ohm resistance) that is less than that of the wire as X, and a copper busbar of assumed zero resistance as Y. Theory Two resistances to be compared, X and Y, are connected in series with the bridge wire. Thus, considered as a Wheatstone bridge, the two resistances are X plus a length of bridge wire, and Y plus the remai Document 3::: In electrical engineering, current sensing is any one of several techniques used to measure electric current. The measurement of current ranges from picoamps to tens of thousands of amperes. The selection of a current sensing method depends on requirements such as magnitude, accuracy, bandwidth, robustness, cost, isolation or size. The current value may be directly displayed by an instrument, or converted to digital form for use by a monitoring or control system. Current sensing techniques include shunt resistor, current transformers and Rogowski coils, magnetic-field based transducers and others. Current sensor A current sensor is a device that detects electric current in a wire and generates a signal proportional to that current. The generated signal could be analog voltage or current or a digital output. 
The generated signal can be then used to display the measured current in an ammeter, or can be stored for further analysis in a data acquisition system, or can be used for the purpose of control. The sensed current and the output signal can be: Alternating current input, analog output, which duplicates the wave shape of the sensed current. bipolar output, which duplicates the wave shape of the sensed current. unipolar output, which is proportional to the average or RMS value of the sensed current. Direct current input, unipolar, with a unipolar output, which duplicates the wave shape of the sensed current digital output, which switches when the sensed current exceeds a certain threshold Requirements in current measurement Current sensing technologies must fulfill various requirements, for various applications. Generally, the common requirements are: High sensitivity High accuracy and linearity Wide bandwidth DC and AC measurement Low temperature drift Interference rejection IC packaging Low power consumption Low price Techniques The measurement of the electric current can be classified depending upon the underlying fundamental physical principles Document 4::: Ayrton shunt or universal shunt is a high-resistance shunt used in galvanometers to increase their range without changing the damping. The circuit is named after its inventor William E. Ayrton. Multirange ammeters that use this technique are more accurate than those using a make-before-break switch. Also it will eliminate the possibility of having a meter without a shunt which is a serious concern in make-before-break switches. The selector switch changes the amount of resistance in parallel with Rm (meter resistance). The voltage drop across parallel branches is always equal. When all resistances are placed in parallel with Rm maximum sensitivity of ammeter is reached. Ayrton shunt is rarely used for currents above 10 amperes. 
m1 = I1/Im, m2 = I2/Im, m3 = I3/Im The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is used to measure current through a resistor? A. thermometers B. microscopes C. spectrographs D. ammeters Answer:
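The range multipliers above (m = I/Im) can be turned into a small worked example. This is a minimal sketch, not from the source: it assumes an Ayrton shunt of total resistance R connected across a meter of resistance Rm, for which the tapped section carrying the line current satisfies Ra = (R + Rm)/m, a standard result obtained by equating the voltage drops across the two parallel branches at full-scale deflection. The function name and component values are illustrative.

```python
def ayrton_tap_resistance(r_shunt_total, r_meter, multiplier):
    """Resistance between the input terminal and the selector tap of an
    Ayrton shunt, for a desired range multiplier m = I / Im.

    Balancing the two parallel branches at full-scale deflection,
        Im * (Rm + R - Ra) = (I - Im) * Ra,
    and substituting I = m * Im gives Ra = (R + Rm) / m.
    """
    return (r_shunt_total + r_meter) / multiplier

# Example: a 50-ohm meter with a 50-ohm total shunt.
# The smallest usable multiplier is (R + Rm) / R = 2 (tap at the far end).
print(ayrton_tap_resistance(50, 50, 2))   # 50.0: the whole shunt is in circuit
print(ayrton_tap_resistance(50, 50, 10))  # 10.0: tap moved in for a 10x range
```

Note the design consequence the passage mentions: because some shunt resistance always stays in series with the meter, the damping seen by the galvanometer is unchanged as the range is switched.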
sciq-4696
multiple_choice
Most motions in nature follow ________ rather than straight lines?
[ "vertical paths", "sharp paths", "curved paths", "horizontal paths" ]
C
Relevant Documents: Document 0::: Linear motion, also called rectilinear motion, is one-dimensional motion along a straight line, and can therefore be described mathematically using only one spatial dimension. The linear motion can be of two types: uniform linear motion, with constant velocity (zero acceleration); and non-uniform linear motion, with variable velocity (non-zero acceleration). The motion of a particle (a point-like object) along a line can be described by its position , which varies with (time). An example of linear motion is an athlete running a 100-meter dash along a straight track. Linear motion is the most basic of all motion. According to Newton's first law of motion, objects that do not experience any net force will continue to move in a straight line with a constant velocity until they are subjected to a net force. Under everyday circumstances, external forces such as gravity and friction can cause an object to change the direction of its motion, so that its motion cannot be described as linear. One may compare linear motion to general motion. In general motion, a particle's position and velocity are described by vectors, which have a magnitude and direction. In linear motion, the directions of all the vectors describing the system are equal and constant, which means the objects move along the same axis and do not change direction. The analysis of such systems may therefore be simplified by neglecting the direction components of the vectors involved and dealing only with the magnitude. Background Displacement The motion in which all the particles of a body move through the same distance in the same time is called translatory motion. There are two types of translatory motions: rectilinear motion; curvilinear motion. Since linear motion is a motion in a single dimension, the distance traveled by an object in a particular direction is the same as displacement. The SI unit of displacement is the metre. 
If is the initial position of an object and is the final position, then mat Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. 
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: Differential geometry of curves is the branch of geometry that deals with smooth curves in the plane and the Euclidean space by methods of differential and integral calculus. Many specific curves have been thoroughly investigated using the synthetic approach. Differential geometry takes another path: curves are represented in a parametrized form, and their geometric properties and various quantities associated with them, such as the curvature and the arc length, are expressed via derivatives and integrals using vector calculus. One of the most important tools used to analyze a curve is the Frenet frame, a moving frame that provides a coordinate system at each point of the curve that is "best adapted" to the curve near that point. The theory of curves is much simpler and narrower in scope than the theory of surfaces and its higher-dimensional generalizations because a regular curve in a Euclidean space has no intrinsic geometry. Any regular curve may be parametrized by the arc length (the natural parametrization). From the point of view of a theoretical point particle on the curve that does not know anything about the ambient space, all curves would appear the same. Different space curves are only distinguished by how they bend and twist. Quantitatively, this is measured by the differential-geometric invariants called the curvature and the torsion of a curve. The fundamental theorem of curves asserts that the knowledge of these invariants completely determines the curve. Definitions A parametric -curve or a -parametrization is a vector-valued function that is -times continuously differentiable (that is, the component functions of are continuously differentiable), where , , and is a non-empty interval of real numbers. The of the parametric curve is . 
The parametric curve and its image must be distinguished because a given subset of can be the image of many distinct parametric curves. The parameter in can be thought of as representing time, and the traj Document 3::: This is a list of Wikipedia articles about curves used in different fields: mathematics (including geometry, statistics, and applied mathematics), physics, engineering, economics, medicine, biology, psychology, ecology, etc. Mathematics (Geometry) Algebraic curves Rational curves Rational curves are subdivided according to the degree of the polynomial. Degree 1 Line Degree 2 Plane curves of degree 2 are known as conics or conic sections and include Circle Unit circle Ellipse Parabola Hyperbola Unit hyperbola Degree 3 Cubic plane curves include Cubic parabola Folium of Descartes Cissoid of Diocles Conchoid of de Sluze Right strophoid Semicubical parabola Serpentine curve Trident curve Trisectrix of Maclaurin Tschirnhausen cubic Witch of Agnesi Degree 4 Quartic plane curves include Ampersand curve Bean curve Bicorn Bow curve Bullet-nose curve Cartesian oval Cruciform curve Deltoid curve Devil's curve Hippopede Kampyle of Eudoxus Kappa curve Lemniscate Lemniscate of Booth Lemniscate of Gerono Lemniscate of Bernoulli Limaçon Cardioid Limaçon trisectrix Ovals of Cassini Squircle Trifolium Curve Degree 5 Degree 6 Astroid Atriphtaloid Nephroid Quadrifolium Curve families of variable degree Epicycloid Epispiral Epitrochoid Hypocycloid Lissajous curve Poinsot's spirals Rational normal curve Rose curve Curves with genus 1 Bicuspid curve Cassinoide Cubic curve Elliptic curve Watt's curve Curves with genus > 1 Bolza surface (genus 2) Klein quartic (genus 3) Bring's curve (genus 4) Macbeath surface (genus 7) Butterfly curve (algebraic) (genus 7) Curve families with variable genus Polynomial lemniscate Fermat curve Sinusoidal spiral Superellipse Hurwitz surface Elkies trinomial curves Hyperelliptic curve Classical modular curve Cassini oval Transcendental curves 
Bowditch curve Brachistochrone Butterfly curve (transcendental) Catenary Clélies Cochleoid Cycloid Horopter Isochrone Isochrone of Huygens (Tautochrone) Isochrone of Leibniz Isochrone of Varignon Lamé Document 4::: This is a list of topics that are included in high school physics curricula or textbooks. Mathematical Background SI Units Scalar (physics) Euclidean vector Motion graphs and derivatives Pythagorean theorem Trigonometry Motion and forces Motion Force Linear motion Linear motion Displacement Speed Velocity Acceleration Center of mass Mass Momentum Newton's laws of motion Work (physics) Free body diagram Rotational motion Angular momentum (Introduction) Angular velocity Centrifugal force Centripetal force Circular motion Tangential velocity Torque Conservation of energy and momentum Energy Conservation of energy Elastic collision Inelastic collision Inertia Moment of inertia Momentum Kinetic energy Potential energy Rotational energy Electricity and magnetism Ampère's circuital law Capacitor Coulomb's law Diode Direct current Electric charge Electric current Alternating current Electric field Electric potential energy Electron Faraday's law of induction Ion Inductor Joule heating Lenz's law Magnetic field Ohm's law Resistor Transistor Transformer Voltage Heat Entropy First law of thermodynamics Heat Heat transfer Second law of thermodynamics Temperature Thermal energy Thermodynamic cycle Volume (thermodynamics) Work (thermodynamics) Waves Wave Longitudinal wave Transverse waves Transverse wave Standing Waves Wavelength Frequency Light Light ray Speed of light Sound Speed of sound Radio waves Harmonic oscillator Hooke's law Reflection Refraction Snell's law Refractive index Total internal reflection Diffraction Interference (wave propagation) Polarization (waves) Vibrating string Doppler effect Gravity Gravitational potential Newton's law of universal gravitation Newtonian constant of gravitation See also Outline of physics Physics education The following 
are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Most motions in nature follow ________ rather than straight lines? A. vertical paths B. sharp paths C. curved paths D. horizontal paths Answer:
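Document 0's point above—that forces such as gravity bend otherwise straight-line motion into a curve—can be illustrated with standard constant-acceleration kinematics. A minimal sketch (the function name and the numbers are illustrative, not from the source): a projectile keeps a constant horizontal velocity while gravity accelerates it downward, so its path is a parabola rather than a straight line.

```python
import math

def projectile_position(v0, angle_deg, t, g=9.81):
    """Position (x, y) at time t of a projectile launched from the origin
    with speed v0 at the given angle; gravity curves the path into a parabola."""
    theta = math.radians(angle_deg)
    x = v0 * math.cos(theta) * t                    # uniform linear motion horizontally
    y = v0 * math.sin(theta) * t - 0.5 * g * t**2   # uniformly accelerated motion vertically
    return x, y

# Launched horizontally at 10 m/s, with g = 10 m/s^2 for round numbers:
# equal steps in x but growing drops in y -- a curved (parabolic) path.
for t in (0.0, 1.0, 2.0):
    print(projectile_position(10, 0, t, g=10.0))
# (0.0, 0.0), (10.0, -5.0), (20.0, -20.0)
```

The unequal vertical drops over equal horizontal steps are exactly what distinguishes this curved path from the straight-line motion described in the linear-motion passage.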
sciq-6557
multiple_choice
Which organ in the body controls the amount of water loss in urine in response to maintaining homeostasis?
[ "kidneys", "liver", "spleen", "lungs" ]
A
Relevant Documents: Document 0::: The rock dove, Columba livia, has a number of special adaptations for regulating water uptake and loss. Challenges C. livia pigeons drink directly from a water source or indirectly from the food they ingest. They drink water through a process called the double-suction mechanism. The daily diet of the pigeon places many physiological challenges that it must overcome through osmoregulation. Protein intake, for example, causes an excess of toxins of amine groups when it is broken down for energy. To regulate this excess and secrete these unwanted toxins, C. livia must remove the amine groups as uric acid. Nitrogen excretion through uric acid can be considered an advantage because it does not require a lot of water, but producing it takes more energy because of its complex molecular composition. Pigeons adjust their drinking rates and food intake in parallel, and when adequate water is unavailable for excretion, food intake is limited to maintain water balance. Although this species inhabits arid environments, research attributes this to its strong flying capabilities, which let it reach the available water sources, rather than to exceptional potential for water conservation. C. livia kidneys, like mammalian kidneys, are capable of producing urine hyperosmotic to the plasma using the processes of filtration, reabsorption, and secretion. The medullary cones function as countercurrent units that achieve the production of hyperosmotic urine. Hyperosmotic urine can be understood in light of the law of diffusion and osmolarity. Organ of osmoregulation Unlike a number of other bird species which have the salt gland as the primary osmoregulatory organ, C. livia does not use its salt gland. It uses the function of the kidneys to maintain homeostatic balance of ions such as sodium and potassium while preserving water quantity in the body. Filtration of the blood, reabsorption of ions and water, and secretion of uric acid are all components of the kidney's process. 
Columba livia has two kidneys th Document 1::: The excretory system is a passive biological system that removes excess, unnecessary materials from the body fluids of an organism, so as to help maintain internal chemical homeostasis and prevent damage to the body. The dual function of excretory systems is the elimination of the waste products of metabolism and to drain the body of used up and broken down components in a liquid and gaseous state. In humans and other amniotes (mammals, birds and reptiles) most of these substances leave the body as urine and, to some degree, exhalation; mammals also expel them through sweating. Only the organs specifically used for the excretion are considered a part of the excretory system. In the narrow sense, the term refers to the urinary system. However, as excretion involves several functions that are only superficially related, it is not usually used in more formal classifications of anatomy or function. As most healthy functioning organs produce metabolic and other wastes, the entire organism depends on the function of the system. The breakdown of one or more of the systems is a serious health condition, for example kidney failure. Systems Urinary system The kidneys are large, bean-shaped organs which are present on each side of the vertebral column in the abdominal cavity. Humans have two kidneys and each kidney is supplied with blood from the renal artery. The kidneys remove from the blood the nitrogenous wastes such as urea, as well as salts and excess water, and excrete them in the form of urine. This is done with the help of millions of nephrons present in the kidney. The filtered blood is carried away from the kidneys by the renal vein (or kidney vein). The urine from the kidney is collected by the ureter (or excretory tubes), one from each kidney, and is passed to the urinary bladder. The urinary bladder collects and stores the urine until urination. 
The urine collected in the bladder is passed into the external environment from the body through an opening called Document 2::: Renal pathology is a subspecialty of anatomic pathology that deals with the diagnosis and characterization of medical diseases (non-tumor) of the kidneys. In the academic setting, renal pathologists work closely with nephrologists and transplant surgeons, who typically obtain diagnostic specimens via percutaneous renal biopsy. The renal pathologist must synthesize findings from light microscopy, electron microscopy, and immunofluorescence to obtain a definitive diagnosis. Medical renal diseases may affect the glomerulus, the tubules and interstitium, the vessels, or a combination of these compartments. External links http://www.renalpathsoc.org/ Renal Pathology Tutorial written by J. Charles Jennette Pathologist Guide Anatomical pathology Document 3::: Drinking is the act of ingesting water or other liquids into the body through the mouth, proboscis, or elsewhere. Humans drink by swallowing, completed by peristalsis in the esophagus. The physiological processes of drinking vary widely among other animals. Most animals drink water to maintain bodily hydration, although many can survive on the water gained from their food. Water is required for many physiological processes. Both inadequate and (less commonly) excessive water intake are associated with health problems. Methods of drinking In humans When a liquid enters a human mouth, the swallowing process is completed by peristalsis which delivers the liquid through the esophagus to the stomach; much of the activity is abetted by gravity. The liquid may be poured from the hands or drinkware may be used as vessels. Drinking can also be performed by acts of inhalation, typically when imbibing hot liquids or drinking from a spoon. 
Infants employ a method of suction wherein the lips are pressed tight around a source, as in breastfeeding: a combination of breath and tongue movement creates a vacuum which draws in liquid. In other land mammals By necessity, terrestrial animals in captivity become accustomed to drinking water, but most free-roaming animals stay hydrated through the fluids and moisture in fresh food, and learn to actively seek foods with high fluid content. When conditions impel them to drink from bodies of water, the methods and motions differ greatly among species. Cats, canines, and ruminants all lower the neck and lap in water with their powerful tongues. Cats and canines lap up water with the tongue in a spoon-like shape. Canines lap water by scooping it into their mouth with a tongue which has taken the shape of a ladle. However, with cats, only the tip of their tongue (which is smooth) touches the water, and then the cat quickly pulls its tongue back into its mouth which soon closes; this results in a column of liquid being pulled into the ca Document 4::: Osmoregulation is the active regulation of the osmotic pressure of an organism's body fluids, detected by osmoreceptors, to maintain the homeostasis of the organism's water content; that is, it maintains the fluid balance and the concentration of electrolytes (salts in solution which in this case is represented by body fluid) to keep the body fluids from becoming too diluted or concentrated. Osmotic pressure is a measure of the tendency of water to move into one solution from another by osmosis. The higher the osmotic pressure of a solution, the more water tends to move into it. Pressure must be exerted on the hypertonic side of a selectively permeable membrane to prevent diffusion of water by osmosis from the side containing pure water. Although there may be hourly and daily variations in osmotic balance, an animal is generally in an osmotic steady state over the long term. 
Organisms in aquatic and terrestrial environments must maintain the right concentration of solutes and amount of water in their body fluids; this involves excretion (getting rid of metabolic nitrogen wastes and other substances such as hormones that would be toxic if allowed to accumulate in the blood) through organs such as the skin and the kidneys. Regulators and conformers Two major types of osmoregulation are osmoconformers and osmoregulators. Osmoconformers match their body osmolarity to their environment actively or passively. Most marine invertebrates are osmoconformers, although their ionic composition may be different from that of seawater. In a strictly osmoregulating animal, the amounts of internal salt and water are held relatively constant in the face of environmental changes. It requires that intake and outflow of water and salts be equal over an extended period of time. Organisms that maintain an internal osmolarity different from the medium in which they are immersed have been termed osmoregulators. They tightly regulate their body osmolarity, maintaining constant internal c The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which organ in the body controls the amount of water loss in urine in response to maintaining homeostasis? A. kidneys B. liver C. spleen D. lungs Answer:
sciq-4585
multiple_choice
Snails, scallops, and squids are what type of invertebrate?
[ "mollusk", "arthropod", "crustacean", "algae" ]
A
Relevant Documents: Document 0::: Shellfish is a colloquial and fisheries term for exoskeleton-bearing aquatic invertebrates used as food, including various species of molluscs, crustaceans, and echinoderms. Although most kinds of shellfish are harvested from saltwater environments, some are found in freshwater. In addition, a few species of land crabs are eaten, for example Cardisoma guanhumi in the Caribbean. Shellfish are among the most common food allergens. Despite the name, shellfish are not fish. Most shellfish are low on the food chain and eat a diet composed primarily of phytoplankton and zooplankton. Many varieties of shellfish, and crustaceans in particular, are actually closely related to insects and arachnids; crustaceans make up one of the main subphyla of the phylum Arthropoda. Molluscs include cephalopods (squids, octopuses, cuttlefish) and bivalves (clams, oysters), as well as gastropods (aquatic species such as whelks and winkles; land species such as snails and slugs). Molluscs used as a food source by humans include many species of clams, mussels, oysters, winkles, and scallops. Some crustaceans that are commonly eaten are shrimp, lobsters, crayfish, crabs and barnacles. Echinoderms are not as frequently harvested for food as molluscs and crustaceans; however, sea urchin gonads are quite popular in many parts of the world, where the live delicacy is harder to transport. Though some shellfish harvesting has been unsustainable, and shrimp farming has been destructive in some parts of the world, shellfish farming can be important to environmental restoration, by developing reefs, filtering water and eating biomass. Terminology The term "shellfish" is used both broadly and specifically. In common parlance, as in "having shellfish for dinner", it can refer to anything from clams and oysters to lobster and shrimp. 
For regulatory purposes it is often narrowly defined as filter-feeding molluscs such as clams, mussels, and oyster to the exclusion of crustaceans and all else. Althoug Document 1::: Invertebrate zoology is the subdiscipline of zoology that consists of the study of invertebrates, animals without a backbone (a structure which is found only in fish, amphibians, reptiles, birds and mammals). Invertebrates are a vast and very diverse group of animals that includes sponges, echinoderms, tunicates, numerous different phyla of worms, molluscs, arthropods and many additional phyla. Single-celled organisms or protists are usually not included within the same group as invertebrates. Subdivisions Invertebrates represent 97% of all named animal species, and because of that fact, this subdivision of zoology has many further subdivisions, including but not limited to: Arthropodology - the study of arthropods, which includes Arachnology - the study of spiders and other arachnids Entomology - the study of insects Carcinology - the study of crustaceans Myriapodology - the study of centipedes, millipedes, and other myriapods Cnidariology - the study of Cnidaria Helminthology - the study of parasitic worms. Malacology - the study of mollusks, which includes Conchology - the study of Mollusk shells. Limacology - the study of slugs. Teuthology - the study of cephalopods. Invertebrate paleontology - the study of fossil invertebrates These divisions are sometimes further divided into more specific specialties. For example, within arachnology, acarology is the study of mites and ticks; within entomology, lepidoptery is the study of butterflies and moths, myrmecology is the study of ants and so on. Marine invertebrates are all those invertebrates that exist in marine habitats. 
History Early Modern Era In the early modern period starting in the late 16th century, invertebrate zoology saw growth in the number of publications made and improvement in the experimental practices associated with the field. (Insects are one of the most diverse groups of organisms on Earth. They play important roles in ecosystems, including pollination, natural enemies, saprophytes, and Document 2::: Pseudoplanktonic organisms are those that attach themselves to planktonic organisms or other floating objects, such as drifting wood, buoyant shells of organisms such as Spirula, or man-made flotsam. Examples include goose barnacles and the bryozoan Jellyella. By themselves these animals cannot float, which contrasts them with true planktonic organisms, such as Velella and the Portuguese Man o' War, which are buoyant. Pseudoplankton are often found in the guts of filtering zooplankters. Document 3::: A cnidariologist is a zoologist specializing in Cnidaria, a group of freshwater and marine aquatic animals that include the sea anemones, corals, and jellyfish. Examples Edward Thomas Browne (1866-1937) Henry Bryant Bigelow (1879-1967) Randolph Kirkpatrick (1863–1950) Kamakichi Kishinouye (1867-1929) Paul Lassenius Kramp (1887-1975) Alfred G. Mayer (1868-1922) See also Document 4::: The gastropods (), commonly known as slugs and snails, belong to a large taxonomic class of invertebrates within the phylum Mollusca called Gastropoda (). This class comprises snails and slugs from saltwater, freshwater, and from the land. There are many thousands of species of sea snails and slugs, as well as freshwater snails, freshwater limpets, land snails and slugs. The class Gastropoda is a diverse and highly successful class of mollusks within the phylum Mollusca. It contains a vast total of named species, second only to the insects in overall number. The fossil history of this class goes back to the Late Cambrian. 
, 721 families of gastropods are known, of which 245 are extinct and appear only in the fossil record, while 476 are currently extant with or without a fossil record. Gastropoda (previously known as univalves and sometimes spelled "Gasteropoda") are a major part of the phylum Mollusca, and are the most highly diversified class in the phylum, with 65,000 to 80,000 living snail and slug species. The anatomy, behavior, feeding, and reproductive adaptations of gastropods vary significantly from one clade or group to another, so stating many generalities for all gastropods is difficult. The class Gastropoda has an extraordinary diversification of habitats. Representatives live in gardens, woodland, deserts, and on mountains; in small ditches, great rivers, and lakes; in estuaries, mudflats, the rocky intertidal, the sandy subtidal, the abyssal depths of the oceans, including the hydrothermal vents, and numerous other ecological niches, including parasitic ones. Although the name "snail" can be, and often is, applied to all the members of this class, commonly this word means only those species with an external shell big enough that the soft parts can withdraw completely into it. Those gastropods without a shell, and those with only a very reduced or internal shell, are usually known as slugs; those with a shell into which they can partly but not com The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Snails, scallops, and squids are what type of invertebrate? A. mollusk B. arthropod C. crustacean D. algae Answer:
scienceQA-5941
multiple_choice
What do these two changes have in common? water evaporating from a lake; building a tower out of magnetic blocks
[ "Both are chemical changes.", "Both are caused by heating.", "Both are caused by cooling.", "Both are only physical changes." ]
D
Step 1: Think about each change. Water evaporating from a lake is a change of state. So, it is a physical change. The liquid changes into a gas, but a different type of matter is not formed. Building a tower out of magnetic blocks is a physical change. The blocks stick to each other to form a tower. But the blocks are still made of the same type of matter as before. Step 2: Look at each answer choice. Both are only physical changes. Both changes are physical changes. No new matter is created. Both are chemical changes. Both changes are physical changes. They are not chemical changes. Both are caused by heating. Water evaporating is caused by heating. But building a tower out of magnetic blocks is not. Both are caused by cooling. Neither change is caused by cooling.
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds. Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate. A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density. An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge. Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change. Examples Heating and cooling Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. 
Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation. Magnetism Ferro-magnetic materials can become magnetic. The process is reve Document 2::: Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work. Description Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments. The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. 
These are often considered interdisciplinary fields, but are also of interest to engineering mathematics. Specialized branches include engineering optimization and engineering statistics. Engineering mathematics in tertiary educ Document 3::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. 
Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include Document 4::: Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations. Academic courses Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithm design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence.
Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism. Example universities with CSE majors and departments APJ Abdul Kalam Technological University American International University-B The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do these two changes have in common? water evaporating from a lake; building a tower out of magnetic blocks A. Both are chemical changes. B. Both are caused by heating. C. Both are caused by cooling. D. Both are only physical changes. Answer:
ai2_arc-305
multiple_choice
A student used the dimmest setting on a light microscope to observe a euglena and an amoeba. The student shined a narrow beam of light at the top of the cover slip. She observed that the euglena swam up toward the light but the amoeba did not. She knew the amoeba was alive because it slowly changed shape while she watched. What inference should the student draw from her observation?
[ "An amoeba can only move side to side.", "An amoeba is unable to respond to light.", "An amoeba moves too slowly to observe.", "An amoeba only moves when it is hungry." ]
B
Relevant Documents: Document 0::: Several organisms are capable of rolling locomotion. However, true wheels and propellers—despite their utility in human vehicles—do not play a significant role in the movement of living things (with the exception of certain flagella, which work like corkscrews). Biologists have offered several explanations for the apparent absence of biological wheels, and wheeled creatures have appeared often in speculative fiction. Given the ubiquity of the wheel in human technology, and the existence of biological analogues of many other technologies (such as wings and lenses), the lack of wheels in the natural world would seem to demand explanation—and the phenomenon is broadly explained by two main factors. First, there are several developmental and evolutionary obstacles to the advent of a wheel by natural selection, addressing the question "Why can't life evolve wheels?" Secondly, wheels are often at a competitive disadvantage when compared with other means of propulsion (such as walking, running, or slithering) in natural environments, addressing the question "If wheels evolve, why might they be rare nonetheless?" This environment-specific disadvantage also explains why humans abandoned the wheel in certain regions at least once in history. Known instances of rotation in biology There exist two distinct modes of locomotion using rotation: first, simple rolling; and second, the use of wheels or propellers, which spin on an axle or shaft, relative to a fixed body. While many creatures employ the former mode, the latter is restricted to microscopic, single-celled organisms. Rolling Some organisms use rolling as a means of locomotion. These examples do not constitute the use of a wheel, as the organism rotates as a whole, rather than employing separate parts which rotate independently.
Several species of elongate organisms form their bodies into a loop to roll, including certain caterpillars (which do so to escape danger), tiger beetle larvae, myriapods, mantis shrimp, Arm Document 1::: N. europaea cells are short rods with pointed ends, (0.8-1.1 x 1.0-1.7) µm in size; motility has not been observed. N. eutropha presents rod- to pear-shaped cells with one or both ends pointed, with a size of (1.0-1.3 x 1.6-2.3) µm. They show motility. N. halophila cells have a coccoid shap Document 2::: In biology, cell theory is a scientific theory first formulated in the mid-nineteenth century, that organisms are made up of cells, that they are the basic structural/organizational unit of all organisms, and that all cells come from pre-existing cells. Cells are the basic unit of structure in all organisms and also the basic unit of reproduction. The theory was once universally accepted, but now some biologists consider non-cellular entities such as viruses to be living organisms, and thus disagree with the first tenet. As of 2021: "expert opinion remains divided roughly a third each between yes, no and don't know". As there is no universally accepted definition of life, discussion still continues. History With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as it was believed no one else had seen these. To further support his theory, Matthias Schleiden and Theodor Schwann both also studied cells of both animals and plants. What they discovered were significant differences between the two types of cells. This put forth the idea that cells were not only fundamental to plants, but animals as well.
Microscopes The discovery of the cell was made possible through the invention of the microscope. In the first century BC, Romans were able to make glass. They discovered that objects appeared to be larger under the glass. The expanded use of lenses in eyeglasses in the 13th century probably led to wider spread use of simple microscopes (magnifying glasses) with limited magnification. Compound microscopes, which combine an objective lens with an eyepiece to view a real image achieving much higher magnification, first appeared in Europe around 1620. In 1665, Robert Hooke used a microscope Document 3::: Knowledge of fish age characteristics is necessary for stock assessments, and to develop management or conservation plans. Size is generally associated with age; however, there are variations in size at any particular age for most fish species making it difficult to estimate one from the other with precision. Therefore, researchers interested in determining a fish age look for structures which increase incrementally with age. The most commonly used techniques involve counting natural growth rings on the scales, otoliths, vertebrae, fin spines, eye lenses, teeth, or bones of the jaw, pectoral girdle, and opercular series. Even reliable aging techniques may vary among species; often, several different bony structures are compared among a population in order to determine the most accurate method. History Aristotle (ca. 340 B.C.) may have been the first scientist to speculate on the use of hard parts of fishes to determine age, stating in Historica Animalium that “the age of a scaly fish may be told by the size and hardness of its scales.” However, it was not until the development of the microscope that more detailed studies were performed on the structure of scales. Antonie van Leeuwenhoek developed improved lenses which he went use in his creation of microscopes. 
He had a wide range of interests including the structure of fish scales from the European eel (Anguilla anguilla) and the burbot (Lota lota), species which were previously thought not to have scales. He observed that the scales contained “circular lines” and that each scale had the same number of these lines, and correctly inferred that the number of lines correlated to the age of the fish. He also correctly associated the darker areas of scale growth to the season of slowed growth, a characteristic he had previously observed in tree trunks. Leeuwenhoek's work went widely undiscovered by fisheries researchers, and the discovery of fish aging structures is widely credited to Hans Hederström (e.g., Ricker 19 Document 4::: Ministeria vibrans is a bacterivorous amoeba with filopodia that was originally described to be suspended by a flagellum-like stalk attached to the substrate. Molecular and experimental work later on demonstrated the stalk is indeed a flagellar apparatus. The amoeboid protist Ministeria vibrans occupies a key position to understand animal origins. It is a member of the Filasterea, that is the sister-group to Choanoflagellatea and Metazoa. Two Ministeria amoebae species have been reported so far, both of them from coastal marine water samples: M. vibrans and M. marisola. However, there is currently only one culture available, that of Ministeria vibrans. The life cycle of Ministeria remains unknown. Microvilli in Ministeria suggest their presence in the common ancestor of Filasterea and Choanoflagellata. The kinetid structure of Ministeria is similar to that of the choanocytes of the most deep-branching sponges, differing essentially from the kinetid of choanoflagellates. Thus, kinetid and microvilli of Ministeria illustrate features of the common ancestor of three holozoan groups: Filasterea, Metazoa and Choanoflagellata. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. 
A student used the dimmest setting on a light microscope to observe a euglena and an amoeba. The student shined a narrow beam of light at the top of the cover slip. She observed that the euglena swam up toward the light but the amoeba did not. She knew the amoeba was alive because it slowly changed shape while she watched. What inference should the student draw from her observation? A. An amoeba can only move side to side. B. An amoeba is unable to respond to light. C. An amoeba moves too slowly to observe. D. An amoeba only moves when it is hungry. Answer:
sciq-98
multiple_choice
Millions of years ago, plants used energy from the sun to form what?
[ "carbon compounds", "evolution", "fossil fuels", "greenhouse gases" ]
A
Relevant Documents: Document 0::: The Bionic Leaf is a biomimetic system that gathers solar energy via photovoltaic cells that can be stored or used in a number of different functions. Bionic leaves can be composed of both synthetic (metals, ceramics, polymers, etc.) and organic materials (bacteria), or solely made of synthetic materials. The Bionic Leaf has the potential to be implemented in communities, such as urbanized areas to provide clean air as well as providing needed clean energy. History In 2009 at MIT, Daniel Nocera's lab first developed the "artificial leaf", a device made from silicon and an anode electrocatalyst for the oxidation of water, capable of splitting water into hydrogen and oxygen gases. In 2012, Nocera came to Harvard and The Silver Lab of Harvard Medical School joined Nocera's team. Together the teams expanded the existing technology to create the Bionic Leaf. It merged the concept of the artificial leaf with genetically engineered bacteria that feed on the hydrogen and convert CO2 in the air into alcohol fuels or chemicals. The first version of the team's Bionic Leaf was created in 2015 but the catalyst used was harmful to the bacteria. In 2016, a new catalyst was designed to solve this issue, named the "Bionic Leaf 2.0". Other versions of artificial leaves have been developed by the California Institute of Technology and the Joint Center for Artificial Photosynthesis, the University of Waterloo, and the University of Cambridge. Mechanics Photosynthesis In natural photosynthesis, photosynthetic organisms produce energy-rich organic molecules from water and carbon dioxide by using solar radiation. Therefore, the process of photosynthesis removes carbon dioxide, a greenhouse gas, from the air. Artificial photosynthesis, as performed by the Bionic Leaf, is approximately 10 times more efficient than natural photosynthesis.
Using a catalyst, the Bionic Leaf can remove excess carbon dioxide in the air and convert it to useful alcohol fuels, like isopropanol and isobutan Document 1::: The molecules that an organism uses as its carbon source for generating biomass are referred to as "carbon sources" in biology. Carbon sources can be organic or inorganic. Heterotrophs must use organic molecules as both a source of carbon and energy, in contrast to autotrophs, which can use inorganic materials as both a source of carbon and an abiotic source of energy, such as light (photoautotrophs) or inorganic chemical energy (chemolithotrophs). The carbon cycle, which begins with an inorganic carbon source, such as carbon dioxide, and progresses through the carbon fixation process, includes the biological use of carbon as one of its components.[1] Types of organism by carbon source Heterotrophs Autotrophs Document 2::: Biotic material or biologically derived material is any material that originates from living organisms. Most such materials contain carbon and are capable of decay. The earliest life on Earth arose at least 3.5 billion years ago. Earlier physical evidence of life includes graphite, a biogenic substance, in 3.7 billion-year-old metasedimentary rocks discovered in southwestern Greenland, as well as "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia. Earth's biodiversity has expanded continually except when interrupted by mass extinctions. Although scholars estimate that over 99 percent of all species of life (over five billion) that ever lived on Earth are extinct, there are still an estimated 10–14 million extant species, of which about 1.2 million have been documented and over 86% have not yet been described. Examples of biotic materials are wood, straw, humus, manure, bark, crude oil, cotton, spider silk, chitin, fibrin, and bone.
The use of biotic materials, and processed biotic materials (bio-based material) as alternative natural materials, over synthetics is popular with those who are environmentally conscious because such materials are usually biodegradable, renewable, and the processing is commonly understood and has minimal environmental impact. However, not all biotic materials are used in an environmentally friendly way, such as those that require high levels of processing, are harvested unsustainably, or are used to produce carbon emissions. When the source of the recently living material has little importance to the product produced, such as in the production of biofuels, biotic material is simply called biomass. Many fuel sources may have biological sources, and may be divided roughly into fossil fuels, and biofuel. In soil science, biotic material is often referred to as organic matter. Biotic materials in soil include glomalin, Dopplerite and humic acid. Some biotic material may not be considered to be organic matte Document 3::: C3 carbon fixation is the most common of three metabolic pathways for carbon fixation in photosynthesis, the other two being C4 and CAM. This process converts carbon dioxide and ribulose bisphosphate (RuBP, a 5-carbon sugar) into two molecules of 3-phosphoglycerate through the following reaction: CO2 + H2O + RuBP → (2) 3-phosphoglycerate This reaction was first discovered by Melvin Calvin, Andrew Benson and James Bassham in 1950. C3 carbon fixation occurs in all plants as the first step of the Calvin–Benson cycle. (In C4 and CAM plants, carbon dioxide is drawn out of malate and into this reaction rather than directly from the air.) Plants that survive solely on C3 fixation (C3 plants) tend to thrive in areas where sunlight intensity is moderate, temperatures are moderate, carbon dioxide concentrations are around 200 ppm or higher, and groundwater is plentiful.
The C3 plants, originating during Mesozoic and Paleozoic eras, predate the C4 plants and still represent approximately 95% of Earth's plant biomass, including important food crops such as rice, wheat, soybeans and barley. C3 plants cannot grow in very hot areas at today's atmospheric CO2 level (significantly depleted during hundreds of millions of years from above 5000 ppm) because RuBisCO incorporates more oxygen into RuBP as temperatures increase. This leads to photorespiration (also known as the oxidative photosynthetic carbon cycle, or C2 photosynthesis), which leads to a net loss of carbon and nitrogen from the plant and can therefore limit growth. C3 plants lose up to 97% of the water taken up through their roots by transpiration. In dry areas, C3 plants shut their stomata to reduce water loss, but this stops CO2 from entering the leaves and therefore reduces the concentration of CO2 in the leaves. This lowers the CO2:O2 ratio and therefore also increases photorespiration. C4 and CAM plants have adaptations that allow them to survive in hot and dry areas, and they can therefore out-compete Document 4::: Plants are the eukaryotes that form the kingdom Plantae; they are predominantly photosynthetic. This means that they obtain their energy from sunlight, using chloroplasts derived from endosymbiosis with cyanobacteria to produce sugars from carbon dioxide and water, using the green pigment chlorophyll. Exceptions are parasitic plants that have lost the genes for chlorophyll and photosynthesis, and obtain their energy from other plants or fungi. Historically, as in Aristotle's biology, the plant kingdom encompassed all living things that were not animals, and included algae and fungi. Definitions have narrowed since then; current definitions exclude the fungi and some of the algae.
By the definition used in this article, plants form the clade Viridiplantae (green plants), which consists of the green algae and the embryophytes or land plants (hornworts, liverworts, mosses, lycophytes, ferns, conifers and other gymnosperms, and flowering plants). A definition based on genomes includes the Viridiplantae, along with the red algae and the glaucophytes, in the clade Archaeplastida. There are about 380,000 known species of plants, of which the majority, some 260,000, produce seeds. They range in size from single cells to the tallest trees. Green plants provide a substantial proportion of the world's molecular oxygen; the sugars they create supply the energy for most of Earth's ecosystems; other organisms, including animals, either consume plants directly or rely on organisms which do so. Grain, fruit, and vegetables are basic human foods and have been domesticated for millennia. People use plants for many purposes, such as building materials, ornaments, writing materials, and, in great variety, for medicines. The scientific study of plants is known as botany, a branch of biology. Definition Taxonomic history All living things were traditionally placed into one of two groups, plants and animals. This classification dates from Aristotle (384–322 BC), who distinguished d The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Millions of years ago, plants used energy from the sun to form what? A. carbon compounds B. evolution C. fossil fuels D. greenhouse gases Answer:
sciq-8845
multiple_choice
Biomass is the mass of biological what?
[ "tissues", "lipids", "proteins", "organisms" ]
D
Relevant Documents: Document 0::: This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines. Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mind – neuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, they have provided information on certain diseases, which has overall aided in the understanding of human health. Basic life science branches Biology – scientific study of life Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans Astrobiology – the study of the formation and presence of life in the universe Bacteriology – study of bacteria Biotechnology – study of the combination of both the living organism and technology Biochemistry – study of the chemical reactions required for life to exist and function, usually with a focus on the cellular level Bioinformatics – developing methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum Document 1::: The following outline is provided as an overview of and topical guide to biophysics: Biophysics – interdisciplinary science that uses the methods of physics to study biological systems. Nature of biophysics Biophysics is An academic discipline – branch of knowledge that is taught and researched at the college or university level. Disciplines are defined (in part), and recognized by the academic journals in which research is published, and the learned societies and academic departments or faculties to which their practitioners belong. A scientific field (a branch of science) – widely recognized category of specialized expertise within science, and typically embodies its own terminology and nomenclature. Such a field will usually be represented by one or more scientific journals, where peer-reviewed research is published. A natural science – one that seeks to elucidate the rules that govern the natural world using empirical and scientific methods. A biological science – concerned with the study of living organisms, including their structure, function, growth, evolution, distribution, and taxonomy. A branch of physics – concerned with the study of matter and its motion through space and time, along with related concepts such as energy and force. An interdisciplinary field – field of science that overlaps with other sciences Scope of biophysics research Biomolecular scale Biomolecule Biomolecular structure Organismal scale Animal locomotion Biomechanics Biomineralization Motility Environmental scale Biophysical environment Biophysics research overlaps with Agrophysics Biochemistry Biophysical chemistry Bioengineering Biogeophysics Nanotechnology Systems biology Branches of biophysics Astrobiophysics – field of intersection between astrophysics and biophysics concerned with the influence of the astrophysical phenomena upon life on planet Earth or some other planet in general. 
Medical biophysics – interdisciplinary field that applies me Document 2::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. 
Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New Document 3::: The European Algae Biomass Association (EABA), established on 2 June 2009, is the European association representing both research and industry in the field of algae technologies. EABA was founded during its inaugural conference on 1–2 June 2009 at Villa La Pietra in Florence. The association is headquartered in Florence, Italy. History The first EABA's President, Prof. Dr. Mario Tredici, served a 2-year term since his election on 2 June 2009. The EABA Vice-presidents were Mr. Claudio Rochietta, (Oxem, Italy), Prof. Patrick Sorgeloos (University of Ghent, Belgium) and Mr. Marc Van Aken (SBAE Industries, Belgium). The EABA Executive Director was Mr. Raffaello Garofalo. EABA had 58 founding members and the EABA reached 79 members in 2011. The last election occurred on 3 December 2018 in Amsterdam. The EABA's President is Mr. Jean-Paul Cadoret (Algama / France). The EABA Vice-presidents are Prof. Dr. Sammy Boussiba (Ben-Gurion University of the Negev / Israel), Prof. Dr. Gabriel Acien (University of Almeria / Spain) and Dr. Alexandra Mosch (Germany). The EABA General Manager is Dr. Vítor Verdelho (A4F AlgaFuel, S.A. / Portugal) and Prof. Dr. Mario Tredici (University of Florence / Italy) is elected as Honorary President. Cooperation with other organisations ART Fuels Forum European Society of Biochemical Engineering Sciences Algae Biomass Organization Document 4::: MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States. Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. 
Around 40% of the materials are free to educators and students; the remainder require a subscription. The service is suspended with the message: "Please check back with us in 2017". External links MicrobeLibrary Microbiology The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Biomass is the mass of biological what? A. tissues B. lipids C. proteins D. organisms Answer:
sciq-4648
multiple_choice
What is defined in physics as the amount of force pushing against a given area?
[ "pressure", "resistance", "energy", "gravity" ]
A
Relevant Documents: Document 0::: The Force Concept Inventory is a test, developed by Hestenes, Halloun, Wells, and Swackhamer (1985), that measures mastery of concepts commonly taught in a first semester of physics. It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered. The 1995 version has 30 five-way multiple choice questions. Example question (question 4): Gender differences The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher. Document 1::: In common usage, the mass of an object is often referred to as its weight, though these are in fact different concepts and quantities. Nevertheless, one object will always weigh more than another with less mass if both are subject to the same gravity (i.e. the same gravitational field strength). In scientific contexts, mass is the amount of "matter" in an object (though "matter" may be difficult to define), but weight is the force exerted on an object's matter by gravity. At the Earth's surface, an object whose mass is exactly one kilogram weighs approximately 9.81 newtons, the product of its mass and the gravitational field strength there.
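The relation just stated, weight as the product of mass and local gravitational field strength, can be sketched numerically. The field-strength values below are standard approximate surface values used only for illustration; they are not taken from the source:

```python
# Weight as the product of mass and local gravitational field strength: W = m * g.
# Approximate surface field strengths in m/s^2 (illustrative constants).
GRAVITY = {
    "Earth": 9.81,
    "Mars": 3.72,
    "Saturn": 10.44,
}

def weight_newtons(mass_kg: float, g: float) -> float:
    """Return the weight in newtons of a mass (kg) in a field of strength g (m/s^2)."""
    return mass_kg * g

# A 1 kg object weighs about 9.81 N on Earth but far less on Mars,
# while its mass stays the same everywhere.
earth_w = weight_newtons(1.0, GRAVITY["Earth"])
mars_w = weight_newtons(1.0, GRAVITY["Mars"])
```

The same mass plugged into different field strengths yields different weights, which is the distinction the passage draws.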
The object's weight is less on Mars, where gravity is weaker; more on Saturn, where gravity is stronger; and very small in space, far from significant sources of gravity, but it always has the same mass. Material objects at the surface of the Earth have weight despite such sometimes being difficult to measure. An object floating freely on water, for example, does not appear to have weight since it is buoyed by the water. But its weight can be measured if it is added to water in a container which is entirely supported by and weighed on a scale. Thus, the "weightless object" floating in water actually transfers its weight to the bottom of the container (where the pressure increases). Similarly, a balloon has mass but may appear to have no weight or even negative weight, due to buoyancy in air. However the weight of the balloon and the gas inside it has merely been transferred to a large area of the Earth's surface, making the weight difficult to measure. The weight of a flying airplane is similarly distributed to the ground, but does not disappear. If the airplane is in level flight, the same weight-force is distributed to the surface of the Earth as when the plane was on the runway, but spread over a larger area. A better scientific definition of mass is its description as being a measure of inertia, which is the tendency of an Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. 
Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, or (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Vehicle It can be the external mechanical resistance against which a machine (such as a motor or engine), acts. The load can often be expressed as a curve of force versus speed. For instance, a given car traveling on a road of a given slope presents a load which the engine must act against. Because air resistance increases with speed, the motor must put out more torque at a higher speed in order to maintain the speed. By shifting to a higher gear, one may be able to meet the requirement with a higher torque and a lower engine speed, whereas shifting to a lower gear has the opposite effect. Accelerating increases the load, whereas decelerating decreases the load. Pump Similarly, the load on a pump depends on the head against which the pump is pumping, and on the size of the pump. Fan Similar considerations apply to a fan. See Affinity laws. See also Structural load Physical test Document 4::: In physics, work is the energy transferred to or from an object via the application of force along a displacement. In its simplest form, for a constant force aligned with the direction of motion, the work equals the product of the force strength and the distance traveled. A force is said to do positive work if when applied it has a component in the direction of the displacement of the point of application. A force does negative work if it has a component opposite to the direction of the displacement at the point of application of the force. For example, when a ball is held above the ground and then dropped, the work done by the gravitational force on the ball as it falls is positive, and is equal to the weight of the ball (a force) multiplied by the distance to the ground (a displacement). If the ball is thrown upwards, the work done by the gravitational force is negative, and is equal to the weight multiplied by the displacement in the upwards direction. Both force and displacement are vectors. The work done is given by the dot product of the two vectors. 
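The dot-product definition of work just stated can be sketched numerically. The force and displacement vectors below are made-up illustrations, not values from the source:

```python
def work_done(force: list[float], displacement: list[float]) -> float:
    """Work (J) as the dot product of a force vector (N) and a displacement vector (m)."""
    return sum(f * d for f, d in zip(force, displacement))

# A 10 N force along x acting through a 3 m displacement along x does 30 J of work.
w_aligned = work_done([10.0, 0.0], [3.0, 0.0])        # 30.0 J
# The same force does zero work over a purely perpendicular displacement.
w_perpendicular = work_done([10.0, 0.0], [0.0, 3.0])  # 0.0 J
# A displacement component opposite the force makes the work negative,
# as in the thrown-upwards ball example above.
w_negative = work_done([0.0, -9.81], [0.0, 2.0])      # -19.62 J
```

Positive, zero, and negative work fall out of the same dot product, matching the sign convention described in the passage.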
When the force is constant and the angle between the force and the displacement is also constant, then the work done is given by W = Fd cos θ, where F is the magnitude of the force, d the magnitude of the displacement, and θ the angle between them. Work is a scalar quantity, so it has only magnitude and no direction. Work transfers energy from one place to another, or one form to another. The SI unit of work is the joule (J), the same unit as for energy. History The ancient Greek understanding of physics was limited to the statics of simple machines (the balance of forces), and did not include dynamics or the concept of work. During the Renaissance the dynamics of the Mechanical Powers, as the simple machines were called, began to be studied from the standpoint of how far they could lift a load, in addition to the force they could apply, leading eventually to the new concept of mechanical work. The complete dynamic theory of simple machines was worked out by Italian scientist Galileo Galilei in 1600 in Le Meccaniche (On Me The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is defined in physics as the amount of force pushing against a given area? A. pressure B. resistance C. energy D. gravity Answer:
sciq-5004
multiple_choice
During the embryonic stage of vertebrates, the notochord develops into what?
[ "brain stem", "umbilical cord", "rib cage", "backbone" ]
D
Relevant Documents: Document 0::: Axial mesoderm, or chordamesoderm, is the mesoderm in the embryo that lies along the central axis under the neural tube. It will give rise to the notochord, starting as the notochordal process, whose formation finishes at day 20 in humans. It is important not only in forming the notochord itself but also in inducing development of the overlying ectoderm into the neural tube, and it will eventually induce the formation of vertebral bodies. The ventral floor of the notochordal process fuses with endoderm. The notochord will form the nucleus pulposus of intervertebral discs. There is some discussion as to whether these cells contributed from the notochord are replaced by others from the adjacent mesoderm. It gives rise to the notochordal process, which later becomes the notochord. Document 1::: Convergent extension (CE), sometimes called convergence and extension (C&E), is the process by which the tissue of an embryo is restructured to converge (narrow) along one axis and extend (elongate) along a perpendicular axis by cellular movement. Example and explanation An example of this process is where the anteroposterior axis (the axis drawn between the head and tail end of an embryo) becomes longer as the lateral tissues (those that make up the left and right sides of the embryo) move in towards the dorsal midline (the middle of the back of the animal). This process plays a crucial role in shaping the body plan during embryogenesis and occurs during gastrulation, neurulation, axis elongation, and organogenesis in both vertebrate and invertebrate embryos. In chordate animals, this process is utilized within a vast population of cells; from the smaller populations in the notochord of the sea squirt (ascidian) to the larger populations of the dorsal mesoderm and neural ectoderm of frogs (Xenopus) and fish.
Many characteristics of convergent extension are conserved in the teleost fish, the bird, and very likely within mammals at the molecular, cellular, and tissue level. In amphibians and fish Convergent extension has been primarily studied in frogs and fish due to their large embryo size and their development outside of a maternal host (in egg clutches in the water, as opposed to in a uterus). Within frogs and fish, however, there exist fundamental differences in how convergent extension is achieved. Frog embryogenesis utilizes cell rearrangement as the sole player of this process. Fish, on the other hand, utilize both cell rearrangement as well as directed migration (Fig. 1) . Cellular rearrangement is the process by which individual cells of a tissue rearrange to reshape the tissue as a whole, while cellular migration is the directed movement of a singular cell or small group of cells across a substrate such as a membrane or tissue. Frog (Xenopus), as Document 2::: In the development of vertebrate animals, the prechordal plate is a "uniquely thickened portion" of the endoderm that is in contact with ectoderm immediately rostral to the cephalic tip of the notochord. It is the most likely origin of the rostral cranial mesoderm. STAGE 6 The prechordal plate is a thickening of the endoderm at the cranial end of the primitive streak seen in Embryo Beneke by Hill J.P., Florian J (1963) STAGE 7 The prechordal plate is described as a median mass of cells, located at the anterior end of the notochord, which appears in early embryos as an integral part of the roof of the foregut. e.g. Embryos Bi 24 and Manchester 1285. and Gilbert P.W., (1957) STAGE 8 O'Rahilly R., Müller F. (1987) present a detailed discussion of the term 'prechordal plate' and its relation to the 'prochordal plate'. These essentially synonymous terms refer to the horseshoe-shaped band of thickened endoderm rostral to the notochord but not quite reaching the rostral extremity of the embryo. 
It reaches its maximum state of development at about this stage and contributes mesodermal type cells to the surrounding tissue. Cells derived from the prechordal plate become incorporated into the cephalic mesenchyme ( including the 'premandibular' condensation described by Gilbert P.W., (1957) and some of the foregut endoderm. STAGE 9 The prechordal plate is continuous rostrally with the cardiac mesenchyme and it is rotated caudo-ventrally as the cranial flexure develops and the head moves ventrally. STAGE 10 In the 10 somite embryo, Carnegie No. 5074, the prechordal plate is continuous posteriorly with the notochord, and is made up of about 35-40 cells. The prechordal mesenchyme proliferates laterally over the junction of the dorsal aorta and first aortic arch on each side. Gilbert P.W., (1957) STAGE 11 The prechordal plate contributes largely to the premandibular condensation and the mesenchyme of the heart such that little is seen in the median plane at this stage Document 3::: Vegetal rotation is a morphogenetic movement that drives mesoderm internalization during gastrulation in amphibian embryos. The internalization of vegetal cells prior to gastrulation was first observed in the 1930s by Abraham Mandel Schechtman through the use of vital dye labeling experiments in Triturus torosus embryos. More recently, Winklbauer and Schürfeld (1999) described the internal movements in more detail using pregastrular explants of Xenopus laevis. Gastrulation in amphibians is initiated by formation of bottle cells at the dorsal marginal zone, followed by involution of prospective mesodermal cells. The mesoderm and endoderm then migrate animally along the blastocoel roof, driven in part by movement of the vegetal endoderm cells. In Xenopus embryos in which the blastocoel roof is removed prior to gastrulation, the movement of vegetal cells toward the blastocoel and their intercalation into the blastocoel floor causes the floor to spread, pushing the dorsal edge downward. 
In the context of the embryo, active vegetal rotation, together with epiboly of the animal cap ectodermal cells, appears to bring the vegetal mesendoderm into contact with the blastocoel roof. This movement results in formation of Brachet's cleft. As gastrulation continues, further spreading of the blastocoel floor by upward movement of vegetal cells contributes to the advancement of the mesendoderm along the blastocoel roof. This process is aided by crawling mesodermal cells at the leading edge of the mesendoderm. Much like bottle cell formation at the blastopore lip, vegetal rotation begins at the dorsal side of the embryo, and spreads laterally to the ventral side. These processes, however, occur independently. While vegetal rotation appears to be important prior to and in the early stages of gastrulation, by stages 10.5–11, vegetal rotation ceases and further involution appears to be driven primarily by cell rearrangements. Document 4::: A teloblast is a large cell in the embryos of clitellate annelids which asymmetrically divide to form many smaller cells known as blast cells. These blast cells further proliferate and differentiate to form the segmental tissues of the annelid. Teloblasts are well studied in leeches, though they are also present in the other major class of clitellates: the oligochaetes. Developmental role and morphology All teloblasts are specified from the D quadrant macromere after the second round of divisions post-fertilization. There are five pairs of teloblasts, one on each side of the embryo. Four of the teloblasts (N, O, P, and Q) give rise to ectodermal tissue and one pair (M) gives rise to mesodermal tissue. The column of blast cells arising out of each teloblast is known as a bandlet. All five bandlets coalesce into one germinal band on each side of the embryo, extending out from the teloblast towards the head (in the rostral direction). The teloblasts are located at the rear of the embryo. 
Teloblasts have two separate cytoplasmic domains: the teloplasm and the vitelloplasm. The teloplasm contains the nucleus, ribosomes, mitochondria, and other subcellular organelles. The vitelloplasm contains mostly yolk platelets. Only the teloplasm gets passed onto the daughter stem cells after cell division. O/P specification The O and P teloblasts are specified from two separate but identical precursors, which form an equivalence group These two precursor cells are termed O/P cells for their ability to become either O or P teloblasts. Signals from the surrounding cells act to specify which fate the teloblasts and their progeny take on. Interactions with the q bandlet, however transient, can induce the p fate in the adjacent o/p bandlet. The M bandlet has been shown to In some species (i.e. Helobdella triserialis), the provisional epithelium covering the cells plays a role in inducing the O fate. In the absence of cell-cell interactions, the O/P precursors will become O tel The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. During the embryonic stage of vertebrates, the notochord develops into what? A. brain stem B. umbilical cord C. rib cage D. backbone Answer:
sciq-372
multiple_choice
Both diffusion and effusion are related to the speed at which what objects move?
[ "solids", "electricity", "gas molecules", "copper molecules" ]
C
Relevant Documents: Document 0::: Dispersive mass transfer, in fluid dynamics, is the spreading of mass from highly concentrated areas to less concentrated areas. It is one form of mass transfer. Dispersive mass flux is analogous to diffusion, and it can also be described using Fick's first law: J = −E ∂c/∂x, where J is the dispersive mass flux, c is mass concentration of the species being dispersed, E is the dispersion coefficient, and x is the position in the direction of the concentration gradient. Dispersion can be differentiated from diffusion in that it is caused by non-ideal flow patterns (i.e. deviations from plug flow) and is a macroscopic phenomenon, whereas diffusion is caused by random molecular motions (i.e. Brownian motion) and is a microscopic phenomenon. Dispersion is often more significant than diffusion in convection-diffusion problems. The dispersion coefficient is frequently modeled as the product of the fluid velocity, U, and some characteristic length scale, α: E = αU. Transport phenomena Document 1::: Transport Phenomena is the first textbook about transport phenomena. It is specifically designed for chemical engineering students. The first edition was published in 1960, two years after having been preliminarily published under the title Notes on Transport Phenomena based on mimeographed notes prepared for a chemical engineering course taught at the University of Wisconsin–Madison during the academic year 1957-1958. The second edition was published in August 2001. A revised second edition was published in 2007. This text is often known simply as BSL after its authors' initials. History As the chemical engineering profession developed in the first half of the 20th century, the concept of "unit operations" arose as being needed in the education of undergraduate chemical engineers. The theories of mass, momentum and energy transfer were being taught at that time only to the extent necessary for a narrow range of applications.
As chemical engineers began moving into a number of new areas, problem definitions and solutions required a deeper knowledge of the fundamentals of transport phenomena than those provided in the textbooks then available on unit operations. In the 1950s, R. Byron Bird, Warren E. Stewart and Edwin N. Lightfoot stepped forward to develop an undergraduate course at the University of Wisconsin–Madison to integrate the teaching of fluid flow, heat transfer, and diffusion. From this beginning, they prepared their landmark textbook Transport Phenomena. Subjects covered in the book The book is divided into three basic sections, named Momentum Transport, Energy Transport and Mass Transport: Momentum Transport Viscosity and the Mechanisms of Momentum Transport Momentum Balances and Velocity Distributions in Laminar Flow The Equations of Change for Isothermal Systems Velocity Distributions in Turbulent Flow Interphase Transport in Isothermal Systems Macroscopic Balances for Isothermal Flow Systems Energy Transport Thermal Conductivity and the Me Document 2::: The Stefan flow, occasionally called Stefan's flow, is a transport phenomenon concerning the movement of a chemical species by a flowing fluid (typically in the gas phase) that is induced to flow by the production or removal of the species at an interface. Any process that adds the species of interest to or removes it from the flowing fluid may cause the Stefan flow, but the most common processes include evaporation, condensation, chemical reaction, sublimation, ablation, adsorption, absorption, and desorption. It was named after the Slovenian physicist, mathematician, and poet Josef Stefan for his early work on calculating evaporation rates. The Stefan flow is distinct from diffusion as described by Fick's law, but diffusion almost always also occurs in multi-species systems that are experiencing the Stefan flow. 
In systems undergoing one of the species addition or removal processes mentioned previously, the addition or removal generates a mean flow in the flowing fluid as the fluid next to the interface is displaced by the production or removal of additional fluid by the processes occurring at the interface. The transport of the species by this mean flow is the Stefan flow. When concentration gradients of the species are also present, diffusion transports the species relative to the mean flow. The total transport rate of the species is then given by a summation of the Stefan flow and diffusive contributions. An example of the Stefan flow occurs when a droplet of liquid evaporates in air. In this case, the vapor/air mixture surrounding the droplet is the flowing fluid, and liquid/vapor boundary of the droplet is the interface. As heat is absorbed by the droplet from the environment, some of the liquid evaporates into vapor at the surface of the droplet, and flows away from the droplet as it is displaced by additional vapor evaporating from the droplet. This process causes the flowing medium to move away from the droplet at some mean speed that is dependent on Document 3::: At equilibrium, the relationship between water content and equilibrium relative humidity of a material can be displayed graphically by a curve, the so-called moisture sorption isotherm. For each humidity value, a sorption isotherm indicates the corresponding water content value at a given, constant temperature. If the composition or quality of the material changes, then its sorption behaviour also changes. Because of the complexity of sorption process the isotherms cannot be determined explicitly by calculation, but must be recorded experimentally for each product. The relationship between water content and water activity (aw) is complex. An increase in aw is usually accompanied by an increase in water content, but in a non-linear fashion. 
This relationship between water activity and moisture content at a given temperature is called the moisture sorption isotherm. These curves are determined experimentally and constitute the fingerprint of a food system. BET theory (Brunauer-Emmett-Teller) provides a calculation to describe the physical adsorption of gas molecules on a solid surface. Because of the complexity of the process, these calculations are only moderately successful; however, Stephen Brunauer was able to classify sorption isotherms into five generalized shapes as shown in Figure 2. He found that Type II and Type III isotherms require highly porous materials or desiccants, with first monolayer adsorption, followed by multilayer adsorption and finally leading to capillary condensation, explaining these materials high moisture capacity at high relative humidity. Care must be used in extracting data from isotherms, as the representation for each axis may vary in its designation. Brunauer provided the vertical axis as moles of gas adsorbed divided by the moles of the dry material, and on the horizontal axis he used the ratio of partial pressure of the gas just over the sample, divided by its partial pressure at saturation. More modern isotherms showing the Document 4::: The convection–diffusion equation is a combination of the diffusion and convection (advection) equations, and describes physical phenomena where particles, energy, or other physical quantities are transferred inside a physical system due to two processes: diffusion and convection. Depending on context, the same equation can be called the advection–diffusion equation, drift–diffusion equation, or (generic) scalar transport equation. 
Equation General The general equation is ∂c/∂t = ∇·(D∇c) − ∇·(vc) + R, where c is the variable of interest (species concentration for mass transfer, temperature for heat transfer), D is the diffusivity (also called diffusion coefficient), such as mass diffusivity for particle motion or thermal diffusivity for heat transport, v is the velocity field that the quantity is moving with. It is a function of time and space. For example, in advection, c might be the concentration of salt in a river, and then v would be the velocity of the water flow as a function of time and location. Another example, c might be the concentration of small bubbles in a calm lake, and then v would be the velocity of bubbles rising towards the surface by buoyancy (see below) depending on time and location of the bubble. For multiphase flows and flows in porous media, v is the (hypothetical) superficial velocity. R describes sources or sinks of the quantity c. For example, for a chemical species, R > 0 means that a chemical reaction is creating more of the species, and R < 0 means that a chemical reaction is destroying the species. For heat transport, R > 0 might occur if thermal energy is being generated by friction. ∇ represents gradient and ∇· represents divergence. In this equation, ∇c represents concentration gradient. Understanding the terms involved The right-hand side of the equation is the sum of three contributions. The first, ∇·(D∇c), describes diffusion. Imagine that c is the concentration of a chemical. When concentration is low somewhere compared to the surrounding areas (e.g. a local minimum of concentration), t The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Both diffusion and effusion are related to the speed at which what objects move? A. solids B. electricity C. gas molecules D. copper molecules Answer:
sciq-8589
multiple_choice
What kind of radiation can be used to disrupt the DNA-RNA protein synthesis cycle that allows bacteria to reproduce?
[ "infrared", "ionizing", "non-ionizing", "microwave" ]
B
Relevant Documents: Document 0::: Microbes can be damaged or killed by elements of their physical environment such as temperature, radiation, or exposure to chemicals; these effects can be exploited in efforts to control pathogens, often for the purpose of food safety. Irradiation Irradiation is the use of ionising gamma rays emitted by cobalt-60 and caesium-137, or high-energy electrons and X-rays, to inactivate microbial pathogens, particularly in the food industry. Bacteria such as Deinococcus radiodurans are particularly resistant to radiation, but are not pathogenic. Active microbes, such as Corynebacterium aquaticum, Pseudomonas putida, Comamonas acidovorans, Gluconobacter cerinus, Micrococcus diversus and Rhodococcus rhodochrous, have been retrieved from spent nuclear fuel storage pools at the Idaho National Engineering and Environmental Laboratory (INEEL). These microbes were again exposed to controlled doses of radiation. All the species survived weaker radiation doses with little damage, while only the gram-positive species survived much larger doses. The spores of gram-positive bacteria contain storage proteins that bind tightly to DNA, possibly acting as a protective barrier to radiation damage. Ionising radiation kills cells indirectly by creating reactive free radicals. These free radicals can chemically alter sensitive macromolecules in the cell leading to their inactivation. Most of the cell's macromolecules are affected by ionising radiation, but damage to the DNA macromolecule is most often the cause of cell death, since DNA often contains only a single copy of its genes; proteins, on the other hand, often have several copies so that damage of one will not lead to cell death, and in any case may always be re-synthesized provided the DNA has remained intact. Ultraviolet radiation has been used as a germicide by both industry and medicine for more than a century (see Ultraviolet germicidal irradiation).
Use of ultraviolet leads to both inactivation and the stimulation of mutations. Document 1::: A microbeam is a narrow beam of radiation, of micrometer or sub-micrometer dimensions. Together with integrated imaging techniques, microbeams allow precisely defined quantities of damage to be introduced at precisely defined locations. Thus, the microbeam is a tool for investigators to study intra- and inter-cellular mechanisms of damage signal transduction. A schematic of microbeam operation is shown on the right. Essentially, an automated imaging system locates user-specified targets, and these targets are sequentially irradiated, one by one, with a highly-focused radiation beam. Targets can be single cells, sub-cellular locations, or precise locations in 3D tissues. Key features of a microbeam are throughput, precision, and accuracy. While irradiating targeted regions, the system must guarantee that adjacent locations receive no energy deposition. History The first microbeam facilities were developed in the mid-1990s. These facilities were a response to challenges in studying radiobiological processes using broadbeam exposures. Microbeams were originally designed to address two main issues: The belief that the radiation-sensitivity of the nucleus was not uniform, and The need to be able to hit an individual cell with an exact number (particularly one) of particles for low dose risk assessment. Additionally, microbeams were seen as ideal vehicles to investigate the mechanisms of radiation response. Radiation-sensitivity of the cell At the time it was believed that radiation damage to cells was entirely the result of damage to DNA. Charged particle microbeams could probe the radiation sensitivity of the nucleus, which at the time appeared not to be uniformly sensitive. Experiments performed at microbeam facilities have since shown the existence of a bystander effect.
A bystander effect is any biological response to radiation in cells or tissues that did not experience a radiation traversal. These "bystander" cells are neighbors of cells that have experience Document 2::: Radicidation is a specific case of food irradiation where the dose of ionizing radiation applied to the food is sufficient to reduce the number of viable specific non-spore-forming pathogenic bacteria to such a level that none are detectable when the treated food is examined by any recognized method. The required dose is in the range of 2 – 8 kGy. The term may also be applied to the destruction of parasites such as tapeworm and trichina in meat, in which case the required dose is in the range of 0.1 – 1 kGy. When the process is used specifically for destroying enteropathogenic and enterotoxinogenic organisms belonging to the genus Salmonella, it is referred to as Salmonella radicidation. The term Radicidation is derived from radiation and 'caedere' (Latin for fell, cut, kill). See also Radappertization Radurization Document 3::: MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States. Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students, the remainder require a subscription. the service is suspended with the message to: "Please check back with us in 2017". External links MicrobeLibrary Microbiology Document 4::: Radioresistance is the level of ionizing radiation that organisms are able to withstand. 
Ionizing-radiation-resistant organisms (IRRO) were defined as organisms for which the dose of acute ionizing radiation (IR) required to achieve 90% reduction (D10) is greater than 1,000 gray (Gy). Radioresistance is surprisingly high in many organisms, in contrast to previously held views. For example, the study of the environment, animals and plants around the Chernobyl disaster area has revealed an unexpected survival of many species, despite the high radiation levels. A Brazilian study in a hill in the state of Minas Gerais, which has high natural radiation levels from uranium deposits, has also shown many radioresistant insects, worms and plants. Certain extremophiles, such as the bacteria Deinococcus radiodurans and the tardigrades, can withstand large doses of ionizing radiation on the order of 5,000 Gy. Induced radioresistance In the graph on the left, a dose/survival curve for a hypothetical group of cells has been drawn with and without a rest time for the cells to recover. Other than the recovery time partway through the irradiation, the cells would have been treated identically. Radioresistance may be induced by exposure to small doses of ionizing radiation. Several studies have documented this effect in yeast, bacteria, protozoa, algae, plants, insects, as well as in in vitro mammalian and human cells and in animal models. Several cellular radioprotection mechanisms may be involved, such as alterations in the levels of some cytoplasmic and nuclear proteins and increased gene expression, DNA repair and other processes. Biophysical models have also provided a general basis for this phenomenon. Many organisms have been found to possess a self-repair mechanism that can be activated by exposure to radiation in some cases. Two examples of this self-repair process in humans are described below.
Devair Alves Ferreira received a large dose (7.0 Gy) during the Goiânia accident, and The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What kind of radiation can be used to disrupt the dna-rna protein synthesis cycle that allows bacteria to reproduce? A. infrared B. ionizing C. non-ionizing D. microwave Answer:
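The D10 quantity used in the radioresistance definition above can be illustrated with a simple log-linear survival model, in which each D10 of absorbed dose kills 90% of the remaining population. The model and the example numbers are illustrative assumptions only; real survival curves often show shoulders, as the induced-radioresistance discussion implies:

```python
import math

def surviving_fraction(dose_gy, d10_gy):
    """Surviving fraction under a log-linear inactivation model:
    every D10 of absorbed dose reduces the population tenfold.
    (Illustrative sketch only; real curves often have shoulders.)"""
    return 10 ** (-dose_gy / d10_gy)

# An organism at the IRRO threshold mentioned above, D10 = 1,000 Gy:
print(surviving_fraction(1000, 1000))   # 0.1 -> a 90% reduction
print(surviving_fraction(5000, 1000))   # five decades of kill
```

Under this model, the 5,000 Gy tolerated by Deinococcus radiodurans would correspond to five successive tenfold reductions for an organism right at the 1,000 Gy threshold.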
sciq-1042
multiple_choice
The chemical behavior of elements can largely be explained by what?
[ "chemical configuration", "neutron configuration", "electron configurations", "proton configuration" ]
C
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: The periodic table is an arrangement of the chemical elements, structured by their atomic number, electron configuration and recurring chemical properties. In the basic form, elements are presented in order of increasing atomic number, in the reading sequence. Then, rows and columns are created by starting new rows and inserting blank cells, so that rows (periods) and columns (groups) show elements with recurring properties (called periodicity). For example, all elements in group (column) 18 are noble gases that are largely—though not completely—unreactive. The history of the periodic table reflects over two centuries of growth in the understanding of the chemical and physical properties of the elements, with major contributions made by Antoine-Laurent de Lavoisier, Johann Wolfgang Döbereiner, John Newlands, Julius Lothar Meyer, Dmitri Mendeleev, Glenn T. Seaborg, and others. Early history Nine chemical elements – carbon, sulfur, iron, copper, silver, tin, gold, mercury, and lead, have been known since before antiquity, as they are found in their native form and are relatively simple to mine with primitive tools. Around 330 BCE, the Greek philosopher Aristotle proposed that everything is made up of a mixture of one or more roots, an idea originally suggested by the Sicilian philosopher Empedocles. The four roots, which the Athenian philosopher Plato called elements, were earth, water, air and fire. Similar ideas about these four elements existed in other ancient traditions, such as Indian philosophy. A few extra elements were known in the age of alchemy: zinc, arsenic, antimony, and bismuth. Platinum was also known to pre-Columbian South Americans, but knowledge of it did not reach Europe until the 16th century. 
First categorizations The history of the periodic table is also a history of the discovery of the chemical elements. The first person in recorded history to discover a new element was Hennig Brand, a bankrupt German merchant. Brand tried to discover Document 2::: This page shows the electron configurations of the neutral gaseous atoms in their ground states. For each atom the subshells are given first in concise form, then with all subshells written out, followed by the number of electrons per shell. Electron configurations of elements beyond hassium (element 108) have never been measured; predictions are used below. As an approximate rule, electron configurations are given by the Aufbau principle and the Madelung rule. However there are numerous exceptions; for example the lightest exception is chromium, which would be predicted to have the configuration , written as , but whose actual configuration given in the table below is . Note that these electron configurations are given for neutral atoms in the gas phase, which are not the same as the electron configurations for the same atoms in chemical environments. In many cases, multiple configurations are within a small range of energies and the irregularities shown below do not necessarily have a clear relation to chemical behaviour. For the undiscovered eighth-row elements, mixing of configurations is expected to be very important, and sometimes the result can no longer be well-described by a single configuration. See also Extended periodic table#Electron configurations – Predictions for undiscovered elements 119–173 and 184 Document 3::: A nonmetal is a chemical element that mostly lacks metallic properties. Seventeen elements are generally considered nonmetals, though some authors recognize more or fewer depending on the properties considered most representative of metallic or nonmetallic character. Some borderline elements further complicate the situation. 
Nonmetals tend to have low density and high electronegativity (the ability of an atom in a molecule to attract electrons to itself). They range from colorless gases like hydrogen to shiny solids like the graphite form of carbon. Nonmetals are often poor conductors of heat and electricity, and when solid tend to be brittle or crumbly. In contrast, metals are good conductors and most are pliable. While compounds of metals tend to be basic, those of nonmetals tend to be acidic. The two lightest nonmetals, hydrogen and helium, together make up about 98% of the observable ordinary matter in the universe by mass. Five nonmetallic elements—hydrogen, carbon, nitrogen, oxygen, and silicon—make up the overwhelming majority of the Earth's crust, atmosphere, oceans and biosphere. The distinct properties of nonmetallic elements allow for specific uses that metals often cannot achieve. Elements like hydrogen, oxygen, carbon, and nitrogen are essential building blocks for life itself. Moreover, nonmetallic elements are integral to industries such as electronics, energy storage, agriculture, and chemical production. Most nonmetallic elements were not identified until the 18th and 19th centuries. While a distinction between metals and other minerals had existed since antiquity, a basic classification of chemical elements as metallic or nonmetallic emerged only in the late 18th century. Since then nigh on two dozen properties have been suggested as single criteria for distinguishing nonmetals from metals. Definition and applicable elements Properties mentioned hereafter refer to the elements in their most stable forms in ambient conditions unless otherwise Document 4::: In chemistry and physics, the iron group refers to elements that are in some way related to iron; mostly in period (row) 4 of the periodic table. The term has different meanings in different contexts. 
In chemistry, the term is largely obsolete, but it often means iron, cobalt, and nickel, also called the iron triad; or, sometimes, other elements that resemble iron in some chemical aspects. In astrophysics and nuclear physics, the term is still quite common, and it typically means those three plus chromium and manganese—five elements that are exceptionally abundant, both on Earth and elsewhere in the universe, compared to their neighbors in the periodic table. Titanium and vanadium are also produced in Type Ia supernovae. General chemistry In chemistry, "iron group" used to refer to iron and the next two elements in the periodic table, namely cobalt and nickel. These three comprised the "iron triad". They are the top elements of groups 8, 9, and 10 of the periodic table; or the top row of "group VIII" in the old (pre-1990) IUPAC system, or of "group VIIIB" in the CAS system. These three metals (and the three of the platinum group, immediately below them) were set aside from the other elements because they have obvious similarities in their chemistry, but are not obviously related to any of the other groups. The iron group and its alloys exhibit ferromagnetism. The similarities in chemistry were noted as one of Döbereiner's triads and by Adolph Strecker in 1859. Indeed, Newlands' "octaves" (1865) were harshly criticized for separating iron from cobalt and nickel. Mendeleev stressed that groups of "chemically analogous elements" could have similar atomic weights as well as atomic weights which increase by equal increments, both in his original 1869 paper and his 1889 Faraday Lecture. Analytical chemistry In the traditional methods of qualitative inorganic analysis, the iron group consists of those cations which have soluble chlorides; and are not precipitated The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The chemical behavior of elements can largely be explained by what? A. 
chemical configuration B. neutron configuration C. electron configurations D. proton configuration Answer:
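The Aufbau/Madelung filling order described in the electron-configuration passage above can be sketched in a few lines. This produces only the idealized prediction; as that passage notes, real atoms such as chromium and copper deviate from it:

```python
def electron_configuration(z):
    """Ground-state configuration predicted by the Madelung (n + l) rule:
    subshells fill in order of increasing n + l, ties broken by lower n.
    Idealized Aufbau prediction only; exceptions like Cr and Cu deviate."""
    subshells = sorted(
        ((n, l) for n in range(1, 8) for l in range(n)),
        key=lambda nl: (nl[0] + nl[1], nl[0]),
    )
    labels, config = "spdfghi", []
    for n, l in subshells:
        if z <= 0:
            break
        e = min(z, 4 * l + 2)  # subshell capacity is 2(2l + 1)
        config.append(f"{n}{labels[l]}{e}")
        z -= e
    return " ".join(config)

# Sodium (Z = 11): a single 3s valence electron after a neon core.
print(electron_configuration(11))  # 1s2 2s2 2p6 3s1
```

Running it for potassium (Z = 19) correctly places the last electron in 4s rather than 3d, since 4s (n + l = 4) fills before 3d (n + l = 5) under this rule.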
sciq-6316
multiple_choice
Most reptiles reproduce sexually and have what type of fertilization?
[ "mechanical", "external", "additional", "internal" ]
D
Relevant Documents: Document 0::: An associated reproductive pattern is a seasonal change in reproduction which is highly correlated with a change in gonad and associated hormone. Notable Model Organisms Parthenogenic Whiptail Lizards Document 1::: Roshd Biological Education is a quarterly science educational magazine covering recent developments in biology and biology education for a Persian-speaking biology teacher audience. Founded in 1985, it is published by The Teaching Aids Publication Bureau, Organization for Educational Planning and Research, Ministry of Education, Iran. Roshd Biological Education has an editorial board composed of Iranian biologists, experts in biology education, science journalists and biology teachers. It is read by both biology teachers and students, as a way of launching innovations and new trends in biology education, and helping biology teachers to teach biology in better and more effective ways. Magazine layout As of Autumn 2012, the magazine is laid out as follows: Editorial—often offering a point of view from the editor in chief on educational and/or biological topics. Explore—New research methods and results on biology and/or education. World—Reports and explorations on biological education worldwide. In Brief—Summaries of research news and discoveries. Trends—showing how new technology is altering the way we live our lives. Point of View—Offering personal commentaries on contemporary topics. Essay or Interview—often with a pioneering biological and/or educational researcher or an influential scientific educational leader. Muslim Biologists—Short histories of Muslim Biologists. Environment—An article on the Iranian environment and its problems. News and Reports—Offering short news and reports of events on biology education. In Brief—Short articles explaining interesting facts. Questions and Answers—Questions about biology concepts and their answers. Book and periodical Reviews—About new publications on biology and/or education.
Reactions—Letter to the editors. Editorial staff Mohammad Karamudini, editor in chief History Roshd Biological Education started in 1985 together with many other magazines in other science and art. The first editor was Dr. Nouri-Dalooi, th Document 2::: External fertilization is a mode of reproduction in which a male organism's sperm fertilizes a female organism's egg outside of the female's body. It is contrasted with internal fertilization, in which sperm are introduced via insemination and then combine with an egg inside the body of a female organism. External fertilization typically occurs in water or a moist area to facilitate the movement of sperm to the egg. The release of eggs and sperm into the water is known as spawning. In motile species, spawning females often travel to a suitable location to release their eggs. However, sessile species are less able to move to spawning locations and must release gametes locally. Among vertebrates, external fertilization is most common in amphibians and fish. Invertebrates utilizing external fertilization are mostly benthic, sessile, or both, including animals such as coral, sea anemones, and tube-dwelling polychaetes. Benthic marine plants also use external fertilization to reproduce. Environmental factors and timing are key challenges to the success of external fertilization. While in the water, the male and female must both release gametes at similar times in order to fertilize the egg. Gametes spawned into the water may also be washed away, eaten, or damaged by external factors. Sexual selection Sexual selection may not seem to occur during external fertilization, but there are ways it actually can. The two types of external fertilizers are nest builders and broadcast spawners. For female nest builders, the main choice is the location of where to lay her eggs. 
A female can choose a nest close to the male she wants to fertilize her eggs, but there is no guarantee that the preferred male will fertilize any of the eggs. Broadcast spawners have a very weak selection, due to the randomness of releasing gametes. To look into the effect of female choice on external fertilization, an in vitro sperm competition experiment was performed. The results concluded that ther Document 3::: Temperature-dependent sex determination (TSD) is a type of environmental sex determination in which the temperatures experienced during embryonic/larval development determine the sex of the offspring. It is observed in reptiles and teleost fish, with some reports of it occurring in species of shrimp. TSD differs from the chromosomal sex-determination systems common among vertebrates. It is the most studied type of environmental sex determination (ESD). Some other conditions, e.g. density, pH, and environmental background color, are also observed to alter sex ratio, which could be classified either as temperature-dependent sex determination or temperature-dependent sex differentiation, depending on the involved mechanisms. As sex-determining mechanisms, TSD and genetic sex determination (GSD) should be considered in an equivalent manner, which can lead to reconsidering the status of fish species that are claimed to have TSD when submitted to extreme temperatures instead of the temperature experienced during development in the wild, since changes in sex ratio with temperature variation are ecologically and evolutionarily relevant. While TSD has been observed in many reptile and fish species, the genetic differences between sexes and molecular mechanisms of TSD have not been determined. The cortisol-mediated pathway and epigenetic regulatory pathway are thought to be the potential mechanisms involved in TSD. The eggs are affected by the temperature at which they are incubated during the middle one-third of embryonic development.
This critical period of incubation is known as the thermosensitive period. The specific time of sex-commitment is known due to several authors resolving histological chronology of sex differentiation in the gonads of turtles with TSD. Thermosensitive period The thermosensitive, or temperature-sensitive, period is the period during development when sex is irreversibly determined. It is used in reference to species with temperature-dependent Document 4::: Sexual characteristics are physical traits of an organism (typically of a sexually dimorphic organism) which are indicative of or resultant from biological sexual factors. These include both primary sex characteristics, such as gonads, and secondary sex characteristics. Humans In humans, sex organs or primary sexual characteristics, which are those a person is born with, can be distinguished from secondary sex characteristics, which develop later in life, usually during puberty. The development of both is controlled by sex hormones produced by the body after the initial fetal stage where the presence or absence of the Y-chromosome and/or the SRY gene determine development. Male primary sex characteristics are the penis, the scrotum and the ability to ejaculate when matured. Female primary sex characteristics are the vagina, uterus, fallopian tubes, clitoris, cervix, and the ability to give birth and menstruate when matured. Hormones that express sexual differentiation in humans include: estrogens progesterone androgens such as testosterone The following table lists the typical sexual characteristics in humans (even though some of these can also appear in other animals as well): Other organisms In invertebrates and plants, hermaphrodites (which have both male and female reproductive organs either at the same time or during their life cycle) are common, and in many cases, the norm. In other varieties of multicellular life (e.g. 
the fungi division, Basidiomycota) sexual characteristics can be much more complex, and may involve many more than two sexes. For details on the sexual characteristics of fungi, see: Hypha and Plasmogamy. Secondary sex characteristics in non-human animals include manes of male lions, long tail feathers of male peafowl, the tusks of male narwhals, enlarged proboscises in male elephant seals and proboscis monkeys, the bright facial and rump coloration of male mandrills, and horns in many goats and antelopes. See also Mammalian gesta The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Most reptiles reproduce sexually and have what type of fertilization? A. mechanical B. external C. additional D. internal Answer:
sciq-6428
multiple_choice
Lattice energy cannot be measured directly. What is its calculation based on?
[ "change in temperature", "microscopic inspection", "measured energy changes", "chemical reactions" ]
C
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work. Description Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments. The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. 
These are often considered interdisciplinary fields, but are also of interest to engineering mathematics. Specialized branches include engineering optimization and engineering statistics. Engineering mathematics in tertiary educ Document 2::: In chemistry, the lattice energy is the energy change upon formation of one mole of a crystalline ionic compound from its constituent ions, which are assumed to initially be in the gaseous state. It is a measure of the cohesive forces that bind ionic solids. The size of the lattice energy is connected to many other physical properties including solubility, hardness, and volatility. Since it generally cannot be measured directly, the lattice energy is usually deduced from experimental data via the Born–Haber cycle. Lattice energy and lattice enthalpy The concept of lattice energy was originally applied to the formation of compounds with structures like rocksalt (NaCl) and sphalerite (ZnS) where the ions occupy high-symmetry crystal lattice sites. In the case of NaCl, lattice energy is the energy change of the reaction Na+ (g) + Cl− (g) → NaCl (s) which amounts to −786 kJ/mol. Some chemistry textbooks as well as the widely used CRC Handbook of Chemistry and Physics define lattice energy with the opposite sign, i.e. as the energy required to convert the crystal into infinitely separated gaseous ions in vacuum, an endothermic process. Following this convention, the lattice energy of NaCl would be +786 kJ/mol. Both sign conventions are widely used. The relationship between the lattice energy and the lattice enthalpy at pressure p is given by the following equation: ΔU_lattice = ΔH_lattice − pΔV_m, where ΔU_lattice is the lattice energy (i.e., the molar internal energy change), ΔH_lattice is the lattice enthalpy, and ΔV_m the change of molar volume due to the formation of the lattice. Since the molar volume of the solid is much smaller than that of the gases, ΔV_m < 0.
The formation of a crystal lattice from ions in vacuum must lower the internal energy due to the net attractive forces involved, and so ΔU_L < 0. The −pΔV_m term is positive but is relatively small at low pressures, and so the value of the lattice enthalpy is also negative (and exothermic). Theoretical treatments The lattice energy of an ionic compound depends strongly upo Document 3::: The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work. History It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council. Function Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres. STEM ambassadors To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell. Funding STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also The WISE Campaign Engineering and Physical Sciences Research Council National Centre for Excellence in Teaching Mathematics Association for Science Education Glossary of areas of mathematics Glossary of astronomy Glossary of biology Glossary of chemistry Glossary of engineering Glossary of physics Document 4::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. 
The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Lattice energy cannot be measured directly. What is its calculation based on? A. change in temperature B. microscopic inspection C. measured energy changes D. chemical reactions Answer:
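The Born–Haber deduction mentioned in the lattice-energy passage can be made concrete with a worked cycle for NaCl. The step values below are commonly tabulated approximate figures in kJ/mol, quoted here only for illustration:

```latex
% Hess's law around the Born--Haber cycle for NaCl (approximate tabulated values, kJ/mol)
\begin{align*}
\Delta H_f^\circ(\mathrm{NaCl,\,s})
  &= \Delta H_{\mathrm{sub}}(\mathrm{Na})
   + \tfrac{1}{2}D(\mathrm{Cl_2})
   + IE_1(\mathrm{Na})
   + EA(\mathrm{Cl})
   + \Delta H_L \\
-411 &= 107 + 122 + 496 + (-349) + \Delta H_L
  \quad\Longrightarrow\quad
  \Delta H_L \approx -787\ \mathrm{kJ/mol}
\end{align*}
```

Every quantity on the right-hand side except ΔH_L can be obtained from calorimetric or spectroscopic measurements, which is why the lattice energy is said to be "deduced from experimental data" (measured energy changes) rather than measured directly.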
ai2_arc-266
multiple_choice
A student is trying to identify a mineral that has a nonmetallic luster and is black. It can also be scratched with a fingernail. According to the mineral reference sheet, the unidentified mineral is most likely
[ "mica.", "magnetite.", "hornblende.", "quartz." ]
A
Relevant Documents: Document 0::: Mineral tests are several methods which can help identify the mineral type. This is used widely in mineralogy, hydrocarbon exploration and general mapping. There are over 4,000 known mineral types, each with different sub-classes. Elements make minerals and minerals make rocks, so testing minerals in the lab and in the field is essential to understanding the history of the rock, which aids data, zonation, metamorphic history, processes involved and other minerals. The following tests are used on specimens and on thin sections through a polarizing microscope. Color Color of the mineral. This is not mineral specific. For example, quartz can be almost any color, shape and within many rock types. Streak Color of the mineral's powder. This can be found by rubbing the mineral onto a concrete surface. This is more accurate but not always mineral specific. Lustre This is the way light reflects from the mineral's surface. A mineral can be metallic (shiny) or non-metallic (not shiny). Transparency The way light travels through minerals. The mineral can be transparent (clear), translucent (cloudy) or opaque (none). Specific gravity Ratio between the weight of the mineral relative to an equal volume of water. Mineral habitat The shape of the crystal and habitat. Magnetism Magnetic or nonmagnetic. Can be tested by using a magnet or a compass. This does not apply to all iron minerals (for example, pyrite). Cleavage Number, behaviour, size and way cracks fracture in the mineral. UV fluorescence Many minerals glow when put under a UV light. Radioactivity Is the mineral radioactive or non-radioactive? This is measured by a Geiger counter. Taste This is not recommended. Is the mineral salty, bitter or does it have no taste? Bite Test This is not recommended. This involves biting a mineral to see if it's generally soft or hard. This was used in early gold exploration to tell the difference between pyrite (fool's gold, hard) and gold (soft).
Hardness The Mohs Hardn Document 1::: See also List of minerals Document 2::: Microcline (KAlSi3O8) is an important igneous rock-forming tectosilicate mineral. It is a potassium-rich alkali feldspar. Microcline typically contains minor amounts of sodium. It is common in granite and pegmatites. Microcline forms during slow cooling of orthoclase; it is more stable at lower temperatures than orthoclase. Sanidine is a polymorph of alkali feldspar stable at yet higher temperature. Microcline may be clear, white, pale-yellow, brick-red, or green; it is generally characterized by cross-hatch twinning that forms as a result of the transformation of monoclinic orthoclase into triclinic microcline. The chemical compound name is potassium aluminium silicate, and it is known as E number reference E555. Geology Microcline may be chemically the same as monoclinic orthoclase, but because it belongs to the triclinic crystal system, the prism angle is slightly less than right angles; hence the name "microcline" from the Greek "small slope." It is a fully ordered triclinic modification of potassium feldspar and is dimorphous with orthoclase. Microcline is identical to orthoclase in many physical properties, and can be distinguished by x-ray or optical examination. When viewed under a polarizing microscope, microcline exhibits a minute multiple twinning which forms a grating-like structure that is unmistakable. Perthite is either microcline or orthoclase with thin lamellae of exsolved albite. Amazon stone, or amazonite, is a green variety of microcline. It is not found anywhere in the Amazon Basin, however. The Spanish explorers who named it apparently confused it with another green mineral from that region. The largest documented single crystals of microcline were found in Devils Hole Beryl Mine, Colorado, US and measured ~50x36x14 m. This could be one of the largest crystals of any material found so far. Microcline is commonly used for the manufacturing of porcelain. 
As food additive The chemical compound name is potassium aluminium silicate, and it Document 3::: Molybdenite is a mineral of molybdenum disulfide, MoS2. Similar in appearance and feel to graphite, molybdenite has a lubricating effect that is a consequence of its layered structure. The atomic structure consists of a sheet of molybdenum atoms sandwiched between sheets of sulfur atoms. The Mo-S bonds are strong, but the interaction between the sulfur atoms at the top and bottom of separate sandwich-like tri-layers is weak, resulting in easy slippage as well as cleavage planes. Molybdenite crystallizes in the hexagonal crystal system as the common polytype 2H and also in the trigonal system as the 3R polytype. Description Occurrence Molybdenite occurs in high temperature hydrothermal ore deposits. Its associated minerals include pyrite, chalcopyrite, quartz, anhydrite, fluorite, and scheelite. Important deposits include the disseminated porphyry molybdenum deposits at Questa, New Mexico and the Henderson and Climax mines in Colorado. Molybdenite also occurs in porphyry copper deposits of Arizona, Utah, and Mexico. The element rhenium is always present in molybdenite as a substitute for molybdenum, usually in the parts per million (ppm ) range, but often up to 1–2%. High rhenium content results in a structural variety detectable by X-ray diffraction techniques. Molybdenite ores are essentially the only source for rhenium. The presence of the radioactive isotope rhenium-187 and its daughter isotope osmium-187 provides a useful geochronologic dating technique. Features Molybdenite is extremely soft with a metallic luster, and is superficially almost identical to graphite, to the point where it is not possible to positively distinguish between the two minerals without scientific equipment. It marks paper in much the same way as graphite. Its distinguishing feature from graphite is its higher specific gravity, as well as its tendency to occur in a matrix. 
Uses Molybdenite is an important ore of molybdenum, and is the most common source of the metal. While Document 4::: Materials science has shaped the development of civilizations since the dawn of mankind. Better materials for tools and weapons has allowed mankind to spread and conquer, and advancements in material processing like steel and aluminum production continue to impact society today. Historians have regarded materials as such an important aspect of civilizations such that entire periods of time have defined by the predominant material used (Stone Age, Bronze Age, Iron Age). For most of recorded history, control of materials had been through alchemy or empirical means at best. The study and development of chemistry and physics assisted the study of materials, and eventually the interdisciplinary study of materials science emerged from the fusion of these studies. The history of materials science is the study of how different materials were used and developed through the history of Earth and how those materials affected the culture of the peoples of the Earth. The term "Silicon Age" is sometimes used to refer to the modern period of history during the late 20th to early 21st centuries. Prehistory In many cases, different cultures leave their materials as the only records; which anthropologists can use to define the existence of such cultures. The progressive use of more sophisticated materials allows archeologists to characterize and distinguish between peoples. This is partially due to the major material of use in a culture and to its associated benefits and drawbacks. Stone-Age cultures were limited by which rocks they could find locally and by which they could acquire by trading. The use of flint around 300,000 BCE is sometimes considered the beginning of the use of ceramics. The use of polished stone axes marks a significant advance, because a much wider variety of rocks could serve as tools. 
The innovation of smelting and casting metals in the Bronze Age started to change the way that cultures developed and interacted with each other. Starting around 5,500 BCE, The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. A student is trying to identify a mineral that has a nonmetallic luster and is black. It can also be scratched with a fingernail. According to the mineral reference sheet, the unidentified mineral is most likely A. mica. B. magnetite. C. hornblende. D. quartz. Answer:
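The reference-sheet reasoning behind the mineral question above (match luster and color, then use the fingernail scratch test as a hardness cut-off) can be sketched in a few lines. The property table and the fingernail threshold of about 2.5 on the Mohs scale are illustrative approximations, not an authoritative data set:

```python
# Minimal sketch of a mineral-reference-sheet lookup.
# Hardness values are approximate Mohs numbers for typical hand samples.
REFERENCE_SHEET = {
    "mica":       {"luster": "nonmetallic", "color": "black", "hardness": 2.5},
    "magnetite":  {"luster": "metallic",    "color": "black", "hardness": 6.0},
    "hornblende": {"luster": "nonmetallic", "color": "black", "hardness": 5.5},
    "quartz":     {"luster": "nonmetallic", "color": "clear", "hardness": 7.0},
}

FINGERNAIL_HARDNESS = 2.5  # a fingernail scratches anything at or below this


def identify(luster, color, scratched_by_fingernail):
    """Return the reference-sheet minerals matching the observed tests."""
    matches = []
    for name, props in REFERENCE_SHEET.items():
        if props["luster"] != luster or props["color"] != color:
            continue
        soft = props["hardness"] <= FINGERNAIL_HARDNESS
        if soft == scratched_by_fingernail:
            matches.append(name)
    return matches


print(identify("nonmetallic", "black", scratched_by_fingernail=True))  # ['mica']
```

With these (hypothetical) table values, only mica survives all three tests, matching answer A.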
sciq-9155
multiple_choice
Amino acids broken down by metabolic processes are mostly recycled into new what?
[ "enzymes", "hormones", "lipids", "proteins" ]
D
Relevant Documents: Document 0::: In molecular biology, protein catabolism is the breakdown of proteins into smaller peptides and ultimately into amino acids. Protein catabolism is a key function of the digestive process. Protein catabolism often begins with pepsin, which converts proteins into polypeptides. These polypeptides are then further degraded. In humans, the pancreatic proteases include trypsin, chymotrypsin, and other enzymes. In the intestine, the small peptides are broken down into amino acids that can be absorbed into the bloodstream. These absorbed amino acids can then undergo amino acid catabolism, where they are utilized as an energy source or as precursors to new proteins. The amino acids produced by catabolism may be directly recycled to form new proteins, converted into different amino acids, or can undergo amino acid catabolism to be converted to other compounds via the Krebs cycle. Interface with other metabolic and salvage pathways Protein catabolism produces amino acids that are used to form bacterial proteins or oxidized to meet the energy needs of the cell. The amino acids that are produced by protein catabolism can then be further catabolized in amino acid catabolism. Among the several degradative processes for amino acids are deamination (removal of an amino group), transamination (transfer of an amino group), decarboxylation (removal of a carboxyl group), and dehydrogenation (removal of hydrogen). Degradation of amino acids can function as part of a salvage pathway, whereby parts of degraded amino acids are used to create new amino acids, or as part of a metabolic pathway whereby the amino acid is broken down to release or recapture chemical energy. For example, the chemical energy that is released by oxidization in a dehydrogenation reaction can be used to reduce NAD+ to NADH, which can then be fed directly into the Krebs/Citric Acid (TCA) Cycle. Protein degradation Protein degradation differs from protein catabolism.
Proteins are produced and destroyed routinely as par Document 1::: Catabolism () is the set of metabolic pathways that breaks down molecules into smaller units that are either oxidized to release energy or used in other anabolic reactions. Catabolism breaks down large molecules (such as polysaccharides, lipids, nucleic acids, and proteins) into smaller units (such as monosaccharides, fatty acids, nucleotides, and amino acids, respectively). Catabolism is the breaking-down aspect of metabolism, whereas anabolism is the building-up aspect. Cells use the monomers released from breaking down polymers to either construct new polymer molecules or degrade the monomers further to simple waste products, releasing energy. Cellular wastes include lactic acid, acetic acid, carbon dioxide, ammonia, and urea. The formation of these wastes is usually an oxidation process involving a release of chemical free energy, some of which is lost as heat, but the rest of which is used to drive the synthesis of adenosine triphosphate (ATP). This molecule acts as a way for the cell to transfer the energy released by catabolism to the energy-requiring reactions that make up anabolism. Catabolism is a destructive metabolism and anabolism is a constructive metabolism. Catabolism, therefore, provides the chemical energy necessary for the maintenance and growth of cells. Examples of catabolic processes include glycolysis, the citric acid cycle, the breakdown of muscle protein in order to use amino acids as substrates for gluconeogenesis, the breakdown of fat in adipose tissue to fatty acids, and oxidative deamination of neurotransmitters by monoamine oxidase. Catabolic hormones There are many signals that control catabolism. Most of the known signals are hormones and the molecules involved in metabolism itself. Endocrinologists have traditionally classified many of the hormones as anabolic or catabolic, depending on which part of metabolism they stimulate. 
The so-called classic catabolic hormones known since the early 20th century are cortisol, glucagon, and Document 2::: This is a list of articles that describe particular biomolecules or types of biomolecules. A For substances with an A- or α- prefix such as α-amylase, please see the parent page (in this case Amylase). A23187 (Calcimycin, Calcium Ionophore) Abamectine Abietic acid Acetic acid Acetylcholine Actin Actinomycin D Adenine Adenosine Adenosine diphosphate (ADP) Adenosine monophosphate (AMP) Adenosine triphosphate (ATP) Adenylate cyclase Adiponectin Adonitol Adrenaline, epinephrine Adrenocorticotropic hormone (ACTH) Aequorin Aflatoxin Agar Alamethicin Alanine Albumins Aldosterone Aleurone Alpha-amanitin Alpha-MSH (Melanocyte stimulating hormone) Allantoin Allethrin α-Amanitin, see Alpha-amanitin Amino acid Amylase (also see α-amylase) Anabolic steroid Anandamide (ANA) Androgen Anethole Angiotensinogen Anisomycin Antidiuretic hormone (ADH) Anti-Müllerian hormone (AMH) Arabinose Arginine Argonaute Ascomycin Ascorbic acid (vitamin C) Asparagine Aspartic acid Asymmetric dimethylarginine ATP synthase Atrial-natriuretic peptide (ANP) Auxin Avidin Azadirachtin A – C35H44O16 B Bacteriocin Beauvericin beta-Hydroxy beta-methylbutyric acid beta-Hydroxybutyric acid Bicuculline Bilirubin Biopolymer Biotin (Vitamin H) Brefeldin A Brassinolide Brucine Butyric acid C
as well as exogenous chemicals (such as drugs, environmental contaminants, food additives, toxins and other xenobiotics) that are not naturally produced by an organism. In other words, there is both an endogenous metabolome and an exogenous metabolome. The endogenous metabolome can be further subdivided to include a "primary" and a "secondary" metabolome (particularly when referring to plant or microbial metabolomes). A primary metabolite is directly involved in the normal growth, development, and reproduction. A secondary metabolite is not directly involved in those processes, but usually has important ecological function. Secondary metabolites may include pigments, antibiotics or waste products derived from partially metabolized xenobiotics. The study of the metabolome is called metabolomics. Origins The word metabolome appears to be a blending of the words "metabolite" and "chromosome". It was constructed to imply that metabolites are indirectly encoded by genes or act on genes and gene products. The term "metabolome" was first used in 1998 and was likely coined to match with existing biological terms referring to the complete set of genes (the genome), the complete set of proteins (the proteome) and the complete set of transcripts (the transcriptome). The first book on metabolomics was published in 2003. The first journal dedicated to metabolomics (titled simply "Metabolomics") was launched in 2005 and is currently edited by Prof. Roy Goodacre. Some of the m Document 4::: Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. 
Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs, as well as organism structure and function. Biochemistry is closely related to molecular biology, which is the study of the molecular mechanisms of biological phenomena. Much of biochemistry deals with the structures, bonding, functions, and interactions of biological macromolecules, such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers, with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles a The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Amino acids broken down by metabolic processes are mostly recycled into new what? A. enzymes B. hormones C. lipids D. proteins Answer:
sciq-4837
multiple_choice
What is the most common bacterial STI in the U.S.?
[ "diarrhea", "chlamydia", "influenza", "tuberculosis" ]
B
Relevant Documents: Document 0::: Single Best Answer (SBA or One Best Answer) is a written examination form of multiple choice questions used extensively in medical education. Structure A single question is posed with typically five alternate answers, from which the candidate must choose the best answer. This method avoids the problems of past examinations of a similar form described as Single Correct Answer. The older form can produce confusion where more than one of the possible answers has some validity. The newer form makes it explicit that more than one answer may have elements that are correct, but that one answer will be superior. Prior to the widespread introduction of SBAs into medical education, the typical form of examination was true-false multiple choice questions. But during the 2000s, educators found that SBAs would be superior. Document 1::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools to which the student plans to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19, 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students.
This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a Document 2::: Sexually transmitted infections (STIs), also referred to as sexually transmitted diseases (STDs), are infections that are commonly spread by sexual activity, especially vaginal intercourse, anal sex and oral sex. The most prevalent STIs may be carried by a significant fraction of the human population. Document 3::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. 
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 4::: Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices". This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. 
This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown by AP exam grade distributions. Topic outline The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area: The course is based on and tests six skills, called scientific practices which include: In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions. Exam Students are allowed to use a four-function, scientific, or graphing calculator. The exam has two sections: a 90-minute multiple-choice section and a 90-minute free-response section. There are 60 multiple-choice questions and six free responses, two long and four short. Both sections are worth 50% of the score. Score distribution Commonly used textbooks Biology, AP Edition by Sylvia Mader (2012, hardcover ) Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, ) Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson ) See also Glossary of biology A.P. Bio (TV show) The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the most common bacterial STI in the U.S.? A. diarrhea B. chlamydia C. influenza D. tuberculosis Answer:
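The SAT Subject Test passage above quotes a concrete scoring rule: one point per correct answer, minus a quarter point per incorrect answer, and zero for blanks. That rule can be sketched directly (the example counts are made up for illustration):

```python
def sat_raw_score(correct, incorrect, blank):
    """Raw score under the rule quoted above:
    +1 per correct answer, -1/4 per incorrect answer, 0 for blanks."""
    return correct * 1.0 - incorrect * 0.25 + blank * 0.0


# e.g. 60 correct, 12 incorrect, 8 blank on an 80-question test
print(sat_raw_score(60, 12, 8))  # 57.0
```

The quarter-point penalty matches the five answer choices: random guessing then has an expected value of zero, which is the design rationale for the deduction.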
sciq-6504
multiple_choice
Many alcohols are made by the hydration of what?
[ "enzymes", "lipids", "alkenes", "malts" ]
C
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. 
A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 2::: Sugars in wine are at the heart of what makes winemaking possible. During the process of fermentation, sugars from wine grapes are broken down and converted by yeast into alcohol (ethanol) and carbon dioxide. Grapes accumulate sugars as they grow on the grapevine through the translocation of sucrose molecules that are produced by photosynthesis from the leaves. During ripening the sucrose molecules are hydrolyzed (separated) by the enzyme invertase into glucose and fructose. By the time of harvest, between 15 and 25% of the grape will be composed of simple sugars. Both glucose and fructose are six-carbon sugars but three-, four-, five- and seven-carbon sugars are also present in the grape. Not all sugars are fermentable, with sugars like the five-carbon arabinose, rhamnose and xylose still being present in the wine after fermentation. Very high sugar content will effectively kill the yeast once a certain (high) alcohol content is reached. For these reasons, no wine is ever fermented completely "dry" (meaning without any residual sugar). Sugar's role in dictating the final alcohol content of the wine (and such its resulting body and "mouth-feel") sometimes encourages winemakers to add sugar (usually sucrose) during winemaking in a process known as chaptalization solely in order to boost the alcohol content – chaptalization does not increase the sweetness of a wine. 
Sucrose Sucrose is a disaccharide, a molecule composed of the two monosaccharides glucose and fructose. Invertase is the enzyme that cleaves the glycosidic linkage between the glucose and fructose molecules. In most wines, there will be very little sucrose, since it is not a natural constituent of grapes and sucrose added for the purpose of chaptalisation will be consumed in the fermentation. The exception to this rule is Champagne and other sparkling wines, to which an amount of liqueur d'expédition (typically sucrose dissolved in a still wine) is added after the second fermentation in bottle, a practice
Charles Sturt University - Wagga Wagga, New South Wales Curtin University of Technology - Perth, Western Australia Melbourne Polytechnic/La Trobe University - Melbourne Australia Queensland College of Wine Tourism - Stanthorpe, Queensland University of Adelaide - Adelaide, South Australia Brazil Federal Institute of Rio Grande do Sul - Bento Gonçalves, Porto Alegre, Feliz, Sertão, Canoas, Porto Alegre-Restinga, Caxias do Sul, Osório, Erechim, and Rio Grande Federal University of Pampa - Dom Pedrito Campus, Rio Grande do Sul Canada Brock University - St. Catharines, Ontario France Official National Diploma of Oenology: Instit Document 4::: Biochemical engineering, also known as bioprocess engineering, is a field of study with roots stemming from chemical engineering and biological engineering. It mainly deals with the design, construction, and advancement of unit processes that involve biological organisms (such as fermentation) or organic molecules (often enzymes) and has various applications in areas of interest such as biofuels, food, pharmaceuticals, biotechnology, and water treatment processes. The role of a biochemical engineer is to take findings developed by biologists and chemists in a laboratory and translate that to a large-scale manufacturing process. History For hundreds of years, humans have made use of the chemical reactions of biological organisms in order to create goods. In the mid-1800s, Louis Pasteur was one of the first people to look into the role of these organisms when he researched fermentation. His work also contributed to the use of pasteurization, which is still used to this day. By the early 1900s, the use of microorganisms had expanded, and was used to make industrial products. Up to this point, biochemical engineering hadn't developed as a field yet. It wasn't until 1928 when Alexander Fleming discovered penicillin that the field of biochemical engineering was established. 
After this discovery, samples were gathered from around the world in order to continue research into the characteristics of microbes from places such as soils, gardens, forests, rivers, and streams. Today, biochemical engineers can be found working in a variety of industries, from food to pharmaceuticals. This is due to the increasing need for efficiency and production which requires knowledge of how biological systems and chemical reactions interact with each other and how they can be used to meet these needs. Education Biochemical engineering is not a major offered by most universities and is instead an area of interest under the chemical engineering major in most cases. The following universiti The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The hydration of what is what makes many alcohols? A. enzymes B. lipids C. alkenes D. malts Answer:
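The wine documents above note that sugar content largely dictates a wine's final alcohol level. As a minimal numerical sketch of that relationship, the snippet below assumes the common winemaking rule of thumb of roughly 16.8 g/L of fermentable sugar per 1% potential alcohol by volume; the function name and the constant are illustrative assumptions, not values from the source text.

```python
# Hypothetical helper: estimate potential alcohol (% ABV) from fermentable
# sugar, using the approximate rule of thumb that ~16.8 g/L of sugar yields
# about 1% alcohol by volume if fully fermented.
SUGAR_PER_PERCENT_ABV = 16.8  # g/L per 1% ABV (approximate)

def potential_abv(sugar_g_per_l: float) -> float:
    """Estimated % alcohol by volume if all sugar ferments to ethanol."""
    return sugar_g_per_l / SUGAR_PER_PERCENT_ABV

# A ripe must: grapes are 15-25% simple sugars, roughly 150-250 g/L.
print(round(potential_abv(220.0), 1))  # 13.1
```

This also shows why chaptalization raises alcohol rather than sweetness: the added sucrose is consumed by the yeast, so only the ABV estimate changes.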
sciq-7621
multiple_choice
What is the key to the demographic transition?
[ "higher expatriation", "higher birth rates", "reduced family size", "higher death rates" ]
C
Relevant Documents: Document 0::: In demography, replacement migration is a theory of migration needed for a region to achieve a particular objective (demographic, economic or social). Generally, studies using this concept have as an objective to avoid the decline of total population and the decline of the working-age population. Often, these overall declines in the population are influenced by low fertility rates. When fertility is lower than the replacement level of 2.1 children per woman and there is a longer life expectancy, this changes the age structure over time. Overall, the population will start to decline as there will not be enough children born to replace the population of people lost, and the proportion of older individuals composing the population will continue to increase. One concern from this is that the age-dependency ratio will be affected, as the working-age population will have more dependents in older age to support. Therefore, replacement migration has been proposed as a mechanism to combat declining population size and aging populations, and to help replenish the number of people in the working-age groups. Projections calculating replacement migration are primarily demographic and theoretical exercises, not forecasts or recommendations. However, this demographic information can help prompt governments to facilitate replacement migration by making policy changes. The concept of replacement migration may vary according to the study and depending on the context in which it applies. It may be a number of annual immigrants, a net migration, an additional number of immigrants compared to a reference scenario, etc. Types of replacement migration Replacement migration may take several forms because several population-projection scenarios can achieve the same aim. However, two forms predominate: minimal replacement migration and constant replacement migration.
Minimal replacement migration Replacement migration is a minimum migration without surplus to achieve a chosen obj Document 1::: The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields. Description The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions. The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.” Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers. Current efforts The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. 
Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo Document 2::: Agequake: Riding the Demographic Rollercoaster Shaking Business, Finance and our World is a book written by Paul Wallace and published in 1999, that investigates what possible ramifications are likely as a significant and unprecedented portion of the human population age. The book argues that rising longevity and lower fertility is causing a seismic shift in the profile of populations worldwide, and will be a fundamental force to that will shake business and finance, along with lifestyles and attitudes. Wallace suggests the old bogey of overpopulation is being replaced by a population "implosion". Through using dependency ratios (the ratio of non-working dependents to the working population) will lead to a point where workers will be burdened with the fiscal and practical responsibilities of supporting a ballooning population of aged retirees. Society and economy will be affected as the proportion of youth declines - typically the most entrepreneurial, creating and risk taking segment of society. Along with the liquidation of baby boomer assets to pay for their retirements, this is likely to halt economic growth in the future, and economic stagnation may be a more likely prospect. Housing prices will plummet, and the world may experience the greatest bear market in history. Internationally the relationship between youthful and aggressive developing world and the rich older Organisation for Economic Co-operation and Development (OECD) countries (where elderly women will become an influential constituency) will change. See also Generational accounting Document 3::: Diversity Explosion: How New Racial Demographics are Remaking America is a 2014 non-fiction book by William H. Frey. 
A look into how racial and ethnic diversity and changing demographics are altering the United States, Diversity Explosion is published and distributed by the Brookings Institution Press. Frey is a senior fellow at the Brookings Institution Metropolitan Policy Program. Document 4::: An underrepresented group describes a subset of a population that holds a smaller percentage within a significant subgroup than the subset holds in the general population. Specific characteristics of an underrepresented group vary depending on the subgroup being considered. Underrepresented groups in STEM United States Underrepresented groups in science, technology, engineering, and mathematics in the United States include women and some minorities. In the United States, women made up 50% of the college-educated workers in 2010, but only 28% of the science and engineering workers. Other underrepresented groups in science and engineering included African Americans, Native Americans, Alaskan Natives, and Hispanics, who collectively formed 26% of the population, but accounted for only 10% of the science and engineering workers. This 2015 study found that women make up just 26% of the computing workforce and 12% of the engineering workforce; African American, Hispanic, and Native American women are especially underrepresented in these industries. (McBride & McBride, 2018). Underrepresented groups in computing, a subset of the STEM fields, include Hispanics, and African-Americans. In the United States in 2015, Hispanics were 15% of the population and African-Americans were 13%, but their representation in the workforces of major tech companies in technical positions typically runs less than 5% and 3%, respectively. Similarly, women, providing approximately 50% of the general population, typically comprise less than 20% of the technology and leadership positions in the major technology companies. 
When it comes to the engineering and computing workforce, which accounts for more than 80% of STEM jobs, women remain dramatically underrepresented, as documented in the American Association of University Women's (AAUW) recent research report Solving the Equation: The Variables for Women's Success in Engineering and Computing (McBride & McBride, 2018). Women were underrepres The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the key to the demographic transition? A. higher expatriation B. higher birth rates C. reduced family size D. higher death rates Answer:
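The replacement-migration document above leans on the age-dependency ratio: dependents per working-age person. A minimal sketch of that calculation follows; the age bands (under 15, 15-64, over 64) and the sample numbers are illustrative assumptions, not figures from the source.

```python
# Age-dependency ratio: dependents (young + old) per 100 working-age people.
# Higher values mean each worker supports more non-workers.
def dependency_ratio(young: int, working_age: int, old: int) -> float:
    """Dependents per 100 working-age persons."""
    return 100.0 * (young + old) / working_age

# An aging population: relatively few children, many retirees per worker.
print(round(dependency_ratio(young=15_000, working_age=60_000, old=12_000), 1))  # 45.0
```

Replacement migration aims to hold this ratio down by adding people directly to the working-age band.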
sciq-11617
multiple_choice
What instrument has a resolution many times greater than a light microscope, and can be used to see the details on the outside of a cell?
[ "element microscope", "electron microscope", "complex microscope", "molecular microscope" ]
B
Relevant Documents: Document 0::: Microscope image processing is a broad term that covers the use of digital image processing techniques to process, analyze and present images obtained from a microscope. Such processing is now commonplace in a number of diverse fields such as medicine, biological research, cancer research, drug testing, metallurgy, etc. A number of microscope manufacturers now specifically design in features that allow their microscopes to interface with an image processing system. Image acquisition Until the early 1990s, most image acquisition in video microscopy applications was typically done with an analog video camera, often simply a closed-circuit TV camera. While this required the use of a frame grabber to digitize the images, video cameras provided images at full video frame rate (25-30 frames per second), allowing live video recording and processing. While the advent of solid-state detectors yielded several advantages, the real-time video camera was actually superior in many respects. Today, acquisition is usually done using a CCD camera mounted in the optical path of the microscope. The camera may be full colour or monochrome. Very often, very high resolution cameras are employed to gain as much direct information as possible. Cryogenic cooling is also common, to minimise noise. Often digital cameras used for this application provide pixel intensity data to a resolution of 12-16 bits, much higher than is used in consumer imaging products. Ironically, in recent years, much effort has been put into acquiring data at video rates, or higher (25-30 frames per second or higher). What was once easy with off-the-shelf video cameras now requires special, high-speed electronics to handle the vast digital data bandwidth. Higher-speed acquisition allows dynamic processes to be observed in real time, or stored for later playback and analysis.
Combined with the high image resolution, this approach can generate vast quantities of raw data, which can be a challenge to deal with, even Document 1::: A comparison microscope is a device used to analyze side-by-side specimens. It consists of two microscopes connected by an optical bridge, which results in a split view window enabling two separate objects to be viewed simultaneously. This avoids the observer having to rely on memory when comparing two objects under a conventional microscope. History One of the first prototypes of a comparison microscope was developed in 1913 in Germany. In 1929, using a comparison microscope adapted for forensic ballistics, Calvin Goddard and his partner Phillip Gravelle were able to absolve the Chicago Police Department of participation in the St. Valentine's Day Massacre. Col. Calvin H. Goddard Philip O. Gravelle, a chemist, developed a comparison microscope for use in the identification of fired bullets and cartridge cases with the support and guidance of forensic ballistics pioneer Calvin Goddard. It was a significant advance in the science of firearms identification in forensic science. The firearm from which a bullet or cartridge case has been fired is identified by the comparison of the unique striae left on the bullet or cartridge case from the worn, machined metal of the barrel, breach block, extractor, or firing pin in the gun. It was Gravelle who mistrusted his memory. "As long as he could inspect only one bullet at a time with his microscope, and had to keep the picture of it in his memory until he placed the comparison bullet under the microscope, scientific precision could not be attained. He therefore developed the comparison microscope and Goddard made it work." Calvin Goddard perfected the comparison microscope and subsequently popularized its use. Sir Sydney Smith also appreciated the idea, emphasizing its importance in forensic science and firearms identification. 
He took the comparison microscope to Scotland and introduced it to the European scientists for firearms identification and other forensic science needs. Modern comparison microscope The modern inst Document 2::: ClearVolume is an open source real-time live 3D visualization library designed for high-end volumetric light sheet microscopes. ClearVolume enables the live visualization of microscope data - allowing the biologists to immediately decide whether a sample is worth imaging. ClearVolume can easily be integrated into existing Java, C/C++, Python, or LabVIEW based microscope software. It has a dedicated interface to MicroManager/OpenSpim/OpenSpin control software. ClearVolume supports multi-channels, live 3D data streaming from remote microscopes, and uses a multi-pass Fibonacci rendering algorithm that can handle large volumes. Moreover, ClearVolume is integrated into the FiJi/ImageJ2/KNIME ecosystem. See also FiJi KNIME Light sheet fluorescence microscopy Volume rendering Document 3::: PSF Lab is a software program that allows the calculation of the illumination point spread function (PSF) of a confocal microscope under various imaging conditions. The calculation of the electric field vectors is based on a rigorous, vectorial model that takes polarization effects in the near-focus region and high numerical aperture microscope objectives into account. The polarization of the input beam (assumed to be collimated and monochromatic) can be chosen freely (linear, circular, or elliptic). Furthermore, a constant or Gaussian shaped input beam intensity profile can be assumed. On its way from the objective to the focus, the illumination light passes through up to three stratified optical layers, which allows the simulation of an immersion oil/air (layer 1) objective that focusses light through a glass cover slip (layer 2) into the sample medium (layer 3). Each layer is characterized by its (constant) refractive index and thickness. 
PSF Lab can also simulate microscope objectives that are corrected for certain refractive indices and cover slip thicknesses (design parameters). Thus, any deviations from the ideal imaging conditions for which the objective was designed for are properly taken into account. The following optical parameters can be selected: Input beam Wavelength Gaussian profile filling parameter (0 = constant profile) Polarization (linear, circular, elliptic) Outputs Individual field components Squared field components Intensity Microscope objective Numerical aperture Optical media Refractive index (design and actual) Thickness (design and actual) Depth (focus position within medium 3) The program calculates only 2D section of the PSF, but several calculations can be stacked (with a third party program) to obtain the full 3D PSF. Calculations are organized in "sets", each with its own set of parameters. Loops can be set up such that PSF Lab calculates one or several sets, increasing the resolution of the calculated images in eac Document 4::: A microtome (from the Greek mikros, meaning "small", and temnein, meaning "to cut") is a cutting tool used to produce extremely thin slices of material known as sections, with the process being termed microsectioning. Important in science, microtomes are used in microscopy for the preparation of samples for observation under transmitted light or electron radiation. Microtomes use steel, glass or diamond blades depending upon the specimen being sliced and the desired thickness of the sections being cut. Steel blades are used to prepare histological sections of animal or plant tissues for light microscopy. Glass knives are used to slice sections for light microscopy and to slice very thin sections for electron microscopy. Industrial grade diamond knives are used to slice hard materials such as bone, teeth and tough plant matter for both light microscopy and for electron microscopy. 
Gem-quality diamond knives are also used for slicing thin sections for electron microscopy. Microtomy is a method for the preparation of thin sections for materials such as bones, minerals and teeth, and an alternative to electropolishing and ion milling. Microtome sections can be made thin enough to section a human hair across its breadth, with section thickness between 50 nm and 100 μm. History In the beginnings of light microscope development, sections from plants and animals were manually prepared using razor blades. It was found that to observe the structure of the specimen under observation it was important to make clean reproducible cuts on the order of 100 μm, through which light can be transmitted. This allowed for the observation of samples using light microscopes in a transmission mode. One of the first devices for the preparation of such cuts was invented in 1770 by George Adams, Jr. (1750–1795) and further developed by Alexander Cummings. The device was hand operated, and the sample held in a cylinder and sections created from the top of the sample using a hand crank. In The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What instrument has a resolution many times greater than a light microscope, and can be used to see the details on the outside of a cell? A. element microscope B. electron microscope C. complex microscope D. molecular microscope Answer:
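The answer to the question above (electron microscope) follows from wavelength: resolution is diffraction-limited, and electrons can have wavelengths far shorter than visible light. The sketch below compares the standard Abbe limit d = λ/(2·NA) for a light microscope against the de Broglie wavelength λ = h/√(2·m·e·V) of electrons; the non-relativistic formula is only indicative at high accelerating voltages, and the chosen wavelength, NA, and voltage are illustrative assumptions.

```python
import math

# Rough comparison: diffraction-limited resolution of visible light vs the
# de Broglie wavelength of electrons in an electron microscope.
H = 6.626e-34         # Planck constant, J*s
M_E = 9.109e-31       # electron rest mass, kg
E_CHARGE = 1.602e-19  # elementary charge, C

def abbe_limit(wavelength_m: float, na: float) -> float:
    """Abbe diffraction limit d = lambda / (2 * NA)."""
    return wavelength_m / (2.0 * na)

def electron_wavelength(volts: float) -> float:
    """Non-relativistic de Broglie wavelength for an electron at this voltage."""
    return H / math.sqrt(2.0 * M_E * E_CHARGE * volts)

light = abbe_limit(550e-9, na=1.4)        # green light, oil-immersion objective
electron = electron_wavelength(100_000.0)  # 100 kV accelerating voltage
print(f"{light * 1e9:.0f} nm vs {electron * 1e12:.1f} pm")
```

The picometre-scale electron wavelength is why electron microscopes resolve detail orders of magnitude finer than any light microscope, even though lens aberrations keep them well short of this theoretical limit.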
sciq-7254
multiple_choice
Alpha and beta decay occur when a nucleus has too many protons or an unstable ratio of what?
[ "nucleus to neutrons", "electrons to neutrons", "protons to neutrons", "atoms to neutrons" ]
C
Relevant Documents: Document 0::: In nuclear physics, beta decay (β-decay) is a type of radioactive decay in which an atomic nucleus emits a beta particle (a fast, energetic electron or positron), transforming into an isobar of that nuclide. For example, beta decay of a neutron transforms it into a proton by the emission of an electron accompanied by an antineutrino; or, conversely, a proton is converted into a neutron by the emission of a positron with a neutrino in so-called positron emission. Neither the beta particle nor its associated (anti-)neutrino exist within the nucleus prior to beta decay, but are created in the decay process. By this process, unstable atoms obtain a more stable ratio of protons to neutrons. The probability of a nuclide decaying due to beta and other forms of decay is determined by its nuclear binding energy. The binding energies of all existing nuclides form what is called the nuclear band or valley of stability. For either electron or positron emission to be energetically possible, the energy release (see below) or Q value must be positive. Beta decay is a consequence of the weak force, which is characterized by relatively lengthy decay times. Nucleons are composed of up quarks and down quarks, and the weak force allows a quark to change its flavour by emission of a W boson, leading to the creation of an electron/antineutrino or positron/neutrino pair. For example, a neutron, composed of two down quarks and an up quark, decays to a proton composed of a down quark and two up quarks. Electron capture is sometimes included as a type of beta decay, because the basic nuclear process, mediated by the weak force, is the same. In electron capture, an inner atomic electron is captured by a proton in the nucleus, transforming it into a neutron, and an electron neutrino is released. Description The two types of beta decay are known as beta minus and beta plus.
In beta minus (β−) decay, a neutron is converted to a proton, and the process creates an electron and an electron antineutrino; Document 1::: In nuclear physics, properties of a nucleus depend on evenness or oddness of its atomic number (proton number) Z, neutron number N and, consequently, of their sum, the mass number A. Most importantly, oddness of both Z and N tends to lower the nuclear binding energy, making odd nuclei generally less stable. This effect is not only experimentally observed, but is included in the semi-empirical mass formula and explained by some other nuclear models, such as the nuclear shell model. This difference of nuclear binding energy between neighbouring nuclei, especially of odd-A isobars, has important consequences for beta decay. The nuclear spin is zero for even-Z, even N nuclei, integer for all even-A nuclei, and odd half-integer for all odd-A nuclei. The neutron–proton ratio is not the only factor affecting nuclear stability. Adding neutrons to isotopes can vary their nuclear spins and nuclear shapes, causing differences in neutron capture cross sections and gamma spectroscopy and nuclear magnetic resonance properties. If too many or too few neutrons are present with regard to the nuclear binding energy optimum, the nucleus becomes unstable and subject to certain types of nuclear decay. Unstable nuclides with a nonoptimal number of neutrons or protons decay by beta decay (including positron decay), electron capture, or other means, such as spontaneous fission and cluster decay. Even mass number Even-mass-number nuclides, which comprise 150/251 = ~60% of all stable nuclides, are bosons, i.e., they have integer spin. 145 of the 150 are even-proton, even-neutron (EE) nuclides, which necessarily have spin 0 because of pairing. The remainder of the stable bosonic nuclides are five odd-proton, odd-neutron stable nuclides (, , , and ), all having a non-zero integer spin. 
Pairing effects Beta decay of an even–even nucleus produces an odd–odd nucleus, and vice versa. An even number of protons or of neutrons are more stable (higher binding energy) because of pairing effects, Document 2::: When embedded in an atomic nucleus, neutrons are (usually) stable particles. Outside the nucleus, free neutrons are unstable and have a mean lifetime of (about , ). Therefore, the half-life for this process (which differs from the mean lifetime by a factor of ) is (about , ). (An article published in October 2021, arrives at for the mean lifetime). The beta decay of the neutron described in this article can be notated at four slightly different levels of detail, as shown in four layers of Feynman diagrams in a section below. The hard-to-observe quickly decays into an electron and its matching antineutrino. The subatomic reaction shown immediately above depicts the process as it was first understood, in the first half of the 20th century. The boson () vanished so quickly that it was not detected until much later. Later, beta decay was understood to occur by the emission of a weak boson (), sometimes called a charged weak current. Beta decay specifically involves the emission of a boson from one of the down quarks hidden within the neutron, thereby converting the down quark into an up quark and consequently the neutron into a proton. The following diagram gives a summary sketch of the beta decay process according to the present level of understanding. For diagrams at several levels of detail, see § Decay process, below. Energy budget For the free neutron, the decay energy for this process (based on the rest masses of the neutron, proton and electron) is . That is the difference between the rest mass of the neutron and the sum of the rest masses of the products. That difference has to be carried away as kinetic energy. 
The maximal energy of the beta decay electron (in the process wherein the neutrino receives a vanishingly small amount of kinetic energy) has been measured at . The latter number is not well-enough measured to determine the comparatively tiny rest mass of the neutrino (which must in theory be subtracted from the maximal electron kinetic ene Document 3: This article summarizes equations in the theory of nuclear physics and particle physics. Definitions Equations Nuclear structure
- Mass number: A = (relative) atomic mass = mass number = sum of protons and neutrons; N = number of neutrons; Z = atomic number = number of protons = number of electrons.
- Mass in nuclei: Mnuc = mass of nucleus (bound nucleons); MΣ = sum of masses for isolated nucleons; mp = proton rest mass; mn = neutron rest mass.
- Nuclear radius: r0 ≈ 1.2 fm; hence (approximately) nuclear volume ∝ A and nuclear surface ∝ A^(2/3).
- Nuclear binding energy, empirical curve: dimensionless parameters to fit experiment: EB = binding energy, av = nuclear volume coefficient, as = nuclear surface coefficient, ac = electrostatic interaction coefficient, aa = symmetry/asymmetry extent coefficient for the numbers of neutrons/protons; where (due to pairing of nuclei) δ(N, Z) = +1 for even N, even Z; δ(N, Z) = −1 for odd N, odd Z; δ(N, Z) = 0 for odd A.
Nuclear decay Nuclear scattering theory The following apply for the nuclear reaction: a + b ↔ R → c in the centre of mass frame, where a and b are the initial species about to collide, c is the final species, and R is the resonant state.
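The binding-energy entry above names the fitted coefficients but not the curve itself. A sketch of the standard semi-empirical (Bethe–Weizsäcker) form in those symbols — the pairing coefficient a_P is an added assumption, not named in the excerpt — is:

```latex
E_B = a_v A - a_s A^{2/3} - a_c \frac{Z(Z-1)}{A^{1/3}}
      - a_a \frac{(N-Z)^2}{A} + \delta(N,Z)\,\frac{a_P}{A^{1/2}}
```

Each term tracks the listed nomenclature: volume (av), surface (as), Coulomb (ac), asymmetry (aa), and the pairing correction weighted by δ(N, Z) = +1, −1, 0 for even–even, odd–odd, and odd-A nuclei respectively.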
Fundamental forces These equations need to be refined such that the notation is defined as has been done for the previous sets of equations.

See also: Defining equation (physical chemistry); List of electromagnetism equations; List of equations in classical mechanics; List of equations in quantum mechanics; List of equations in wave theory; List of photonics equations; List of relativistic equations; Relativistic wave equations. Document 4: An alpha nuclide is a nuclide that consists of an integer number of alpha particles. Alpha nuclides have equal, even numbers of protons and neutrons; they are important in stellar nucleosynthesis since the energetic environment within stars is amenable to fusion of alpha particles into heavier nuclei. Stable alpha nuclides, and stable decay products of radioactive alpha nuclides, are some of the most common metals in the universe. Alpha nuclide is also shorthand for alpha radionuclide, referring to those radioactive isotopes that undergo alpha decay and thereby emit alpha particles. List of alpha nuclides The nuclear binding energy of alpha nuclides heavier than zinc-60 (beginning with germanium-64) is too large for them to be formed by fusion processes (see alpha process). The heaviest known alpha nuclide is xenon-108. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Alpha and beta decay occur when a nucleus has too many protons or an unstable ratio of what? A. nucleus to neutrons B. electrons to neutrons C. protons to neutrons D. atoms to neutrons Answer:
sciq-1435
multiple_choice
The most common two-lens telescope, like the simple microscope, uses lenses of what shape?
[ "convex", "concave", "angular", "cylindrical" ]
A
Relevant Documents: Document 0: In 2-dimensional geometry, a lens is a convex region bounded by two circular arcs joined to each other at their endpoints. In order for this shape to be convex, both arcs must bow outwards (convex-convex). This shape can be formed as the intersection of two circular disks. It can also be formed as the union of two circular segments (regions between the chord of a circle and the circle itself), joined along a common chord. Types If the two arcs of a lens have equal radius, it is called a symmetric lens; otherwise it is an asymmetric lens. The vesica piscis is one form of a symmetric lens, formed by arcs of two circles whose centers each lie on the opposite arc. The arcs meet at angles of 120° at their endpoints. Area Symmetric The area of a symmetric lens can be expressed in terms of the radius R and arc lengths θ in radians: Asymmetric The area of an asymmetric lens formed from circles of radii R and r with distance d between their centers is where is the area of a triangle with sides d, r, and R. The two circles overlap if . For sufficiently large , the coordinate of the lens centre lies between the coordinates of the two circle centers: For small the coordinate of the lens centre lies outside the line that connects the circle centres: By eliminating y from the circle equations and the abscissa of the intersecting rims is . The sign of x, i.e., being larger or smaller than , distinguishes the two cases shown in the images. The ordinate of the intersection is . Negative values under the square root indicate that the rims of the two circles do not touch because the circles are too far apart or one circle lies entirely within the other. The value under the square root is a biquadratic polynomial of d. The four roots of this polynomial are associated with y=0 and with the four values of d where the two circles have only one point in common.
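The asymmetric case described above can be computed directly as the intersection of two disks; a hypothetical helper (the standard circular-segment formula, not taken from the text) illustrates it:

```python
import math

def lens_area(R: float, r: float, d: float) -> float:
    """Area of the asymmetric lens formed by circles of radii R and r
    with distance d between their centers (circles must properly overlap)."""
    # Angles subtended by the common chord at each center (law of cosines).
    alpha = 2 * math.acos((d * d + R * R - r * r) / (2 * d * R))
    beta = 2 * math.acos((d * d + r * r - R * R) / (2 * d * r))
    # The lens is the union of two circular segments joined along the
    # common chord; each segment has area (1/2) * rho^2 * (theta - sin theta).
    return (0.5 * R * R * (alpha - math.sin(alpha))
            + 0.5 * r * r * (beta - math.sin(beta)))

# Vesica piscis: unit circles whose centers lie on each other's arcs (d = 1).
area = lens_area(1.0, 1.0, 1.0)
print(round(area, 4))  # 1.2284, i.e. (4*pi - 3*sqrt(3))/6
```

As d shrinks to 0 the lens tends to the full unit disk (area π), which makes a convenient second sanity check.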
The angles in the blue triangle of sides d, r and R are where y is the ordinate of the intersection. Th Document 1::: Holochroal eyes are compound eyes with many tiny lenses (sometimes more than 15,000, each 30-100μm, rarely larger). They are the oldest and most common type of trilobite eye, and found in all orders of trilobite from the Cambrian to the Permian periods. Lenses (composed of calcite) covered a curved, kidney-shaped visual surface in a hexagonal close packing system, with a single corneal membrane covering all lenses. Unlike in schizochroal eyes, adjacent lenses were in direct contact with one another. Lens shape generally depended on cuticle thickness. The lenses of trilobites with thin cuticles were thin and biconvex, whereas those with thick cuticles had thick lenses, which in extreme cases, could be thick columns with the outer surface flattened and the inner surface hemispherical. Regardless of lens thickness, however, the point at which light was focused was roughly the same distance below the lens. Document 2::: The extended hemispherical lens is a commonly used lens for millimeter-wave electromagnetic radiation. Such lenses are typically fabricated from dielectric materials such as Teflon or silicon. The geometry consists of a hemisphere of radius on a cylinder of length , with the same radius. Scanning performance When a feed element is placed a distance off the central axis, then the main beam will be steered an angle off-axis. The relation between and can be determined from geometrical optics: This relation is used when designing focal plane arrays to be used with the extended hemispherical lens. See also Luneburg lens Fresnel lens Lens antenna Document 3::: The angular aperture of a lens is the angular size of the lens aperture as seen from the focal point: where is the focal length is the diameter of the aperture. 
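For the angular-aperture definition just given (the inline formula did not survive extraction), the standard relation is a = 2·arctan(D / 2f); a small sketch with assumed example values:

```python
import math

def angular_aperture(f: float, D: float) -> float:
    """Angular aperture a = 2 * arctan(D / (2f)), in radians,
    for focal length f and aperture diameter D (same units)."""
    return 2.0 * math.atan(D / (2.0 * f))

# Hypothetical example: a 100 mm lens with a 50 mm aperture (an f/2 lens).
a = angular_aperture(f=100.0, D=50.0)
print(round(math.degrees(a), 1))  # 28.1 degrees
```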
Relation to numerical aperture In a medium with an index of refraction close to 1, such as air, the angular aperture is approximately equal to twice the numerical aperture of the lens. Formally, the numerical aperture in air is: In the paraxial approximation, with a small aperture, : Document 4::: A METATOY is a sheet, formed by a two-dimensional array of small, telescopic optical components, that switches the path of transmitted light rays. METATOY is an acronym for "metamaterial for rays", representing a number of analogies with metamaterials; METATOYs even satisfy a few definitions of metamaterials, but are certainly not metamaterials in the usual sense. When seen from a distance, the view through each individual telescopic optical component acts as one pixel of the view through the METATOY as a whole. In the simplest case, the individual optical components are all identical; the METATOY then behaves like a homogeneous, but pixellated, window that can have very unusual optical properties (see the picture of the view through a METATOY). METATOYs are usually treated within the framework of geometrical optics; the light-ray-direction change performed by a METATOY is described by a mapping of the direction of any incoming light ray onto the corresponding direction of the outgoing ray. The light-ray-direction mappings can be very general. METATOYs can even create pixellated light-ray fields that could not exist in non-pixellated form due to a condition imposed by wave optics. Much of the work on METATOYs is currently theoretical, backed up by computer simulations. A small number of experiments have been performed to date; more experimental work is ongoing. 
Examples of METATOYs Telescopic optical components that have been used as the unit cell of two-dimensional arrays, and which therefore form homogeneous METATOYs, include a pair of identical lenses (focal length ) that share the same optical axis (perpendicular to the METATOY) and that are separated by , that is they share one focal plane (a special case of a refracting telescope with angular magnification -1); a pair of non-identical lenses (focal lengths and ) that share the same optical axis (again perpendicular to the METATOY) and that are separated by , that is they again share one focal plane ( The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The most common two-lens telescope, like the simple microscope, uses lenses of what shape? A. convex B. concave C. angular D. cylindrical Answer:
sciq-2606
multiple_choice
What in the axils of leaves and stems give rise to branches?
[ "axillary buds", "nodules", "chloroplasts", "meristems" ]
A
Relevant Documents: Document 0: The axillary bud (or lateral bud) is an embryonic or organogenic shoot located in the axil of a leaf. Each bud has the potential to form shoots, and may be specialized in producing either vegetative shoots (stems and branches) or reproductive shoots (flowers). Once formed, a bud may remain dormant for some time, or it may form a shoot immediately. Overview An axillary bud is an embryonic or organogenic shoot which lies dormant at the junction of the stem and petiole of a plant. It arises exogenously from the outer layer of the cortex of the stem. Axillary buds do not become actively growing shoots on plants with strong apical dominance (the tendency to grow just the terminal bud on the main stem). Apical dominance occurs because the shoot apical meristem produces auxin which prevents axillary buds from growing. The axillary buds begin developing when they are exposed to less auxin, for example if the plant naturally has weak apical dominance, if apical dominance is broken by removing the terminal bud, or if the terminal bud has grown far enough away for the auxin to have less of an effect. Examples of axillary buds are the eyes of the potato. Effects of auxin As the apical meristem grows and forms leaves, a region of meristematic cells is left behind at the node between the stem and the leaf. These axillary buds are usually dormant, inhibited by auxin produced by the apical meristem, which is known as apical dominance. If the apical meristem is removed, or has grown a sufficient distance away from an axillary bud, the axillary bud may become activated (or more appropriately freed from hormone inhibition). Like the apical meristem, axillary buds can develop into a stem or flower. Diseases that affect axillary buds Certain plant diseases - notably phytoplasmas - can cause the proliferation of axillary buds, and cause plants to become bushy in appearance. Document 1: Edible plant stems are one part of plants that are eaten by humans.
Most plants are made up of stems, roots, leaves, and flowers, and produce fruits containing seeds. Humans most commonly eat the seeds (e.g. maize, wheat), fruit (e.g. tomato, avocado, banana), flowers (e.g. broccoli), leaves (e.g. lettuce, spinach, and cabbage), roots (e.g. carrots, beets), and stems (e.g. asparagus) of many plants. There are also a few edible petioles (also known as leaf stems) such as celery or rhubarb. Plant stems have a variety of functions. Stems support the entire plant and have buds, leaves, flowers, and fruits. Stems are also a vital connection between leaves and roots. They conduct water and mineral nutrients through xylem tissue from roots upward, and organic compounds and some mineral nutrients through phloem tissue in any direction within the plant. Apical meristems, located at the shoot tip and axillary buds on the stem, allow plants to increase in length, surface, and mass. In some plants, such as cactus, stems are specialized for photosynthesis and water storage. Modified stems Typical stems are located above ground, but there are modified stems that can be found either above or below ground. Modified stems located above ground are phylloids, stolons, runners, or spurs. Modified stems located below ground are corms, rhizomes, and tubers. Detailed description of edible plant stems
Asparagus: The edible portion is the rapidly emerging stems that arise from the crowns in the spring.
Bamboo: The edible portion is the young shoot (culm).
Birch: Trunk sap is drunk as a tonic or rendered into birch syrup, vinegar, beer, soft drinks, and other foods.
Broccoli: The edible portion is the peduncle stem tissue, flower buds, and some small leaves.
Cauliflower: The edible portion is proliferated peduncle and flower tissue.
Cinnamon: Many favor the unique sweet flavor of the inner bark of cinnamon, and it is commonly used as a spice.
Fig: The edible portion is stem tissue.
The Document 2::: The quiescent centre is a group of cells, up to 1,000 in number, in the form of a hemisphere, with the flat face toward the root tip of vascular plants. It is a region in the apical meristem of a root where cell division proceeds very slowly or not at all, but the cells are capable of resuming meristematic activity when the tissue surrounding them is damaged. Cells of root apical meristems do not all divide at the same rate. Determinations of relative rates of DNA synthesis show that primary roots of Zea, Vicia and Allium have quiescent centres to the meristems, in which the cells divide rarely or never in the course of normal root growth (Clowes, 1958). Such a quiescent centre includes the cells at the apices of the histogens of both stele and cortex. Its presence can be deduced from the anatomy of the apex in Zea (Clowes, 1958), but not in the other species which lack discrete histogens. History In 1953, during the course of analysing the organization and function of the root apices, Frederick Albert Lionel Clowes (born 10 September 1921), at the School of Botany (now Department of Plant Sciences), University of Oxford, proposed the term ‘cytogenerative centre’ to denote ‘the region of an apical meristem from which all future cells are derived’. This term had been suggested to him by Mr Harold K. Pusey, a lecturer in embryology at the Department of Zoology and Comparative Anatomy at the same university. The 1953 paper of Clowes reported results of his experiments on Fagus sylvatica and Vicia faba, in which small oblique and wedge-shaped excisions were made at the tip of the primary root, at the most distal level of the root body, near the boundary with the root cap. 
The results of these experiments were striking and showed that: the root which grew on following the excision was normal at the undamaged meristem side; the nonexcised meristem portion contributed to the regeneration of the excised portion; the regenerated part of the root had abnormal patterning and Document 3::: A stem is one of two main structural axes of a vascular plant, the other being the root. It supports leaves, flowers and fruits, transports water and dissolved substances between the roots and the shoots in the xylem and phloem, photosynthesis takes place here, stores nutrients, and produces new living tissue. The stem can also be called halm or haulm or culms. The stem is normally divided into nodes and internodes: The nodes are the points of attachment for leaves and can hold one or more leaves. There are sometimes axillary buds between the stem and leaf which can grow into branches (with leaves, conifer cones, or flowers). Adventitious roots may also be produced from the nodes. Vines may produce tendrils from nodes. The internodes distance one node from another. The term "shoots" is often confused with "stems"; "shoots" generally refers to new fresh plant growth, including both stems and other structures like leaves or flowers. In most plants, stems are located above the soil surface, but some plants have underground stems. Stems have several main functions: Support for and the elevation of leaves, flowers, and fruits. The stems keep the leaves in the light and provide a place for the plant to keep its flowers and fruits. Transport of fluids between the roots and the shoots in the xylem and phloem. Storage of nutrients. Production of new living tissue. The normal lifespan of plant cells is one to three years. Stems have cells called meristems that annually generate new living tissue. Photosynthesis. Stems have two pipe-like tissues called xylem and phloem. 
The xylem tissue arises from the cell facing inside and transports water by the action of transpiration pull, capillary action, and root pressure. The phloem tissue arises from the cell facing outside and consists of sieve tubes and their companion cells. The function of phloem tissue is to distribute food from photosynthetic tissue to other tissues. The two tissues are separated by cambium, a tis Document 4::: Primary growth in plants is growth that takes place from the tips of roots or shoots. It leads to lengthening of roots and stems and sets the stage for organ formation. It is distinguished from secondary growth that leads to widening. Plant growth takes place in well defined plant locations. Specifically, the cell division and differentiation needed for growth occurs in specialized structures called meristems. These consist of undifferentiated cells (meristematic cells) capable of cell division. Cells in the meristem can develop into all the other tissues and organs that occur in plants. These cells continue to divide until they differentiate and then lose the ability to divide. Thus, the meristems produce all the cells used for plant growth and function. At the tip of each stem and root, an apical meristem adds cells to their length, resulting in the elongation of both. Examples of primary growth are the rapid lengthening growth of seedlings after they emerge from the soil and the penetration of roots deep into the soil. Furthermore, all plant organs arise ultimately from cell divisions in the apical meristems, followed by cell expansion and differentiation. In contrast, a growth process that involves thickening of stems takes place within lateral meristems that are located throughout the length of the stems. The lateral meristems of larger plants also extend into the roots. This thickening is secondary growth and is needed to give mechanical support and stability to the plant. 
The functions of a plant's growing tips – its apical (or primary) meristems – include: lengthening through cell division and elongation; organising the development of leaves along the stem; creating platforms for the eventual development of branches along the stem; laying the groundwork for organ formation by providing a stock of undifferentiated or incompletely differentiated cells that later develop into fully differentiated cells, thereby ultimately allowing the "spatial deployment The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What in the axils of leaves and stems give rise to branches? A. axillary buds B. nodules C. chloroplasts D. meristems Answer:
sciq-6872
multiple_choice
Unlike wild animals, what type of species are genetically uniform, making them more vulnerable to die-out from disease?
[ "mammals", "domestic species", "free-range species", "urbane species" ]
B
Relevant Documents: Document 0: Molecular ecology is a field of evolutionary biology that is concerned with applying molecular population genetics, molecular phylogenetics, and more recently genomics to traditional ecological questions (e.g., species diagnosis, conservation and assessment of biodiversity, species-area relationships, and many questions in behavioral ecology). It is virtually synonymous with the field of "Ecological Genetics" as pioneered by Theodosius Dobzhansky, E. B. Ford, Godfrey M. Hewitt, and others. These fields are united in their attempt to study genetic-based questions "out in the field" as opposed to the laboratory. Molecular ecology is related to the field of conservation genetics. Methods frequently include using microsatellites to determine gene flow and hybridization between populations. The development of molecular ecology is also closely related to the use of DNA microarrays, which allows for the simultaneous analysis of the expression of thousands of different genes. Quantitative PCR may also be used to analyze gene expression as a result of changes in environmental conditions or different responses by differently adapted individuals. Molecular ecology uses molecular genetic data to answer ecological questions related to biogeography, genomics, conservation genetics, and behavioral ecology. Studies mostly use data based on deoxyribonucleic acid sequences (DNA). This approach has been enhanced over a number of years to allow researchers to sequence thousands of genes from a small amount of starting DNA. Allele sizes are another way researchers are able to compare individuals and populations which allows them to quantify the genetic diversity within a population and the genetic similarities among populations. Bacterial diversity Molecular ecological techniques are used to study in situ questions of bacterial diversity.
Many microorganisms are not easily obtainable as cultured strains in the laboratory, which would allow for identification and characterization. I Document 1: In zoology, mammalogy is the study of mammals – a class of vertebrates with characteristics such as homeothermic metabolism, fur, four-chambered hearts, and complex nervous systems. Mammalogy has also been known as "mastology," "theriology," and "therology." The catalogue of mammal species on Earth is constantly growing, and currently stands at 6,495 species, including those recently extinct. There are 5,416 living mammal species identified on Earth, and roughly 1,251 have been newly discovered since 2006. The major branches of mammalogy include natural history, taxonomy and systematics, anatomy and physiology, ethology, ecology, and management and control. The approximate salary of a mammalogist varies from $20,000 to $60,000 a year, depending on their experience. Mammalogists are typically involved in activities such as conducting research, managing personnel, and writing proposals. Mammalogy branches off into other taxonomically oriented disciplines such as primatology (the study of primates) and cetology (the study of cetaceans). Like other studies, mammalogy is a part of zoology, which is in turn a part of biology, the study of all living things. Research purposes Mammalogists have stated that there are multiple reasons for the study and observation of mammals. Knowing how mammals contribute to or thrive in their ecosystems gives knowledge of the ecology behind them. Mammals are often used in industry and agriculture, and are kept as pets. Studying mammals' habitats and sources of energy has aided their survival. The domestication of some small mammals has also helped discover several different diseases, viruses, and cures. Mammalogist A mammalogist studies and observes mammals.
In studying mammals, they can observe their habitats, contributions to the ecosystem, their interactions, and the anatomy and physiology. A mammalogist can do a broad variety of things within the realm of mammals. A mammalogist on average can make roughly $58,000 a year. This dep Document 2::: Animals are multicellular eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, are able to move, reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million in total. Animals range in size from 8.5 millionths of a metre to long and have complex interactions with each other and their environments, forming intricate food webs. The study of animals is called zoology. Animals may be listed or indexed by many criteria, including taxonomy, status as endangered species, their geographical location, and their portrayal and/or naming in human culture. 
By common name: List of animal names (male, female, young, and group)
By aspect: List of common household pests; List of animal sounds; List of animals by number of neurons
By domestication: List of domesticated animals
By eating behaviour: List of herbivorous animals; List of omnivores; List of carnivores
By endangered status: IUCN Red List endangered species (Animalia); United States Fish and Wildlife Service list of endangered species
By extinction: List of extinct animals; List of extinct birds; List of extinct mammals; List of extinct cetaceans; List of extinct butterflies
By region: Lists of amphibians by region; Lists of birds by region; Lists of mammals by region; Lists of reptiles by region
By individual (real or fictional): Real: Lists of snakes; List of individual cats; List of oldest cats; List of giant squids; List of individual elephants; List of historical horses; List of leading Thoroughbred racehorses; List of individual apes; List of individual bears; List of giant pandas; List of individual birds; List of individual bovines; List of individual cetaceans; List of individual dogs; List of oldest dogs; List of individual monkeys; List of individual pigs; List of w
Shared aspects of cities worldwide also give ample opportunity for scientists to study the specific evolutionary responses in these rapidly changed landscapes independently. How certain organisms (are able to) adapt to urban environments while others cannot, gives a live perspective on rapid evolution. Urbanization With urban growth, the urban-rural gradient has seen a large shift in distribution of humans, moving from low density to very high in the last millennia. This has brought a large change to environments as well as societies. Urbanization transforms natural habitats to completely altered living spaces that sustain large human populations. Increasing congregation of humans accompanies the expansion of infrastructure, industry and housing. Natural vegetation and soil are mostly replaced or covered by dense grey materials. Urbanized areas continue to expand both in size and number globally; in 2018, the United Nations estimated that 68% of people globally will live in ever-larger urban areas by 2050. Urban evolution selective agents Urbanization intensifies diverse stressors spatiotemporally such that they can act in concert to cause rapid evolutionary consequences such as extinction, maladaptation, or adaptation. Three factors have come to the forefront as the main evolutionary influencer Document 4::: Vertebrate zoology is the biological discipline that consists of the study of Vertebrate animals, i.e., animals with a backbone, such as fish, amphibians, reptiles, birds and mammals. Many natural history museums have departments named Vertebrate Zoology. In some cases whole museums bear this name, e.g. the Museum of Vertebrate Zoology at the University of California, Berkeley. Subdivisions This subdivision of zoology has many further subdivisions, including: Ichthyology - the study of fishes. Mammalogy - the study of mammals. Chiropterology - the study of bats. Primatology - the study of primates. Ornithology - the study of birds. 
Herpetology - the study of reptiles. Batrachology - the study of amphibians. These divisions are sometimes further divided into more specific specialties. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Unlike wild animals, what type of species are genetically uniform, making them more vulnerable to die-out from disease? A. mammals B. domestic species C. free-range species D. urbane species Answer:
sciq-11070
multiple_choice
What type of muscle makes up most of the heart?
[ "heart muscle", "chest muscles", "cardiac muscle", "respiratory muscle" ]
C
Relevant Documents: Document 0: Cardiac muscle (also called heart muscle or myocardium) is one of three types of vertebrate muscle tissues, with the other two being skeletal muscle and smooth muscle. It is an involuntary, striated muscle that constitutes the main tissue of the wall of the heart. The cardiac muscle (myocardium) forms a thick middle layer between the outer layer of the heart wall (the pericardium) and the inner layer (the endocardium), with blood supplied via the coronary circulation. It is composed of individual cardiac muscle cells joined by intercalated discs, and encased by collagen fibers and other substances that form the extracellular matrix. Cardiac muscle contracts in a similar manner to skeletal muscle, although with some important differences. Electrical stimulation in the form of a cardiac action potential triggers the release of calcium from the cell's internal calcium store, the sarcoplasmic reticulum. The rise in calcium causes the cell's myofilaments to slide past each other in a process called excitation-contraction coupling. Diseases of the heart muscle known as cardiomyopathies are of major importance. These include ischemic conditions caused by a restricted blood supply to the muscle such as angina, and myocardial infarction. Structure Gross anatomy Cardiac muscle tissue or myocardium forms the bulk of the heart. The heart wall is a three-layered structure with a thick layer of myocardium sandwiched between the inner endocardium and the outer epicardium (also known as the visceral pericardium). The inner endocardium lines the cardiac chambers, covers the cardiac valves, and joins with the endothelium that lines the blood vessels that connect to the heart. On the outer aspect of the myocardium is the epicardium which forms part of the pericardial sac that surrounds, protects, and lubricates the heart. Within the myocardium, there are several sheets of cardiac muscle cells or cardiomyocytes.
The sheets of muscle that wrap around the left ventricle clos Document 1: Cardiophysics is an interdisciplinary science that stands at the junction of cardiology and medical physics, with researchers using the methods of, and theories from, physics to study the cardiovascular system at different levels of its organisation, from the molecular scale to whole organisms. Having formed historically as part of systems biology, cardiophysics is designed to reveal connections between the physical mechanisms underlying the organization of the cardiovascular system and the biological features of its functioning. Zbigniew R. Struzik appears to be the first author to have used the term in a scientific publication, in 2004. The term cardiovascular physics is used interchangeably. See also Medical physics Important publications in medical physics Biomedicine Biomedical engineering Physiome Nanomedicine Document 2: In cardiology, the cardiac skeleton, also known as the fibrous skeleton of the heart, is a high-density homogeneous structure of connective tissue that forms and anchors the valves of the heart, and influences the forces exerted by and through them. The cardiac skeleton separates and partitions the atria (the smaller, upper two chambers) from the ventricles (the larger, lower two chambers). The heart's cardiac skeleton comprises four dense connective tissue rings that encircle the mitral and tricuspid atrioventricular (AV) canals and extend to the origins of the pulmonary trunk and aorta. This provides crucial support and structure to the heart while also serving to electrically isolate the atria from the ventricles. The unique matrix of connective tissue within the cardiac skeleton isolates electrical influence within these defined chambers. In normal anatomy, there is only one conduit for electrical conduction from the upper chambers to the lower chambers, known as the atrioventricular node.
The physiologic cardiac skeleton forms a firewall governing autonomic/electrical influence until bordering the bundle of His which further governs autonomic flow to the bundle branches of the ventricles. Understood as such, the cardiac skeleton efficiently centers and robustly funnels electrical energy from the atria to the ventricles. Structure The structure of the components of the heart has become an area of increasing interest. The cardiac skeleton binds several bands of dense connective tissue, such as collagen, that encircle the bases of the pulmonary trunk, aorta, and all four heart valves. While not traditionally considered a "true" or rigid skeleton, it does provide structure and support for the heart, as well as isolate the atria from the ventricles. This is why atrial fibrillation almost never degrades to ventricular fibrillation. In youth, this collagen structure is free of calcium adhesions and is quite flexible. With aging, calcium and other mineral accumulation occur withi Document 3::: The Frank–Starling law of the heart (also known as Starling's law and the Frank–Starling mechanism) represents the relationship between stroke volume and end diastolic volume. The law states that the stroke volume of the heart increases in response to an increase in the volume of blood in the ventricles, before contraction (the end diastolic volume), when all other factors remain constant. As a larger volume of blood flows into the ventricle, the blood stretches cardiac muscle, leading to an increase in the force of contraction. The Frank-Starling mechanism allows the cardiac output to be synchronized with the venous return, arterial blood supply and humoral length, without depending upon external regulation to make alterations. The physiological importance of the mechanism lies mainly in maintaining left and right ventricular output equality. 
Physiology The Frank-Starling mechanism occurs as the result of the length-tension relationship observed in striated muscle, including for example skeletal muscles, arthropod muscle and cardiac (heart) muscle. As striated muscle is stretched, active tension is created by altering the overlap of thick and thin filaments. The greatest isometric active tension is developed when a muscle is at its optimal length. In most relaxed skeletal muscle fibers, passive elastic properties maintain the muscle fibers length near optimal, as determined usually by the fixed distance between the attachment points of tendons to the bones (or the exoskeleton of arthropods) at either end of the muscle. In contrast, the relaxed sarcomere length of cardiac muscle cells, in a resting ventricle, is lower than the optimal length for contraction. There is no bone to fix sarcomere length in the heart (of any animal) so sarcomere length is very variable and depends directly upon blood filling and thereby expanding the heart chambers. In the human heart, maximal force is generated with an initial sarcomere length of 2.2 micrometers, a length which is rare Document 4::: Endomyocardial biopsy (EMB) is an invasive procedure used routinely to obtain small samples of heart muscle, primarily for detecting rejection of a donor heart following heart transplantation. It is also used as a diagnostic tool in some heart diseases. A bioptome is used to gain access to the heart via a sheath inserted into the right internal jugular or less commonly the femoral vein. Monitoring during the procedure consists of performing ECGs and blood pressures. Guidance and confirmation of correct positioning of the bioptome is made by echocardiography or fluoroscopy. The risk of complications is less than 1% when performed by an experienced physician in a specialist centre. Serious complications include perforation of the heart with pericardial tamponade, haemopericardium, AV block, tricuspid regurgitation and pneumothorax. 
EMB, sampling myocardium, was first pioneered in Japan by S. Sakakibara and S. Konno in 1962. Indications The main reason for performing an EMB is to assess allograft rejection following heart transplantation and sometimes to evaluate cardiomyopathy, some heart disease research and ventricular arrhythmias, or unexplained ventricular dysfunction. Transplant monitoring Visualising the microscopic appearance of the heart muscle allows the detection of cell-mediated or antibody-mediated rejection and is recommended episodically during the first year after heart transplantation. Occasionally, monitoring continues beyond one year. The use of EMB in heart transplant rejection surveillance remains the gold standard test, although the pre-test predictors of rejection, cardiac magnetic resonance imaging (CMR) and gene expression profiling, are increasingly used. Myocardial diseases EMB has a role in the diagnosis of viral myocarditis and inflammatory myocarditis. Procedure EMB of the right ventricle via the internal jugular vein is standard after heart transplant. A bioptome is used to gain access to the heart via a sheath inserted into the rig The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of muscle makes up most of the heart? A. heart muscle B. chest muscles C. cardiac muscle D. respiratory muscle Answer:
sciq-850
multiple_choice
Defecating, urination, and even childbirth involve cooperation between the diaphragm and these?
[ "skeletal muscles", "heart muscles", "lung muscles", "abdominal muscles" ]
D
Relevant Documents: Document 0::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. 
Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New Document 1::: The thoracic diaphragm, or simply the diaphragm (; ), is a sheet of internal skeletal muscle in humans and other mammals that extends across the bottom of the thoracic cavity. The diaphragm is the most important muscle of respiration, and separates the thoracic cavity, containing the heart and lungs, from the abdominal cavity: as the diaphragm contracts, the volume of the thoracic cavity increases, creating a negative pressure there, which draws air into the lungs. Its high oxygen consumption is noted by the many mitochondria and capillaries present; more than in any other skeletal muscle. The term diaphragm in anatomy, created by Gerard of Cremona, can refer to other flat structures such as the urogenital diaphragm or pelvic diaphragm, but "the diaphragm" generally refers to the thoracic diaphragm. In humans, the diaphragm is slightly asymmetric—its right half is higher up (superior) to the left half, since the large liver rests beneath the right half of the diaphragm. There is also speculation that the diaphragm is lower on the other side due to heart's presence. Other mammals have diaphragms, and other vertebrates such as amphibians and reptiles have diaphragm-like structures, but important details of the anatomy may vary, such as the position of the lungs in the thoracic cavity. Structure The diaphragm is an upward curved, c-shaped structure of muscle and fibrous tissue that separates the thoracic cavity from the abdomen. The superior surface of the dome forms the floor of the thoracic cavity, and the inferior surface the roof of the abdominal cavity. As a dome, the diaphragm has peripheral attachments to structures that make up the abdominal and chest walls. The muscle fibres from these attachments converge in a central tendon, which forms the crest of the dome. 
Its peripheral part consists of muscular fibers that take origin from the circumference of the inferior thoracic aperture and converge to be inserted into a central tendon. The muscle fibres of t Document 2::: The muscular layer (muscular coat, muscular fibers, muscularis propria, muscularis externa) is a region of muscle in many organs in the vertebrate body, adjacent to the submucosa. It is responsible for gut movement such as peristalsis. The Latin, tunica muscularis, may also be used. Structure It usually has two layers of smooth muscle: inner and "circular" outer and "longitudinal" However, there are some exceptions to this pattern. In the stomach there are three layers to the muscular layer. Stomach contains an additional oblique muscle layer just interior to circular muscle layer. In the upper esophagus, part of the externa is skeletal muscle, rather than smooth muscle. In the vas deferens of the spermatic cord, there are three layers: inner longitudinal, middle circular, and outer longitudinal. In the ureter the smooth muscle orientation is opposite that of the GI tract. There is an inner longitudinal and an outer circular layer. The inner layer of the muscularis externa forms a sphincter at two locations of the gastrointestinal tract: in the pylorus of the stomach, it forms the pyloric sphincter. in the anal canal, it forms the internal anal sphincter. In the colon, the fibres of the external longitudinal smooth muscle layer are collected into three longitudinal bands, the teniae coli. The thickest muscularis layer is found in the stomach (triple layered) and thus maximum peristalsis occurs in the stomach. Thinnest muscularis layer in the alimentary canal is found in the rectum, where minimum peristalsis occurs. Function The muscularis layer is responsible for the peristaltic movements and segmental contractions in and the alimentary canal. 
The Auerbach's nerve plexus (myenteric nerve plexus) is found between longitudinal and circular muscle layers, it starts muscle contractions to initiate peristalsis. Document 3::: The control of ventilation is the physiological mechanisms involved in the control of breathing, which is the movement of air into and out of the lungs. Ventilation facilitates respiration. Respiration refers to the utilization of oxygen and balancing of carbon dioxide by the body as a whole, or by individual cells in cellular respiration. The most important function of breathing is the supplying of oxygen to the body and balancing of the carbon dioxide levels. Under most conditions, the partial pressure of carbon dioxide (PCO2), or concentration of carbon dioxide, controls the respiratory rate. The peripheral chemoreceptors that detect changes in the levels of oxygen and carbon dioxide are located in the arterial aortic bodies and the carotid bodies. Central chemoreceptors are primarily sensitive to changes in the pH of the blood, (resulting from changes in the levels of carbon dioxide) and they are located on the medulla oblongata near to the medullar respiratory groups of the respiratory center. Information from the peripheral chemoreceptors is conveyed along nerves to the respiratory groups of the respiratory center. There are four respiratory groups, two in the medulla and two in the pons. The two groups in the pons are known as the pontine respiratory group. Dorsal respiratory group – in the medulla Ventral respiratory group – in the medulla Pneumotaxic center – various nuclei of the pons Apneustic center – nucleus of the pons From the respiratory center, the muscles of respiration, in particular the diaphragm, are activated to cause air to move in and out of the lungs. Control of respiratory rhythm Ventilatory pattern Breathing is normally an unconscious, involuntary, automatic process. 
The pattern of motor stimuli during breathing can be divided into an inhalation stage and an exhalation stage. Inhalation shows a sudden, ramped increase in motor discharge to the respiratory muscles (and the pharyngeal constrictor muscles). Before the end of inh Document 4::: Urination is the release of urine from the bladder through the urethra to the outside of the body. It is the urinary system's form of excretion. It is also known medically as micturition, voiding, uresis, or, rarely, emiction, and known colloquially by various names including peeing, weeing, pissing, and euphemistically going (for a) number one. In healthy humans and other animals, the process of urination is under voluntary control. In infants, some elderly individuals, and those with neurological injury, urination may occur as a reflex. It is normal for adult humans to urinate up to seven times during the day. In some animals, in addition to expelling waste material, urination can mark territory or express submissiveness. Physiologically, urination involves coordination between the central, autonomic, and somatic nervous systems. Brain centres that regulate urination include the pontine micturition center, periaqueductal gray, and the cerebral cortex. In placental mammals, urine is drained through the urinary meatus, a urethral opening in the male penis or female vulval vestibule. Anatomy and physiology Anatomy of the bladder and outlet The main organs involved in urination are the urinary bladder and the urethra. The smooth muscle of the bladder, known as the detrusor, is innervated by sympathetic nervous system fibers from the lumbar spinal cord and parasympathetic fibers from the sacral spinal cord. Fibers in the pelvic nerves constitute the main afferent limb of the voiding reflex; the parasympathetic fibers to the bladder that constitute the excitatory efferent limb also travel in these nerves. 
Part of the urethra is surrounded by the male or female external urethral sphincter, which is innervated by the somatic pudendal nerve originating in the cord, in an area termed Onuf's nucleus. Smooth muscle bundles pass on either side of the urethra, and these fibers are sometimes called the internal urethral sphincter, although they do not encircle the urethra. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Defecating, urination, and even childbirth involve cooperation between the diaphragm and these? A. skeletal muscles B. heart muscles C. lung muscles D. abdominal muscles Answer:
sciq-5082
multiple_choice
Noninfectious diseases can't be passed from one person to another. Instead, these types of diseases are caused by factors such as environment, genetics and what?
[ "education", "age", "weight", "lifestyle" ]
D
Relevant Documents: Document 0::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools to which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19, 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions relating respectively to ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. 
Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a Document 1::: Host factor (sometimes known as risk factor) is a medical term referring to the traits of an individual person or animal that affect susceptibility to disease, especially in comparison to other individuals. The term arose in the context of infectious disease research, in contrast to "organism factors", such as the virulence and infectivity of a microbe. Host factors that may vary in a population and affect disease susceptibility can be innate or acquired. Some examples: general health psychological characteristics and attitude nutritional state social ties previous exposure to the organism or related antigens haplotype or other specific genetic differences of immune function substance abuse race The term is now used in oncology and many other medical contexts related to individual differences of disease vulnerability. See also Vulnerability index Epidemiology Immunology Document 2::: Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices". This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions. Topic outline The exam covers the following 8 units. 
The percentage indicates the portion of the multiple-choice section of the exam focused on each content area: The course is based on and tests six skills, called scientific practices which include: In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions. Exam Students are allowed to use a four-function, scientific, or graphing calculator. The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score. Score distribution Commonly used textbooks Biology, AP Edition by Sylvia Mader (2012, hardcover ) Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, ) Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson ) See also Glossary of biology A.P Bio (TV Show) Document 3::: Progress tests are longitudinal, feedback oriented educational assessment tools for the evaluation of development and sustainability of cognitive knowledge during a learning process. A progress test is a written knowledge exam (usually involving multiple choice questions) that is usually administered to all students in the "A" program at the same time and at regular intervals (usually twice to four times yearly) throughout the entire academic program. The test samples the complete knowledge domain expected of new graduates upon completion of their courses, regardless of the year level of the student). The differences between students’ knowledge levels show in the test scores; the further a student has progressed in the curriculum the higher the scores. As a result, these resultant scores provide a longitudinal, repeated measures, curriculum-independent assessment of the objectives (in knowledge) of the entire programme. 
History Since its inception in the late 1970s at both Maastricht University and the University of Missouri–Kansas City independently, the progress test of applied knowledge has been increasingly used in medical and health sciences programs across the globe. They are well established and increasingly used in medical education in both undergraduate and postgraduate medical education. They are used formatively and summatively. Use in academic programs The progress test is currently used by national progress test consortia in the United Kingdom, Italy, The Netherlands, in Germany (including Austria), and in individual schools in Africa, Saudi Arabia, South East Asia, the Caribbean, Australia, New Zealand, Sweden, Finland, UK, and the USA. The National Board of Medical Examiners in the USA also provides progress testing in various countries The feasibility of an international approach to progress testing has been recently acknowledged and was first demonstrated by Albano et al. in 1996, who compared test scores across German, Dutch and Italian medi Document 4::: The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work. History It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council. Function Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres. 
STEM ambassadors To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell. Funding STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments. See also The WISE Campaign Engineering and Physical Sciences Research Council National Centre for Excellence in Teaching Mathematics Association for Science Education Glossary of areas of mathematics Glossary of astronomy Glossary of biology Glossary of chemistry Glossary of engineering Glossary of physics The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Noninfectious diseases can't be passed from one person to another. Instead, these types of diseases are caused by factors such as environment, genetics and what? A. education B. age C. weight D. lifestyle Answer:
sciq-5116
multiple_choice
Burning fossil fuels produces air pollution and what?
[ "carbon dioxide", "liquid dioxide", "oxygen", "acid rain" ]
A
Relevant Documents: Document 0::: The indirect land use change impacts of biofuels, also known as ILUC or iLUC (pronounced as i-luck), relate to the unintended consequence of releasing more carbon emissions due to land-use changes around the world induced by the expansion of croplands for ethanol or biodiesel production in response to the increased global demand for biofuels. As farmers worldwide respond to higher crop prices in order to maintain the global food supply-and-demand balance, pristine lands are cleared to replace the food crops that were diverted elsewhere to biofuels' production. Because natural lands, such as rainforests and grasslands, store carbon in their soil and biomass as plants grow each year, clearance of wilderness for new farms translates to a net increase in greenhouse gas emissions. Due to this off-site change in the carbon stock of the soil and the biomass, indirect land use change has consequences in the greenhouse gas (GHG) balance of a biofuel. Other authors have also argued that indirect land use changes produce other significant social and environmental impacts, affecting biodiversity, water quality, food prices and supply, land tenure, worker migration, and community and cultural stability. History The estimates of carbon intensity for a given biofuel depend on the assumptions regarding several variables. As of 2008, multiple full life cycle studies had found that corn ethanol, cellulosic ethanol and Brazilian sugarcane ethanol produce lower greenhouse gas emissions than gasoline. None of these studies, however, considered the effects of indirect land-use changes, and though land use impacts were acknowledged, estimation was considered too complex and difficult to model. 
A controversial paper published in February 2008 in Sciencexpress by a team led by Searchinger from Princeton University concluded that such effects offset the (positive) direct effects of both corn and cellulosic ethanol and that Brazilian sugarcane performed better, but still resulted in a sma Document 1::: Roland Geyer is professor of industrial ecology at the Bren School of Environmental Science and Management, University of California at Santa Barbara. He is a specialist in the ecological impact of plastics. In March 2021, Geyer wrote in The Guardian that humanity should ban fossil fuels, just at it had earlier banned tetraethyllead (TEL) and chlorofluorocarbons (CFC). Document 2::: At the global scale sustainability and environmental management involves managing the oceans, freshwater systems, land and atmosphere, according to sustainability principles. Land use change is fundamental to the operations of the biosphere because alterations in the relative proportions of land dedicated to urbanisation, agriculture, forest, woodland, grassland and pasture have a marked effect on the global water, carbon and nitrogen biogeochemical cycles. Management of the Earth's atmosphere involves assessment of all aspects of the carbon cycle to identify opportunities to address human-induced climate change and this has become a major focus of scientific research because of the potential catastrophic effects on biodiversity and human communities. Ocean circulation patterns have a strong influence on climate and weather and, in turn, the food supply of both humans and other organisms. Atmosphere In March 2009, at a meeting of the Copenhagen Climate Council, 2,500 climate experts from 80 countries issued a keynote statement that there is now "no excuse" for failing to act on global warming and without strong carbon reduction targets "abrupt or irreversible" shifts in climate may occur that "will be very difficult for contemporary societies to cope with". 
Management of the global atmosphere now involves assessment of all aspects of the carbon cycle to identify opportunities to address human-induced climate change and this has become a major focus of scientific research because of the potential catastrophic effects on biodiversity and human communities. Other human impacts on the atmosphere include the air pollution in cities, the pollutants including toxic chemicals like nitrogen oxides, sulphur oxides, volatile organic compounds and airborne particulate matter that produce photochemical smog and acid rain, and the chlorofluorocarbons that degrade the ozone layer. Anthropogenic particulates such as sulfate aerosols in the atmosphere reduce the direct irradianc Document 3::: Trashing the Planet: How Science Can Help Us Deal With Acid Rain, Depletion of the Ozone, and Nuclear Waste (Among Other Things) is a 1990 book by zoologist and Governor of Washington Dixy Lee Ray. The book talks about the seriousness about acid rain, the problems with the ozone layer and other environmental issues. Ray co-wrote the book with journalist Lou Guzzo. Document 4::: Carbon dioxide is a chemical compound with the chemical formula . It is made up of molecules that each have one carbon atom covalently double bonded to two oxygen atoms. It is found in the gas state at room temperature, and as the source of available carbon in the carbon cycle, atmospheric is the primary carbon source for life on Earth. In the air, carbon dioxide is transparent to visible light but absorbs infrared radiation, acting as a greenhouse gas. Carbon dioxide is soluble in water and is found in groundwater, lakes, ice caps, and seawater. When carbon dioxide dissolves in water, it forms carbonate and mainly bicarbonate (), which causes ocean acidification as atmospheric levels increase. It is a trace gas in Earth's atmosphere at 421 parts per million (ppm), or about 0.04% (as of May 2022) having risen from pre-industrial levels of 280 ppm or about 0.025%. 
Burning fossil fuels is the primary cause of these increased concentrations and also the primary cause of climate change. Its concentration in Earth's pre-industrial atmosphere since late in the Precambrian was regulated by organisms and geological phenomena. Plants, algae and cyanobacteria use energy from sunlight to synthesize carbohydrates from carbon dioxide and water in a process called photosynthesis, which produces oxygen as a waste product. In turn, oxygen is consumed and is released as waste by all aerobic organisms when they metabolize organic compounds to produce energy by respiration. is released from organic materials when they decay or combust, such as in forest fires. Since plants require for photosynthesis, and humans and animals depend on plants for food, is necessary for the survival of life on earth. Carbon dioxide is 53% more dense than dry air, but is long lived and thoroughly mixes in the atmosphere. About half of excess emissions to the atmosphere are absorbed by land and ocean carbon sinks. These sinks can become saturated and are volatile, as decay and wildfires result i The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Burning fossil fuels produces air pollution and what? A. carbon dioxide B. liquid dioxide C. oxygen D. acid rain Answer:
sciq-1005
multiple_choice
What three characteristics do waves have?
[ "reflection, refraction and deflection", "spin, refraction, and deflection", "theory , refraction and deflection", "structure , refraction and deflection" ]
A
Relevant Documents: Document 0::: In physics, a mechanical wave is a wave that is an oscillation of matter, and therefore transfers energy through a medium. While waves can move over long distances, the movement of the medium of transmission—the material—is limited. Therefore, the oscillating material does not move far from its initial equilibrium position. Mechanical waves can be produced only in media which possess elasticity and inertia. There are three types of mechanical waves: transverse waves, longitudinal waves, and surface waves. Some of the most common examples of mechanical waves are water waves, sound waves, and seismic waves. Like all waves, mechanical waves transport energy. This energy propagates in the same direction as the wave. A wave requires an initial energy input; once this initial energy is added, the wave travels through the medium until all its energy is transferred. In contrast, electromagnetic waves require no medium, but can still travel through one. Transverse wave A transverse wave is the form of a wave in which particles of the medium vibrate about their mean position perpendicular to the direction of the motion of the wave. To see an example, move an end of a Slinky (whose other end is fixed) to the left-and-right of the Slinky, as opposed to to-and-fro. Light also has properties of a transverse wave, although it is an electromagnetic wave. Longitudinal wave Longitudinal waves cause the medium to vibrate parallel to the direction of the wave. It consists of multiple compressions and rarefactions. The rarefaction is the farthest distance apart in the longitudinal wave and the compression is the closest distance together. The speed of the longitudinal wave is increased in higher index of refraction, due to the closer proximity of the atoms in the medium that is being compressed. Sound is a longitudinal wave. Surface waves This type of wave travels along the surface or interface between two media.
An example of a surface wave would be waves in a pool, or in an ocean Document 1::: A waveguide is a structure that guides waves by restricting the transmission of energy to one direction. Common types of waveguides include acoustic waveguides which direct sound, optical waveguides which direct light, and radio-frequency waveguides which direct electromagnetic waves other than light like radio waves. Without the physical constraint of a waveguide, waves would expand into three-dimensional space and their intensities would decrease according to the inverse square law. There are different types of waveguides for different types of waves. The original and most common meaning is a hollow conductive metal pipe used to carry high frequency radio waves, particularly microwaves. Dielectric waveguides are used at higher radio frequencies, and transparent dielectric waveguides and optical fibers serve as waveguides for light. In acoustics, air ducts and horns are used as waveguides for sound in musical instruments and loudspeakers, and specially-shaped metal rods conduct ultrasonic waves in ultrasonic machining. The geometry of a waveguide reflects its function; in addition to more common types that channel the wave in one dimension, there are two-dimensional slab waveguides which confine waves to two dimensions. The frequency of the transmitted wave also dictates the size of a waveguide: each waveguide has a cutoff wavelength determined by its size and will not conduct waves of greater wavelength; an optical fiber that guides light will not transmit microwaves which have a much larger wavelength. Some naturally occurring structures can also act as waveguides. The SOFAR channel layer in the ocean can guide the sound of whale song across enormous distances. Any shape of cross section of waveguide can support EM waves. Irregular shapes are difficult to analyse. Commonly used waveguides are rectangular and circular in shape. 
Uses The uses of waveguides for transmitting signals were known even before the term was coined. The phenomenon of sound waves g Document 2::: In physics, a transverse wave is a wave that oscillates perpendicularly to the direction of the wave's advance. In contrast, a longitudinal wave travels in the direction of its oscillations. All waves move energy from place to place without transporting the matter in the transmission medium if there is one. Electromagnetic waves are transverse without requiring a medium. The designation “transverse” indicates the direction of the wave is perpendicular to the displacement of the particles of the medium through which it passes, or in the case of EM waves, the oscillation is perpendicular to the direction of the wave. A simple example is given by the waves that can be created on a horizontal length of string by anchoring one end and moving the other end up and down. Another example is the waves that are created on the membrane of a drum. The waves propagate in directions that are parallel to the membrane plane, but each point in the membrane itself gets displaced up and down, perpendicular to that plane. Light is another example of a transverse wave, where the oscillations are the electric and magnetic fields, which point at right angles to the ideal light rays that describe the direction of propagation. Transverse waves commonly occur in elastic solids due to the shear stress generated; the oscillations in this case are the displacement of the solid particles away from their relaxed position, in directions perpendicular to the propagation of the wave. These displacements correspond to a local shear deformation of the material. Hence a transverse wave of this nature is called a shear wave. Since fluids cannot resist shear forces while at rest, propagation of transverse waves inside the bulk of fluids is not possible. In seismology, shear waves are also called secondary waves or S-waves. 
Transverse waves are contrasted with longitudinal waves, where the oscillations occur in the direction of the wave. The standard example of a longitudinal wave is a sound wave or " Document 3::: This is a list of wave topics. 0–9 21 cm line A Abbe prism Absorption spectroscopy Absorption spectrum Absorption wavemeter Acoustic wave Acoustic wave equation Acoustics Acousto-optic effect Acousto-optic modulator Acousto-optics Airy disc Airy wave theory Alfvén wave Alpha waves Amphidromic point Amplitude Amplitude modulation Animal echolocation Antarctic Circumpolar Wave Antiphase Aquamarine Power Arrayed waveguide grating Artificial wave Atmospheric diffraction Atmospheric wave Atmospheric waveguide Atom laser Atomic clock Atomic mirror Audience wave Autowave Averaged Lagrangian B Babinet's principle Backward wave oscillator Bandwidth-limited pulse beat Berry phase Bessel beam Beta wave Black hole Blazar Bloch's theorem Blueshift Boussinesq approximation (water waves) Bow wave Bragg diffraction Bragg's law Breaking wave Bremsstrahlung, Electromagnetic radiation Brillouin scattering Bullet bow shockwave Burgers' equation Business cycle C Capillary wave Carrier wave Cherenkov radiation Chirp Ernst Chladni Circular polarization Clapotis Closed waveguide Cnoidal wave Coherence (physics) Coherence length Coherence time Cold wave Collimated light Collimator Compton effect Comparison of analog and digital recording Computation of radiowave attenuation in the atmosphere Continuous phase modulation Continuous wave Convective heat transfer Coriolis frequency Coronal mass ejection Cosmic microwave background radiation Coulomb wave function Cutoff frequency Cutoff wavelength Cymatics D Damped wave Decollimation Delta wave Dielectric waveguide Diffraction Direction finding Dispersion (optics) Dispersion (water waves) Dispersion relation Dominant wavelength Doppler effect Doppler radar Douglas Sea Scale Draupner wave Droplet-shaped wave Duhamel's principle E E-skip Earthquake Echo 
(phenomenon) Echo sounding Echolocation (animal) Echolocation (human) Eddy (fluid dynamics) Edge wave Eikonal equation Ekman layer Ekman spiral Ekman transport El Niño–Southern Oscillation El Document 4::: A wavenumber–frequency diagram is a plot displaying the relationship between the wavenumber (spatial frequency) and the frequency (temporal frequency) of certain phenomena. Usually frequencies are placed on the vertical axis, while wavenumbers are placed on the horizontal axis. In the atmospheric sciences, these plots are a common way to visualize atmospheric waves. In the geosciences, especially seismic data analysis, these plots also called f–k plot, in which energy density within a given time interval is contoured on a frequency-versus-wavenumber basis. They are used to examine the direction and apparent velocity of seismic waves and in velocity filter design. Origins In general, the relationship between wavelength , frequency , and the phase velocity of a sinusoidal wave is: Using the wavenumber () and angular frequency () notation, the previous equation can be rewritten as On the other hand, the group velocity is equal to the slope of the wavenumber–frequency diagram: Analyzing such relationships in detail often yields information on the physical properties of the medium, such as density, composition, etc. See also Dispersion relation The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What three characteristics do waves have? A. reflection, refraction and deflection B. spin, refraction, and deflection C. theory , refraction and deflection D. structure , refraction and deflection Answer:
sciq-1263
multiple_choice
What type of vertebrates control body temperature to just a limited extent from the outside by changing behavior?
[ "endothermic", "ectothermic", "mimetic", "etheric" ]
B
Relevant Documents: Document 0::: An endotherm (from Greek ἔνδον endon "within" and θέρμη thermē "heat") is an organism that maintains its body at a metabolically favorable temperature, largely by the use of heat released by its internal bodily functions instead of relying almost purely on ambient heat. Such internally generated heat is mainly an incidental product of the animal's routine metabolism, but under conditions of excessive cold or low activity an endotherm might apply special mechanisms adapted specifically to heat production. Examples include special-function muscular exertion such as shivering, and uncoupled oxidative metabolism, such as within brown adipose tissue. Only birds and mammals are extant universally endothermic groups of animals. However, Argentine black and white tegu, leatherback sea turtles, lamnid sharks, tuna and billfishes, cicadas, and winter moths are also endothermic. Unlike mammals and birds, some reptiles, particularly some species of python and tegu, possess seasonal reproductive endothermy in which they are endothermic only during their reproductive season. In common parlance, endotherms are characterized as "warm-blooded". The opposite of endothermy is ectothermy, although in general, there is no absolute or clear separation between the nature of endotherms and ectotherms. Origin Endothermy was thought to have originated towards the end of the Permian Period. One recent study claimed the origin of endothermy within Synapsida (the mammalian lineage) was among Mammaliamorpha, a node calibrated during the Late Triassic period, about 233 million years ago. Another study instead argued that endothermy only appeared later, during the Middle Jurassic, among crown-group mammals. Evidence for endothermy has been found in basal synapsids ("pelycosaurs"), pareiasaurs, ichthyosaurs, plesiosaurs, mosasaurs, and basal archosauromorphs. Even the earliest amniotes might have been endotherms.
Mechanisms Generating and conserving heat Many endotherms have a larger amount Document 1::: Brett's hypothesis also known as the heat-invariant hypothesis or Brett's heat-invariant hypothesis proposes that upper thermal tolerance limits are less variable geographically than lower thermal tolerance limits. This hypothesis was originally proposed for fish but lately has been supported by studies with reptiles, amphibians, and aquatic insects. Three different mechanisms are proposed for the existence of this large-scale pattern of thermal tolerance limits variation: A constrained evolutionary potential of upper thermal tolerance limits The buffering effects of thermoregulatory behaviour has greater potential to face heat rather than cold stress Resolution of thermal data used Global versus local scales in Brett's hypothesis While Brett's hypothesis has been strongly supported at global scales, heat tolerance seems to respond differently to smaller-scale climatic and habitat factors. For instance, lizards from the Iberian Peninsula show higher variation in upper thermal tolerance limits than in lower thermal tolerance limits. Similar results are found in adult frogs, tadpoles, and dragonfly larvae at local scales. Document 2::: In physiology, thermoception or thermoreception is the sensation and perception of temperature, or more accurately, temperature differences inferred from heat flux. It deals with a series of events and processes required for an organism to receive a temperature stimulus, convert it to a molecular signal, and recognize and characterize the signal in order to trigger an appropriate defense response. Thermoception in larger animals is mainly done in the skin; mammals have at least two types. The details of how temperature receptors work are still being investigated. Ciliopathy is associated with decreased ability to sense heat; thus cilia may aid in the process. 
Transient receptor potential channels (TRP channels) are believed to play a role in many species in sensation of hot, cold, and pain. Vertebrates have at least two types of sensor: those that detect heat and those that detect cold. In animals A particularly specialized form of thermoception is used by Crotalinae (pit viper) and Boidae (boa) snakes, which can effectively see the infrared radiation emitted by hot objects. The snakes' face has a pair of holes, or pits, lined with temperature sensors. The sensors indirectly detect infrared radiation by its heating effect on the skin inside the pit. They can work out which part of the pit is hottest, and therefore the direction of the heat source, which could be a warm-blooded prey animal. By combining information from both pits, the snake can also estimate the distance of the object. The Common vampire bat has specialized infrared sensors in its nose-leaf. Vampire bats are the only mammals that feed exclusively on blood. The infrared sense enables Desmodus to localize homeothermic (warm-blooded) animals (cattle, horses, wild mammals) within a range of about 10 to 15 cm. This infrared perception is possibly used in detecting regions of maximal blood flow on targeted prey. Other animals with specialized heat detectors are forest fire seeking beetles (Melano Document 3::: Heterothermy or heterothermia (from Greek ἕτερος heteros "other" and θέρμη thermē "heat") is a physiological term for animals that vary between self-regulating their body temperature, and allowing the surrounding environment to affect it. In other words, they exhibit characteristics of both poikilothermy and homeothermy. Definition Heterothermic animals are those that can switch between poikilothermic and homeothermic strategies. These changes in strategies typically occur on a daily basis or on an annual basis. More often than not, it is used as a way to dissociate the fluctuating metabolic rates seen in some small mammals and birds (e.g. 
bats and hummingbirds), from those of traditional cold blooded animals. In many bat species, body temperature and metabolic rate are elevated only during activity. When at rest, these animals reduce their metabolisms drastically, which results in their body temperature dropping to that of the surrounding environment. This makes them homeothermic when active, and poikilothermic when at rest. This phenomenon has been termed 'daily torpor' and was intensively studied in the Djungarian hamster. During the hibernation season, this animal shows strongly reduced metabolism each day during the rest phase while it reverts to endothermic metabolism during its active phase, leading to normal euthermic body temperatures (around 38 °C). Larger mammals (e.g. ground squirrels) and bats show multi-day torpor bouts during hibernation (up to several weeks) in winter. During these multi-day torpor bouts, body temperature drops to ~1 °C above ambient temperature and metabolism may drop to about 1% of the normal endothermic metabolic rate. Even in these deep hibernators, the long periods of torpor is interrupted by bouts of endothermic metabolism, called arousals (typically lasting between 4–20 hours). These metabolic arousals cause body temperature to return to euthermic levels 35-37 °C. Most of the energy spent during hibernation is spent in arous Document 4::: Gigantothermy (sometimes called ectothermic homeothermy or inertial homeothermy) is a phenomenon with significance in biology and paleontology, whereby large, bulky ectothermic animals are more easily able to maintain a constant, relatively high body temperature than smaller animals by virtue of their smaller surface-area-to-volume ratio. A bigger animal has proportionately less of its body close to the outside environment than a smaller animal of otherwise similar shape, and so it gains heat from, or loses heat to, the environment much more slowly. 
The phenomenon is important in the biology of ectothermic megafauna, such as large turtles, and aquatic reptiles like ichthyosaurs and mosasaurs. Gigantotherms, though almost always ectothermic, generally have a body temperature similar to that of endotherms. It has been suggested that the larger dinosaurs would have been gigantothermic, rendering them virtually homeothermic. Disadvantages Gigantothermy allows animals to maintain body temperature, but is most likely detrimental to endurance and muscle power as compared with endotherms due to decreased anaerobic efficiency. Mammals' bodies have roughly four times as much surface area occupied by mitochondria as reptiles, necessitating larger energy demands, and consequently producing more heat to use in thermoregulation. An ectotherm the same size of an endotherm would not be able to remain as active as the endotherm, as heat is modulated behaviorally rather than biochemically. More time is dedicated to basking than eating. Advantages Large ectotherms displaying the same body size as large endotherms have the advantage of a slow metabolic rate, meaning that it takes reptiles longer to digest their food. Consequently gigantothermic ectotherms would not have to eat as often as large endotherms that need to maintain a constant influx of food to meet energy demands. Although lions are much smaller than crocodiles, the lions must eat more often than crocodiles because o The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of vertebrates control body temperature to just a limited extent from the outside by changing behavior? A. endothermic B. ectothermic C. mimetic D. etheric Answer:
scienceQA-3018
multiple_choice
Select the chemical formula for this molecule.
[ "P3C", "HPCl3", "PCl2", "PCl3" ]
D
P is the symbol for phosphorus. Cl is the symbol for chlorine. This ball-and-stick model shows a molecule with one phosphorus atom and three chlorine atoms. The chemical formula will contain the symbols P and Cl. There is one phosphorus atom, so P will not have a subscript. There are three chlorine atoms, so Cl will have a subscript of 3. The correct formula is PCl3. The diagram below shows how each part of the chemical formula matches with each part of the model above.
Relevant Documents: Document 0::: In chemical nomenclature, the IUPAC nomenclature of organic chemistry is a method of naming organic chemical compounds as recommended by the International Union of Pure and Applied Chemistry (IUPAC). It is published in the Nomenclature of Organic Chemistry (informally called the Blue Book). Ideally, every possible organic compound should have a name from which an unambiguous structural formula can be created. There is also an IUPAC nomenclature of inorganic chemistry. To avoid long and tedious names in normal communication, the official IUPAC naming recommendations are not always followed in practice, except when it is necessary to give an unambiguous and absolute definition to a compound. IUPAC names can sometimes be simpler than older names, as with ethanol, instead of ethyl alcohol. For relatively simple molecules they can be more easily understood than non-systematic names, which must be learnt or looked up. However, the common or trivial name is often substantially shorter and clearer, and so preferred. These non-systematic names are often derived from an original source of the compound. Also, very long names may be less clear than structural formulas. Basic principles In chemistry, a number of prefixes, suffixes and infixes are used to describe the type and position of the functional groups in the compound. The steps for naming an organic compound are: Identification of the parent hydride (parent hydrocarbon chain). This chain must obey the following rules, in order of precedence: It should have the maximum number of substituents of the suffix functional group. By suffix, it is meant that the parent functional group should have a suffix, unlike halogen substituents. If more than one functional group is present, the one with highest group precedence should be used. It should have the maximum number of multiple bonds. It should have the maximum length.
It should have the maximum number of substituents or branches cited as prefixes It should have the ma Document 1::: The prismanes are a class of hydrocarbon compounds consisting of prism-like polyhedra of various numbers of sides on the polygonal base. Chemically, it is a series of fused cyclobutane rings (a ladderane, with all-cis/all-syn geometry) that wraps around to join its ends and form a band, with cycloalkane edges. Their chemical formula is (C2H2)n, where n is the number of cyclobutane sides (the size of the cycloalkane base), and that number also forms the basis for a system of nomenclature within this class. The first few chemicals in this class are: Triprismane, tetraprismane, and pentaprismane have been synthesized and studied experimentally, and many higher members of the series have been studied using computer models. The first several members do indeed have the geometry of a regular prism, with flat n-gon bases. As n becomes increasingly large, however, modeling experiments find that highly symmetric geometry is no longer stable, and the molecule distorts into less-symmetric forms. One series of modelling experiments found that starting with [11]prismane, the regular-prism form is not a stable geometry. For example, the structure of [12]prismane would have the cyclobutane chain twisted, with the dodecagonal bases non-planar and non-parallel. Nonconvex prismanes For large base-sizes, some of the cyclobutanes can be fused anti to each other, giving a non-convex polygon base. These are geometric isomers of the prismanes. Two isomers of [12]prismane that have been studied computationally are named helvetane and israelane, based on the star-like shapes of the rings that form their bases. This was explored computationally after originally being proposed as an April fools joke. Their names refer to the shapes found on the flags of Switzerland and Israel, respectively. Polyprismanes The polyprismanes consist of multiple prismanes stacked base-to-base. 
The carbons at each intermediate level—the n-gon bases where the prismanes fuse to each other—have no hydrogen atom Document 2::: The SYBYL line notation or SLN is a specification for unambiguously describing the structure of chemical molecules using short ASCII strings. SLN differs from SMILES in several significant ways. SLN can specify molecules, molecular queries, and reactions in a single line notation whereas SMILES handles these through language extensions. SLN has support for relative stereochemistry, it can distinguish mixtures of enantiomers from pure molecules with pure but unresolved stereochemistry. In SMILES aromaticity is considered to be a property of both atoms and bonds whereas in SLN it is a property of bonds. Description Like SMILES, SLN is a linear language that describes molecules. This provides a lot of similarity with SMILES despite SLN's many differences from SMILES, and as a result this description will heavily compare SLN to SMILES and its extensions. Attributes Attributes, bracketed strings with additional data like [key1=value1, key2...], is a core feature of SLN. Attributes can be applied to atoms and bonds. Attributes not defined officially are available to users for private extensions. When searching for molecules, comparison operators such as fcharge>-0.125 can be used in place of the usual equal sign. A ! preceding a key/value group inverts the result of the comparison. Entire molecules or reactions can too have attributes. The square brackets are changed to a pair of <> signs. Atoms Anything that starts with an uppercase letter identifies an atom in SLN. Hydrogens are not automatically added, but the single bonds with hydrogen can be abbreviated for organic compounds, resulting in CH4 instead of C(H)(H)(H)H for methane. The author argues that explicit hydrogens allow for more robust parsing. 
Attributes defined for atoms include I= for isotope mass number, charge= for formal charge, fcharge for partial charge, s= for stereochemistry, and spin= for radicals (s, d, t respectively for singlet, doublet, triplet). A formal charge of charge=2 can be abbrevi Document 3::: E–Z configuration, or the E–Z convention, is the IUPAC preferred method of describing the absolute stereochemistry of double bonds in organic chemistry. It is an extension of cis–trans isomer notation (which only describes relative stereochemistry) that can be used to describe double bonds having two, three or four substituents. Following the Cahn–Ingold–Prelog priority rules (CIP rules), each substituent on a double bond is assigned a priority, then positions of the higher of the two substituents on each carbon are compared to each other. If the two groups of higher priority are on opposite sides of the double bond (trans to each other), the bond is assigned the configuration E (from entgegen, , the German word for "opposite"). If the two groups of higher priority are on the same side of the double bond (cis to each other), the bond is assigned the configuration Z (from zusammen, , the German word for "together"). The letters E and Z are conventionally printed in italic type, within parentheses, and separated from the rest of the name with a hyphen. They are always printed as full capitals (not in lowercase or small capitals), but do not constitute the first letter of the name for English capitalization rules (as in the example above). Another example: The CIP rules assign a higher priority to bromine than to chlorine, and a higher priority to chlorine than to hydrogen, hence the following (possibly counterintuitive) nomenclature. For organic molecules with multiple double bonds, it is sometimes necessary to indicate the alkene location for each E or Z symbol. 
For example, the chemical name of alitretinoin is (2E,4E,6Z,8E)-3,7-dimethyl-9-(2,6,6-trimethyl-1-cyclohexenyl)nona-2,4,6,8-tetraenoic acid, indicating that the alkenes starting at positions 2, 4, and 8 are E while the one starting at position 6 is Z. See also Descriptor (chemistry) Geometric isomerism Molecular geometry Document 4::: This is a list of topics in molecular biology. See also index of biochemistry articles. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Select the chemical formula for this molecule. A. P3C B. HPCl3 C. PCl2 D. PCl3 Answer:
sciq-3624
multiple_choice
What term is used to describe the energy reserve carbohydrate of animals?
[ "lactose", "fructose", "glycogen", "sucrose" ]
C
Relevant Documents: Document 0::: Glycogen is a multibranched polysaccharide of glucose that serves as a form of energy storage in animals, fungi, and bacteria. It is the main storage form of glucose in the human body. Glycogen functions as one of three regularly used forms of energy reserves, creatine phosphate being for very short-term, glycogen being for short-term and the triglyceride stores in adipose tissue (i.e., body fat) being for long-term storage. Protein, broken down into amino acids, is seldom used as a main energy source except during starvation and glycolytic crisis (see bioenergetic systems). In humans, glycogen is made and stored primarily in the cells of the liver and skeletal muscle. In the liver, glycogen can make up 5–6% of the organ's fresh weight: the liver of an adult, weighing 1.5 kg, can store roughly 100–120 grams of glycogen. In skeletal muscle, glycogen is found in a low concentration (1–2% of the muscle mass): the skeletal muscle of an adult weighing 70 kg stores roughly 400 grams of glycogen. Small amounts of glycogen are also found in other tissues and cells, including the kidneys, red blood cells, white blood cells, and glial cells in the brain. The uterus also stores glycogen during pregnancy to nourish the embryo. The amount of glycogen stored in the body mostly depends on oxidative type 1 fibres, physical training, basal metabolic rate, and eating habits. Different levels of resting muscle glycogen are reached by changing the number of glycogen particles, rather than increasing the size of existing particles, though most glycogen particles at rest are smaller than their theoretical maximum. Approximately 4 grams of glucose are present in the blood of humans at all times; in fasting individuals, blood glucose is maintained constant at this level at the expense of glycogen stores in the liver and skeletal muscle.
Glycogen stores in skeletal muscle serve as a form of energy storage for the muscle itself; however, the breakdown of muscle glycogen impedes muscle Document 1::: Animal nutrition focuses on the dietary nutrients needs of animals, primarily those in agriculture and food production, but also in zoos, aquariums, and wildlife management. Constituents of diet Macronutrients (excluding fiber and water) provide structural material (amino acids from which proteins are built, and lipids from which cell membranes and some signaling molecules are built) and energy. Some of the structural material can be used to generate energy internally, though the net energy depends on such factors as absorption and digestive effort, which vary substantially from instance to instance. Vitamins, minerals, fiber, and water do not provide energy, but are required for other reasons. A third class dietary material, fiber (i.e., non-digestible material such as cellulose), seems also to be required, for both mechanical and biochemical reasons, though the exact reasons remain unclear. Molecules of carbohydrates and fats consist of carbon, hydrogen, and oxygen atoms. Carbohydrates range from simple monosaccharides (glucose, fructose, galactose) to complex polysaccharides (starch). Fats are triglycerides, made of assorted fatty acid monomers bound to glycerol backbone. Some fatty acids, but not all, are essential in the diet: they cannot be synthesized in the body. Protein molecules contain nitrogen atoms in addition to carbon, oxygen, and hydrogen. The fundamental components of protein are nitrogen-containing amino acids. Essential amino acids cannot be made by the animal. Some of the amino acids are convertible (with the expenditure of energy) to glucose and can be used for energy production just as ordinary glucose. By breaking down existing protein, some glucose can be produced internally; the remaining amino acids are discarded, primarily as urea in urine. 
This occurs normally only during prolonged starvation. Other dietary substances found in plant foods (phytochemicals, polyphenols) are not identified as essential nutrients but appear to impact healt Document 2::: The term human equivalent is used in a number of different contexts. This term can refer to human equivalents of various comparisons of animate and inanimate things. Animal models in chemistry and medicine Animal models are used to learn more about a disease, its diagnosis and its treatment, with animal models predicting human toxicity in up to 71% of cases. The human equivalent dose (HED) or human equivalent concentration (HEC) is the quantity of a chemical that, when administered to humans, produces an effect equal to that produced in test animals by a smaller dose. Calculating the HED is a step in carrying out a clinical trial of a pharmaceutical drug. Human energy usage and conversion The concept of human-equivalent energy (H-e) assists in understanding of energy flows in physical and biological systems by expressing energy units in human terms: it provides a “feel” for the use of a given amount of energy by expressing it in terms of the relative quantity of energy needed for human metabolism, assuming an average human energy expenditure of 12,500 kJ per day and a basal metabolic rate of 80 watts. A light bulb running at 100 watts is running at 1.25 human equivalents (100/80), i.e. 1.25 H-e. On the other hand, a human may generate as much as 1,000 watts for a task lasting a few minutes, or even more for a task of a few seconds' duration, while climbing a flight of stairs may represent work at a rate of about 200 watts. Animal attributes expressed in terms of human equivalents Cat and dog years The ages of domestic cats and dogs are often referred to in terms of "cat years" or "dog years", representing a conversion to human-equivalent years. 
One formula for cat years is based on a cat reaching maturity in approximately 1 year, which could be seen as 16 in human terms, then adding about 4 years for every year the cat ages. A 5-year-old cat would then be (5 − 1) × 4 + 16 = 32 "cat years" (i.e. human-equivalent years), and a 10-year-old cat (10 − 1) × 4 + 16 = Document 3::: An energy budget is a balance sheet of energy income against expenditure. It is studied in the field of Energetics which deals with the study of energy transfer and transformation from one form to another. Calorie is the basic unit of measurement. An organism in a laboratory experiment is an open thermodynamic system, exchanging energy with its surroundings in three ways - heat, work and the potential energy of biochemical compounds. Organisms use ingested food resources (C=consumption) as building blocks in the synthesis of tissues (P=production) and as fuel in the metabolic process that power this synthesis and other physiological processes (R=respiratory loss). Some of the resources are lost as waste products (F=faecal loss, U=urinary loss). All these aspects of metabolism can be represented in energy units. The basic model of energy budget may be shown as: P = C - R - U - F or P = C - (R + U + F) or C = P + R + U + F All the aspects of metabolism can be represented in energy units (e.g. joules (J);1 calorie = 4.2 kJ). Energy used for metabolism will be R = C - (F + U + P) Energy used in the maintenance will be R + F + U = C - P Endothermy and ectothermy Energy budget allocation varies for endotherms and ectotherms. Ectotherms rely on the environment as a heat source while endotherms maintain their body temperature through the regulation of metabolic processes. The heat produced in association with metabolic processes facilitates the active lifestyles of endotherms and their ability to travel far distances over a range of temperatures in the search for food. 
Ectotherms are limited by the ambient temperature of the environment around them but the lack of substantial metabolic heat production accounts for an energetically inexpensive metabolic rate. The energy demands for ectotherms are generally one tenth of that required for endotherms. Document 4::: Relatively speaking, the brain consumes an immense amount of energy in comparison to the rest of the body. The mechanisms involved in the transfer of energy from foods to neurons are likely to be fundamental to the control of brain function. Human bodily processes, including the brain, all require both macronutrients, as well as micronutrients. Insufficient intake of selected vitamins, or certain metabolic disorders, may affect cognitive processes by disrupting the nutrient-dependent processes within the body that are associated with the management of energy in neurons, which can subsequently affect synaptic plasticity, or the ability to encode new memories. Macronutrients The human brain requires nutrients obtained from the diet to develop and sustain its physical structure and cognitive functions. Additionally, the brain requires caloric energy predominately derived from the primary macronutrients to operate. The three primary macronutrients include carbohydrates, proteins, and fats. Each macronutrient can impact cognition through multiple mechanisms, including glucose and insulin metabolism, neurotransmitter actions, oxidative stress and inflammation, and the gut-brain axis. Inadequate macronutrient consumption or proportion could impair optimal cognitive functioning and have long-term health implications. Carbohydrates Through digestion, dietary carbohydrates are broken down and converted into glucose, which is the sole energy source for the brain. Optimal brain function relies on adequate carbohydrate consumption, as carbohydrates provide the quickest source of glucose for the brain. 
Glucose deficiencies such as hypoglycaemia reduce available energy for the brain and impair all cognitive processes and performance. Additionally, situations with high cognitive demand, such as learning a new task, increase brain glucose utilization, depleting blood glucose stores and initiating the need for supplementation. Complex carbohydrates, especially those with high d The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What term is used to describe the energy reserve carbohydrate of animals? A. lactose B. fructose C. glycogen D. sucrose Answer:
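The cat-years conversion described in Document 2 above (maturity at roughly 1 year, counted as about 16 human-equivalent years, plus about 4 human-equivalent years per additional cat year) can be sketched as a small function. The function name and the range check are illustrative, not part of the source passage:

```python
def cat_years(age):
    """Human-equivalent age for a cat: ~16 at maturity (1 year),
    then ~4 human-equivalent years per additional cat year."""
    if age < 1:
        raise ValueError("formula applies from maturity (1 year) onward")
    return (age - 1) * 4 + 16

# Worked examples following the passage's formula:
print(cat_years(5))   # 32
print(cat_years(10))  # 52
```

This matches the passage's worked example: a 5-year-old cat is (5 − 1) × 4 + 16 = 32 human-equivalent years.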
sciq-6617
multiple_choice
Electrons are located at fixed distances from the nucleus, what are they called?
[ "energy levels", "Positive levels", "Energy layers", "energy concentrations" ]
A
Relevant Documents: Document 0::: Secondary electrons are electrons generated as ionization products. They are called 'secondary' because they are generated by other radiation (the primary radiation). This radiation can be in the form of ions, electrons, or photons with sufficiently high energy, i.e. exceeding the ionization potential. Photoelectrons can be considered an example of secondary electrons where the primary radiation is photons; in some discussions photoelectrons with higher energy (>50 eV) are still considered "primary" while the electrons freed by the photoelectrons are "secondary". Applications Secondary electrons are also the main means of viewing images in the scanning electron microscope (SEM). The range of secondary electrons depends on the energy. Plotting the inelastic mean free path as a function of energy often shows characteristics of the "universal curve" familiar to electron spectroscopists and surface analysts. This distance is on the order of a few nanometers in metals and tens of nanometers in insulators. This small distance allows such fine resolution to be achieved in the SEM. For SiO2, for a primary electron energy of 100 eV, the secondary electron range is up to 20 nm from the point of incidence. See also Delta ray Everhart-Thornley detector Document 1::: The electric dipole moment is a measure of the separation of positive and negative electrical charges within a system, that is, a measure of the system's overall polarity. The SI unit for electric dipole moment is the coulomb-meter (C⋅m). The debye (D) is another unit of measurement used in atomic physics and chemistry. Theoretically, an electric dipole is defined by the first-order term of the multipole expansion; it consists of two equal and opposite charges that are infinitesimally close together, although real dipoles have separated charge. Elementary definition Often in physics the dimensions of a massive object can be ignored and can be treated as a pointlike object, i.e.
a point particle. Point particles with electric charge are referred to as point charges. Two point charges, one with charge and the other one with charge separated by a distance , constitute an electric dipole (a simple case of an electric multipole). For this case, the electric dipole moment has a magnitude and is directed from the negative charge to the positive one. Some authors may split in half and use since this quantity is the distance between either charge and the center of the dipole, leading to a factor of two in the definition. A stronger mathematical definition is to use vector algebra, since a quantity with magnitude and direction, like the dipole moment of two point charges, can be expressed in vector form where is the displacement vector pointing from the negative charge to the positive charge. The electric dipole moment vector also points from the negative charge to the positive charge. With this definition the dipole direction tends to align itself with an external electric field (and note that the electric flux lines produced by the charges of the dipole itself, which point from positive charge to negative charge then tend to oppose the flux lines of the external field). Note that this sign convention is used in physics, while the opposite sign convention for th Document 2::: A quantum mechanical system or particle that is bound—that is, confined spatially—can only take on certain discrete values of energy, called energy levels. This contrasts with classical particles, which can have any amount of energy. The term is commonly used for the energy levels of the electrons in atoms, ions, or molecules, which are bound by the electric field of the nucleus, but can also refer to energy levels of nuclei or vibrational or rotational energy levels in molecules. The energy spectrum of a system with such discrete energy levels is said to be quantized. 
In chemistry and atomic physics, an electron shell, or principal energy level, may be thought of as the orbit of one or more electrons around an atom's nucleus. The closest shell to the nucleus is called the "1 shell" (also called the "K shell"), followed by the "2 shell" (or "L shell"), then the "3 shell" (or "M shell"), and so on farther and farther from the nucleus. The shells correspond with the principal quantum numbers (n = 1, 2, 3, 4 ...) or are labeled alphabetically with letters used in the X-ray notation (K, L, M, N...). Each shell can contain only a fixed number of electrons: The first shell can hold up to two electrons, the second shell can hold up to eight (2 + 6) electrons, the third shell can hold up to 18 (2 + 6 + 10) and so on. The general formula is that the nth shell can in principle hold up to 2n² electrons. Since electrons are electrically attracted to the nucleus, an atom's electrons will generally occupy outer shells only if the more inner shells have already been completely filled by other electrons. However, this is not a strict requirement: atoms may have two or even three incomplete outer shells. (See Madelung rule for more details.) For an explanation of why electrons exist in these shells see electron configuration. If the potential energy is set to zero at infinite distance from the atomic nucleus or molecule, the usual convention, then bound electron states have negative pot
The term "electric potential energy" is used to describe the potential energy in systems with time-variant electric fields, while the term "electrostatic potential energy" is used to describe the potential energy in systems with time-invariant electric fields. Definition The electric potential energy of a system of point charges is defined as the work required to assemble this system of charges by bringing them close together, as in the system from an infinite distance. Alternatively, the electric potential energy of any given charge or system of charges is termed as the total work done by an external agent in bringing the charge or the system of charges from infinity to the present configuration without undergoing any acceleration. The electrostatic potential energy can also be defined from the electric potential as follows: Units The SI unit of electric potential energy is joule (named after the English physicist James Prescott Joule). In the CGS system the erg is the unit of energy, being equal to 10−7 Joules. Also electronvolts may be used, 1 eV = 1.602×10−19 Joules. Electrostatic potential energy of one point charge One point charge q in the presence of another point charge Q The electrostatic potential energy, UE, of one point charge q at position r in the presence of a point charge Q, taking an infinite separation between the charges as the reference position, is: where is the Coulomb constant, r is the distance between the point charges q and Q, and q and Q are the charges (not the absolute values of the charges—i.e., an electron would have a negative value of charge when Document 4::: The subatomic scale is the domain of physical size that encompasses objects smaller than an atom. It is the scale at which the atomic constituents, such as the nucleus containing protons and neutrons, and the electrons in their orbitals, become apparent. 
The subatomic scale includes the many thousands of times smaller subnuclear scale, which is the scale of physical size at which constituents of the protons and neutrons - particularly quarks - become apparent. See also Astronomical scale the opposite end of the spectrum Subatomic particles The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Electrons are located at fixed distances from the nucleus, what are they called? A. energy levels B. Positive levels C. Energy layers D. energy concentrations Answer:
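The shell-capacity rule quoted in Document 2 above (the nth shell holds up to 2n² electrons, building up the K, L, M, N... sequence 2, 8, 18, 32) is easy to verify in a few lines; the function name is illustrative:

```python
def shell_capacity(n):
    """Maximum number of electrons in the nth electron shell: 2 * n^2."""
    return 2 * n * n

# K, L, M, N shells (n = 1..4) hold 2, 8, 18, 32 electrons respectively
print([shell_capacity(n) for n in (1, 2, 3, 4)])  # [2, 8, 18, 32]
```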
sciq-8717
multiple_choice
What is the term for a structure within the cytoplasm that performs a specific job in the cell?
[ "ribosome", "organelle", "mitochondria", "nucleus" ]
B
Relevant Documents: Document 0::: Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, a double-stranded macromolecule that carries the hereditary information of the cell, is found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence. Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism. Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry. See also Cell (biology) Cell biology Biomolecule Organelle Tissue (biology) External links https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm Document 1::: Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure. General characteristics There are two types of cells: prokaryotes and eukaryotes. Prokaryotes were the first of the two to develop and do not have a self-contained nucleus.
Their mechanisms are simpler than later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles. Prokaryotes Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement). Eukaryotes Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str Document 2::: The nucleoplasm, also known as karyoplasm, is the type of protoplasm that makes up the cell nucleus, the most prominent organelle of the eukaryotic cell. It is enclosed by the nuclear envelope, also known as the nuclear membrane. The nucleoplasm resembles the cytoplasm of a eukaryotic cell in that it is a gel-like substance found within a membrane, although the nucleoplasm only fills out the space in the nucleus and has its own unique functions. 
The nucleoplasm suspends structures within the nucleus that are not membrane-bound and is responsible for maintaining the shape of the nucleus. The structures suspended in the nucleoplasm include chromosomes, various proteins, nuclear bodies, the nucleolus, nucleoporins, nucleotides, and nuclear speckles. The soluble, liquid portion of the nucleoplasm is called the karyolymph nucleosol, or nuclear hyaloplasm. History The existence of the nucleus, including the nucleoplasm, was first documented as early as 1682 by the Dutch microscopist Leeuwenhoek and was later described and drawn by Franz Bauer. However, the cell nucleus was not named and described in detail until Robert Brown's presentation to the Linnean Society in 1831. The nucleoplasm, while described by Bauer and Brown, was not specifically isolated as a separate entity until its naming in 1882 by Polish-German scientist Eduard Strasburger, one of the most famous botanists of the 19th century, and the first person to discover mitosis in plants. Role Many important cell functions take place in the nucleus, more specifically in the nucleoplasm. The main function of the nucleoplasm is to provide the proper environment for essential processes that take place in the nucleus, serving as the suspension substance for all organelles inside the nucleus, and storing the structures that are used in these processes. 34% of proteins encoded in the human genome are ones that localize to the nucleoplasm. These proteins take part in RNA transcription and gene regulation in the n Document 3::: A cytoplast is a medical term that is used to describe a cell membrane and the cytoplasm. It is occasionally used to describe a cell in which the nucleus has been removed. Originally named by Rebecca Bodily. See also Cytoplast Document 4::: The cell is the basic structural and functional unit of all forms of life. 
Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'. Cells can acquire specified function and carry out various tasks within the cell such as replication, DNA repair, protein synthesis, and motility. Cells are capable of specialization and mobility within the cell. Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms. The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology. Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago. Discovery With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. 
This was shocking at the time as i The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the term for a structure within the cytoplasm that performs a specific job in the cell? A. ribosome B. organelle C. mitochondria D. nucleus Answer:
sciq-7922
multiple_choice
Atoms of different elements can combine in simple whole number ratios to form what?
[ "chemical compounds", "mixtures", "carbon compounds", "combinations" ]
A
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: increases / decreases / stays the same / impossible to tell (need more information). The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Atomicity is the total number of atoms present in a molecule. For example, each molecule of oxygen (O2) is composed of two oxygen atoms. Therefore, the atomicity of oxygen is 2. In older contexts, atomicity is sometimes equivalent to valency. Some authors also use the term to refer to the maximum number of valencies observed for an element. Classifications Based on atomicity, molecules can be classified as: Monoatomic (composed of one atom). Examples include He (helium), Ne (neon), Ar (argon), and Kr (krypton). All noble gases are monoatomic. Diatomic (composed of two atoms). Examples include H2 (hydrogen), N2 (nitrogen), O2 (oxygen), F2 (fluorine), and Cl2 (chlorine). Halogens are usually diatomic. Triatomic (composed of three atoms). Examples include O3 (ozone). Polyatomic (composed of three or more atoms). Examples include S8. Atomicity may vary in different allotropes of the same element. The exact atomicity of metals, as well as some other elements such as carbon, cannot be determined because they consist of a large and indefinite number of atoms bonded together. They are typically designated as having an atomicity of 1. The atomicity of a homonuclear molecule can be derived by dividing the molecular weight by the atomic weight. For example, the molecular weight of oxygen is 31.999, while its atomic weight is 15.999; therefore, its atomicity is approximately 2 (31.999/15.999 ≈ 2). Examples The most common values of atomicity for the first 30 elements in the periodic table are as follows: Document 2::: In chemical nomenclature, the IUPAC nomenclature of organic chemistry is a method of naming organic chemical compounds as recommended by the International Union of Pure and Applied Chemistry (IUPAC).
It is published in the Nomenclature of Organic Chemistry (informally called the Blue Book). Ideally, every possible organic compound should have a name from which an unambiguous structural formula can be created. There is also an IUPAC nomenclature of inorganic chemistry. To avoid long and tedious names in normal communication, the official IUPAC naming recommendations are not always followed in practice, except when it is necessary to give an unambiguous and absolute definition to a compound. IUPAC names can sometimes be simpler than older names, as with ethanol, instead of ethyl alcohol. For relatively simple molecules they can be more easily understood than non-systematic names, which must be learnt or looked over. However, the common or trivial name is often substantially shorter and clearer, and so preferred. These non-systematic names are often derived from an original source of the compound. Also, very long names may be less clear than structural formulas. Basic principles In chemistry, a number of prefixes, suffixes and infixes are used to describe the type and position of the functional groups in the compound. The steps for naming an organic compound are: Identification of the parent hydride parent hydrocarbon chain. This chain must obey the following rules, in order of precedence: It should have the maximum number of substituents of the suffix functional group. By suffix, it is meant that the parent functional group should have a suffix, unlike halogen substituents. If more than one functional group is present, the one with highest group precedence should be used. It should have the maximum number of multiple bonds. It should have the maximum length. It should have the maximum number of substituents or branches cited as prefixes It should have the ma Document 3::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. 
Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 4::: Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. 
data structures and algorithms) and computer engineering classes (e.g computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered in both undergraduate as well postgraduate with specializations. Academic courses Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism. Example universities with CSE majors and departments APJ Abdul Kalam Technological University American International University-B The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. 
Atoms of different elements can combine in simple whole number ratios to form what? A. chemical compounds B. mixtures C. carbon compounds D. combinations Answer:
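The knowledge-space passage above says the family of feasible skill states contains the empty state, is closed under union, and (for learning spaces) forms an antimatroid in which skills can be mastered one at a time. A minimal Python sketch of those two checks; the function names and toy skill states are illustrative assumptions, not taken from any KST library:

```python
from itertools import combinations

def is_knowledge_space(states):
    """Check union-closure: the family must contain the empty state,
    the full domain, and the union of any two of its states."""
    states = set(states)
    domain = frozenset().union(*states)
    if frozenset() not in states or domain not in states:
        return False
    return all(a | b in states for a, b in combinations(states, 2))

def is_antimatroid(states):
    """A union-closed family is an antimatroid (a learning space) when
    every nonempty state K contains some skill x such that K - {x}
    is also a state, i.e. K is reachable one skill at a time."""
    if not is_knowledge_space(states):
        return False
    return all(any(k - {x} in states for x in k) for k in states if k)

# Toy domain: skill "b" has skill "a" as a prerequisite.
states = {frozenset(), frozenset({"a"}), frozenset({"a", "b"})}
print(is_antimatroid(states))  # True: {} -> {a} -> {a, b}

# Dropping the intermediate state breaks accessibility: {a, b} can no
# longer be reached by mastering one skill at a time.
print(is_antimatroid({frozenset(), frozenset({"a", "b"})}))  # False
```

The second call shows why union-closure alone is not enough: the two-skill state is feasible but unreachable in single-skill steps, which is exactly the antimatroid condition the passage appeals to.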
ai2_arc-129
multiple_choice
Which best describes two organ systems working together to help maintain homeostasis?
[ "The reproductive organs produce sex cells.", "The nerves carry signals from the eye to the brain.", "The bones and muscles of the hand work together to grip a pencil.", "The muscles of the chest tighten to push carbon dioxide out of the lungs." ]
D
Relevant Documents: Document 0::: In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from cells of the same type that act together to perform a function. Tissues of different types combine to form an organ which has a specific function. The intestinal wall, for example, is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system. An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs. The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body. Animals Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine systems both operate via a shared organ, the hypothalamus.
For this reason, the two systems are combined and studied as the neuroendocrine system. The sam Document 1::: A biological system is a complex network which connects several biologically relevant entities. Biological organization spans several scales and is determined by different structures depending on what the system is. Examples of biological systems at the macro scale are populations of organisms. On the organ and tissue scale in mammals and other animals, examples include the circulatory system, the respiratory system, and the nervous system. On the micro to the nanoscopic scale, examples of biological systems are cells, organelles, macromolecular complexes and regulatory pathways. A biological system is not to be confused with a living system, such as a living organism. Organ and tissue systems These specific systems are widely studied in human anatomy and are also present in many other animals. Respiratory system: the organs used for breathing, the pharynx, larynx, bronchi, lungs and diaphragm. Digestive system: digestion and processing food with salivary glands, oesophagus, stomach, liver, gallbladder, pancreas, intestines, rectum and anus. Cardiovascular system (heart and circulatory system): pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels. Urinary system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine. Integumentary system: skin, hair, fat, and nails. Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons. Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroid and adrenals, i.e., adrenal glands. Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream; includes the lymph and the nodes and vessels.
The lymphatic system's functions include immune responses and the development of antibodies. Immune system: protects the organism from Document 2::: The human body is the structure of a human being. It is composed of many different types of cells that together create tissues and subsequently organs and then organ systems. They ensure homeostasis and the viability of the human body. It comprises a head, hair, neck, torso (which includes the thorax and abdomen), arms and hands, legs and feet. The study of the human body includes anatomy, physiology, histology and embryology. The body varies anatomically in known ways. Physiology focuses on the systems and organs of the human body and their functions. Many systems and mechanisms interact in order to maintain homeostasis, with safe levels of substances such as sugar and oxygen in the blood. The body is studied by health professionals, physiologists, anatomists, and artists to assist them in their work. Composition The human body is composed of elements including hydrogen, oxygen, carbon, calcium and phosphorus. These elements reside in trillions of cells and non-cellular components of the body. The adult male body is about 60% water for a total water content of some . This is made up of about of extracellular fluid including about of blood plasma and about of interstitial fluid, and about of fluid inside cells. The content, acidity and composition of the water inside and outside cells are carefully maintained. The main electrolytes in body water outside cells are sodium and chloride, whereas within cells it is potassium and other phosphates. Cells The body contains trillions of cells, the fundamental unit of life. At maturity, there are roughly 30–37 trillion cells in the body, an estimate arrived at by totaling the cell numbers of all the organs of the body and cell types.
The body is also host to about the same number of non-human cells as well as multicellular organisms which reside in the gastrointestinal tract and on the skin. Not all parts of the body are made from cells. Cells sit in an extracellular matrix that consists of proteins such as collagen, Document 3::: H2.00.04.4.01001: Lymphoid tissue H2.00.05.0.00001: Muscle tissue H2.00.05.1.00001: Smooth muscle tissue H2.00.05.2.00001: Striated muscle tissue H2.00.06.0.00001: Nerve tissue H2.00.06.1.00001: Neuron H2.00.06.2.00001: Synapse H2.00.06.2.00001: Neuroglia h3.01: Bones h3.02: Joints h3.03: Muscles h3.04: Alimentary system h3.05: Respiratory system h3.06: Urinary system h3.07: Genital system h3.08: Document 4::: Splanchnology is the study of the visceral organs, i.e. digestive, urinary, reproductive and respiratory systems. The term derives from the Neo-Latin splanchno-, from the Greek σπλάγχνα, meaning "viscera". More broadly, splanchnology includes all the components of the Neuro-Endo-Immune (NEI) Supersystem. An organ (or viscus) is a collection of tissues joined in a structural unit to serve a common function. In anatomy, a viscus is an internal organ, and viscera is the plural form. Organs consist of different tissues, one or more of which prevail and determine the organ's specific structure and function. Functionally related organs often cooperate to form whole organ systems. Viscera are the soft organs of the body. There are organs and systems of organs that differ in structure and development but they are united for the performance of a common function. Such a functional collection of mixed organs forms an organ system. These organs are always made up of special cells that support their specific functions. The normal position and function of each visceral organ must be known before the abnormal can be ascertained. Healthy organs all work together cohesively, and a better understanding of how they do so helps to maintain a healthy lifestyle.
Some functions cannot be accomplished by one organ alone. That is why organs form complex systems. A system of organs is a collection of homogeneous organs that share a common plan of structure, function, and development; they are connected to each other anatomically and communicate through the NEI supersystem. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which best describes two organ systems working together to help maintain homeostasis? A. The reproductive organs produce sex cells. B. The nerves carry signals from the eye to the brain. C. The bones and muscles of the hand work together to grip a pencil. D. The muscles of the chest tighten to push carbon dioxide out of the lungs. Answer:
sciq-704
multiple_choice
What is the thick fluid in the space between bones that cushions the joint?
[ "interstitial fluid", "amniotic fluid", "synovial fluid", "collagen" ]
C
Relevant Documents: Document 0::: Outline h1.00: Cytology h2.00: General histology H2.00.01.0.00001: Stem cells H2.00.02.0.00001: Epithelial tissue H2.00.02.0.01001: Epithelial cell H2.00.02.0.02001: Surface epithelium H2.00.02.0.03001: Glandular epithelium H2.00.03.0.00001: Connective and supportive tissues H2.00.03.0.01001: Connective tissue cells H2.00.03.0.02001: Extracellular matrix H2.00.03.0.03001: Fibres of connective tissues H2.00.03.1.00001: Connective tissue proper H2.00.03.1.01001: Ligaments H2.00.03.2.00001: Mucoid connective tissue; Gelatinous connective tissue H2.00.03.3.00001: Reticular tissue H2.00.03.4.00001: Adipose tissue H2.00.03.5.00001: Cartilage tissue H2.00.03.6.00001: Chondroid tissue H2.00.03.7.00001: Bone tissue; Osseous tissue H2.00.04.0.00001: Haemotolymphoid complex H2.00.04.1.00001: Blood cells H2.00.04.1.01001: Erythrocyte; Red blood cell H2.00.04.1.02001: Leucocyte; White blood cell H2.00.04.1.03001: Platelet; Thrombocyte H2.00.04.2.00001: Plasma H2.00.04.3.00001: Blood cell production H2.00.04.4.00001: Postnatal sites of haematopoiesis H2.00.04.4.01001: Lymphoid tissue H2.00.05.0.00001: Muscle tissue H2.00.05.1.00001: Smooth muscle tissue Document 1::: Dense irregular connective tissue has fibers that are not arranged in parallel bundles as in dense regular connective tissue. Dense irregular connective tissue consists mostly of collagen fibers. It has less ground substance than loose connective tissue. Fibroblasts are the predominant cell type, scattered sparsely across the tissue. Function This type of connective tissue is found mostly in the reticular layer (or deep layer) of the dermis. It is also in the sclera and in the deeper skin layers. Due to its high proportion of collagenous fibers, dense irregular connective tissue provides strength, making the skin resistant to tearing by stretching forces from different directions.
Dense irregular connective tissue also makes up the submucosa of the digestive tract, lymph nodes, and some types of fascia. Other examples include the periosteum and perichondrium of bones, and the tunica albuginea of the testis. In the submucosa layer, the fiber bundles course in varying planes, allowing the organ to resist excessive stretching and distension. Document 2::: Fibroblast-like synoviocytes (FLS) represent a specialised cell type located inside joints in the synovium. These cells play a crucial role in the pathogenesis of chronic inflammatory diseases, such as rheumatoid arthritis. Fibroblast-like synoviocytes in normal tissues The inner lining of the joint consists of the synovium (also called the synovial membrane), a thin layer located between the joint capsule and the joint cavity. The word "synovium" is derived from the word "synovia" (or synovial fluid), which is a clear, viscous fluid produced by the synovium, and its main purpose is to reduce friction between the joint cartilages during movement. The synovium is also important for maintaining proper joint function by providing structural support and a supply of the necessary nutrients to the surrounding cartilage. The synovial membrane is divided into two compartments – the outer layer (subintima) and the inner layer (intima). The inner layer is mainly composed of two cell types, specialized macrophages (macrophage-like synovial cells) and fibroblast-like synoviocytes, which are important in maintaining internal joint homeostasis. These cells represent the main source of hyaluronic acid and also other glycoproteins, major components of the synovial fluid. Fibroblast-like synoviocytes are cells of mesenchymal origin that display many characteristics common with fibroblasts, such as expression of several types of collagens and the protein vimentin, a part of cytoskeletal filaments. Unlike fibroblasts, fibroblast-like synoviocytes also secrete unique proteins that are normally absent in other fibroblast lineages.
These notably include lubricin, a protein crucial for joint lubrication. Furthermore, these cells express a number of molecules important for the mediation of cell adhesion, such as cadherin-11, VCAM-1, various integrins and their receptors. Also specific to fibroblast-like synoviocytes is the expression of CD55; this protein is often used to identify th Document 3::: The interstitium is a contiguous fluid-filled space existing between a structural barrier, such as a cell membrane or the skin, and internal structures, such as organs, including muscles and the circulatory system. The fluid in this space is called interstitial fluid, comprises water and solutes, and drains into the lymph system. The interstitial compartment is composed of connective and supporting tissues within the body – called the extracellular matrix – that are situated outside the blood and lymphatic vessels and the parenchyma of organs. Structure The non-fluid parts of the interstitium are predominantly collagen types I, III, and V, elastin, and glycosaminoglycans, such as hyaluronan and proteoglycans that are cross-linked to form a honeycomb-like reticulum. Such structural components exist both for the general interstitium of the body, and within individual organs, such as the myocardial interstitium of the heart, the renal interstitium of the kidney, and the pulmonary interstitium of the lung. The interstitium in the submucosae of visceral organs, the dermis, superficial fascia, and perivascular adventitia are fluid-filled spaces supported by a collagen bundle lattice. The fluid spaces communicate with draining lymph nodes though they do not have lining cells or structures of lymphatic channels. Functions The interstitial fluid is a reservoir and transportation system for nutrients and solutes distributing among organs, cells, and capillaries, for signaling molecules communicating between cells, and for antigens and cytokines participating in immune regulation.
The composition and chemical properties of the interstitial fluid vary among organs and undergo changes in chemical composition during normal function, as well as during body growth, conditions of inflammation, and development of diseases, as in heart failure and chronic kidney disease. The total fluid volume of the interstitium during health is about 20% of body weight, but this space is dynamic Document 4::: In histology, a lacuna is a small space, containing an osteocyte in bone, or chondrocyte in cartilage. Bone The lacunae are situated between the lamellae, and consist of a number of oblong spaces. In an ordinary microscopic section, viewed by transmitted light, they appear as fusiform opaque spots. Each lacuna is occupied during life by a branched cell, termed an osteocyte, bone-cell or bone-corpuscle. Lacunae are connected to one another by small canals called canaliculi. A lacuna never contains more than one osteocyte. Sinuses are an example of lacuna. Cartilage The cartilage cells or chondrocytes are contained in cavities in the matrix, called cartilage lacunae; around these, the matrix is arranged in concentric lines as if it had been formed in successive portions around the cartilage cells. This constitutes the so-called capsule of the space. Each lacuna is generally occupied by a single cell, but during the division of the cells, it may contain two, four, or eight cells. Lacunae are found between narrow sheets of calcified matrix that are known as lamellae ( ). See also Lacunar stroke The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the thick fluid in the space between bones that cushions the joint? A. interstitial fluid B. amniotic fluid C. synovial fluid D. collagen Answer:
sciq-2741
multiple_choice
What is the term for an amount of force pushing against a given area?
[ "force", "resistance", "mass", "pressure" ]
D
Relevant Documents: Document 0::: Mechanical load is the physical stress on a mechanical system or component. Loads can be static or dynamic. Some loads are specified as part of the design criteria of a mechanical system. Depending on the usage, some mechanical loads can be measured by an appropriate test method in a laboratory or in the field. Vehicle It can be the external mechanical resistance against which a machine (such as a motor or engine) acts. The load can often be expressed as a curve of force versus speed. For instance, a given car traveling on a road of a given slope presents a load which the engine must act against. Because air resistance increases with speed, the motor must put out more torque at a higher speed in order to maintain the speed. By shifting to a higher gear, one may be able to meet the requirement with a higher torque and a lower engine speed, whereas shifting to a lower gear has the opposite effect. Accelerating increases the load, whereas decelerating decreases the load. Pump Similarly, the load on a pump depends on the head against which the pump is pumping, and on the size of the pump. Fan Similar considerations apply to a fan. See Affinity laws. See also Structural load Physical test Document 1::: Surface force denoted fs is the force that acts across an internal or external surface element in a material body. Normal forces and shear forces between objects are types of surface force. All cohesive forces and contact forces between objects are considered as surface forces. Surface force can be decomposed into two perpendicular components: normal forces and shear forces. A normal force acts normally over an area and a shear force acts tangentially over an area. Equations for surface force Surface force due to pressure: f = pA, where f = force, p = pressure, and A = area on which a uniform pressure acts Examples Pressure related surface force Since pressure is force per unit area, a uniform pressure p acting over an area A produces a surface force f = pA.
See also Body force Contact force Document 2::: In common usage, the mass of an object is often referred to as its weight, though these are in fact different concepts and quantities. Nevertheless, one object will always weigh more than another with less mass if both are subject to the same gravity (i.e. the same gravitational field strength). In scientific contexts, mass is the amount of "matter" in an object (though "matter" may be difficult to define), but weight is the force exerted on an object's matter by gravity. At the Earth's surface, an object whose mass is exactly one kilogram weighs approximately 9.81 newtons, the product of its mass and the gravitational field strength there. The object's weight is less on Mars, where gravity is weaker; more on Saturn, where gravity is stronger; and very small in space, far from significant sources of gravity, but it always has the same mass. Material objects at the surface of the Earth have weight despite such sometimes being difficult to measure. An object floating freely on water, for example, does not appear to have weight since it is buoyed by the water. But its weight can be measured if it is added to water in a container which is entirely supported by and weighed on a scale. Thus, the "weightless object" floating in water actually transfers its weight to the bottom of the container (where the pressure increases). Similarly, a balloon has mass but may appear to have no weight or even negative weight, due to buoyancy in air. However the weight of the balloon and the gas inside it has merely been transferred to a large area of the Earth's surface, making the weight difficult to measure. The weight of a flying airplane is similarly distributed to the ground, but does not disappear. If the airplane is in level flight, the same weight-force is distributed to the surface of the Earth as when the plane was on the runway, but spread over a larger area. 
A better scientific definition of mass is its description as being a measure of inertia, which is the tendency of an Document 3::: Most of the terms listed in Wikipedia glossaries are already defined and explained within Wikipedia itself. However, glossaries like this one are useful for looking up, comparing and reviewing large numbers of terms together. You can help enhance this page by adding new terms or writing definitions for existing ones. This glossary of mechanical engineering terms pertains specifically to mechanical engineering and its sub-disciplines. For a broad overview of engineering, see glossary of engineering. A Abrasion – is the process of scuffing, scratching, wearing down, marring, or rubbing away. It can be intentionally imposed in a controlled process using an abrasive. Abrasion can be an undesirable effect of exposure to normal use or exposure to the elements. Absolute zero – is the lowest possible temperature of a system, defined as zero kelvin or −273.15 °C. No experiment has yet measured a temperature of absolute zero. Accelerated life testing – is the process of testing a product by subjecting it to conditions (stress, strain, temperatures, voltage, vibration rate, pressure etc.) in excess of its normal service parameters in an effort to uncover faults and potential modes of failure in a short amount of time. By analyzing the product's response to such tests, engineers can make predictions about the service life and maintenance intervals of a product. Acceleration – In physics, acceleration is the rate of change of velocity of an object with respect to time. An object's acceleration is the net result of any and all forces acting on the object, as described by Newton's Second Law. The SI unit for acceleration is metre per second squared Accelerations are vector quantities (they have magnitude and direction) and add according to the parallelogram law. 
As a vector, the calculated net force is equal to the product of the object's mass (a scalar quantity) and its acceleration. Accelerometer – is a device that measures proper acceleration. Proper acceleration, being Document 4::: In physics, work is the energy transferred to or from an object via the application of force along a displacement. In its simplest form, for a constant force aligned with the direction of motion, the work equals the product of the force strength and the distance traveled. A force is said to do positive work if when applied it has a component in the direction of the displacement of the point of application. A force does negative work if it has a component opposite to the direction of the displacement at the point of application of the force. For example, when a ball is held above the ground and then dropped, the work done by the gravitational force on the ball as it falls is positive, and is equal to the weight of the ball (a force) multiplied by the distance to the ground (a displacement). If the ball is thrown upwards, the work done by the gravitational force is negative, and is equal to the weight multiplied by the displacement in the upwards direction. Both force and displacement are vectors. The work done is given by the dot product of the two vectors. When the force is constant and the angle between the force and the displacement is also constant, then the work done is given by: Work is a scalar quantity, so it has only magnitude and no direction. Work transfers energy from one place to another, or one form to another. The SI unit of work is the joule (J), the same unit as for energy. History The ancient Greek understanding of physics was limited to the statics of simple machines (the balance of forces), and did not include dynamics or the concept of work. 
During the Renaissance the dynamics of the Mechanical Powers, as the simple machines were called, began to be studied from the standpoint of how far they could lift a load, in addition to the force they could apply, leading eventually to the new concept of mechanical work. The complete dynamic theory of simple machines was worked out by Italian scientist Galileo Galilei in 1600 in Le Meccaniche (On Me The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the term for an amount of force pushing against a given area? A. force B. resistance C. mass D. pressure Answer:
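The surface-force passage above relates force, pressure, and area (f = pA), and the question turns on pressure being force per unit area. A small numerical sketch; the function names and sample values are illustrative assumptions:

```python
def surface_force(pressure_pa, area_m2):
    """Surface force due to a uniform pressure: f = p * A (newtons)."""
    return pressure_pa * area_m2

def pressure(force_n, area_m2):
    """Pressure as force per unit area: p = F / A (pascals)."""
    return force_n / area_m2

# Illustrative: roughly atmospheric pressure (101_325 Pa) over 1 m^2.
print(surface_force(101_325, 1.0))  # 101325.0 (N)

# The same force spread over twice the area gives half the pressure,
# which is why "force pushing against a given area" defines pressure,
# not force alone.
print(pressure(101_325, 1.0))  # 101325.0 (Pa)
print(pressure(101_325, 2.0))  # 50662.5 (Pa)
```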
ai2_arc-236
multiple_choice
Which characteristic can a human offspring inherit?
[ "facial scar", "blue eyes", "long hair", "broken leg" ]
B
Relevant Documents: Document 0::: Hard inheritance was a model of heredity that explicitly excludes any acquired characteristics, such as those of Lamarckism. It is the exact opposite of soft inheritance, coined by Ernst Mayr to contrast ideas about inheritance. Hard inheritance states that characteristics of an organism's offspring (passed on through DNA) will not be affected by the actions that the parental organism performs during its lifetime. For example: a medieval blacksmith who uses only his right arm to forge steel will not sire a son with a stronger right arm than left because the blacksmith's actions do not alter his genetic code. Inheritance due to usage and non-usage is excluded. Inheritance works as described in the modern synthesis of evolutionary biology. The existence of inherited epigenetic variants has led to renewed interest in soft inheritance. Document 1::: Research on the heritability of IQ inquires into the degree of variation in IQ within a population that is due to genetic variation between individuals in that population. There has been significant controversy in the academic community about the heritability of IQ since research on the issue began in the late nineteenth century. Intelligence in the normal range is a polygenic trait, meaning that it is influenced by more than one gene, and in the case of intelligence at least 500 genes. Further, explaining the similarity in IQ of closely related persons requires careful study because environmental factors may be correlated with genetic factors. Early twin studies of adult individuals have found a heritability of IQ between 57% and 73%, with some recent studies showing heritability for IQ as high as 80%. IQ goes from being weakly correlated with genetics for children to being strongly correlated with genetics for late teens and adults. The heritability of IQ increases with the child's age and reaches a plateau at 14-16 years old, continuing at that level well into adulthood.
However, poor prenatal environment, malnutrition and disease are known to have lifelong deleterious effects. Although IQ differences between individuals have been shown to have a large hereditary component, it does not follow that disparities in IQ between groups have a genetic basis. The scientific consensus is that genetics does not explain average differences in IQ test performance between racial groups. Heritability and caveats Heritability is a statistic used in the fields of breeding and genetics that estimates the degree of variation in a phenotypic trait in a population that is due to genetic variation between individuals in that population. The concept of heritability can be expressed in the form of the following question: "What is the proportion of the variation in a given trait within a population that is not explained by the environment or random chance?" Estimates of heritabi Document 2::: Mendelian traits behave according to the model of monogenic or simple gene inheritance in which one gene corresponds to one trait. Discrete traits (as opposed to continuously varying traits such as height) with simple Mendelian inheritance patterns are relatively rare in nature, and many of the clearest examples in humans cause disorders. Discrete traits found in humans are common examples for teaching genetics. Mendelian model According to the model of Mendelian inheritance, alleles may be dominant or recessive, one allele is inherited from each parent, and only those who inherit a recessive allele from each parent exhibit the recessive phenotype. Offspring with either one or two copies of the dominant allele will display the dominant phenotype. Very few phenotypes are purely Mendelian traits. Common violations of the Mendelian model include incomplete dominance, codominance, genetic linkage, environmental effects, and quantitative contributions from a number of genes (see: gene interactions, polygenic inheritance, oligogenic inheritance). 
OMIM (Online Mendelian Inheritance in Man) is a comprehensive database of human genotype–phenotype links. Many visible human traits that exhibit high heritability were included in the older McKusick's Mendelian Inheritance in Man. Before the discovery of genotyping, they were used as genetic markers in medicolegal practice, including in cases of disputed paternity. Human traits with probable or uncertain simple inheritance patterns See also Polygenic inheritance Trait Gene interaction Dominance Homozygote Heterozygote Document 3::: A quantitative trait locus (QTL) is a locus (section of DNA) that correlates with variation of a quantitative trait in the phenotype of a population of organisms. QTLs are mapped by identifying which molecular markers (such as SNPs or AFLPs) correlate with an observed trait. This is often an early step in identifying the actual genes that cause the trait variation. Definition A quantitative trait locus (QTL) is a region of DNA which is associated with a particular phenotypic trait, which varies in degree and which can be attributed to polygenic effects, i.e., the product of two or more genes, and their environment. These QTLs are often found on different chromosomes. The number of QTLs which explain variation in the phenotypic trait indicates the genetic architecture of a trait. It may indicate that plant height is controlled by many genes of small effect, or by a few genes of large effect. Typically, QTLs underlie continuous traits (those traits which vary continuously, e.g. height) as opposed to discrete traits (traits that have two or several character values, e.g. red hair in humans, a recessive trait, or smooth vs. wrinkled peas used by Mendel in his experiments). Moreover, a single phenotypic trait is usually determined by many genes. Consequently, many QTLs are associated with a single trait. Another use of QTLs is to identify candidate genes underlying a trait. 
The DNA sequence of any genes in this region can then be compared to a database of DNA for genes whose function is already known, this task being fundamental for marker-assisted crop improvement. History Mendelian inheritance was rediscovered at the beginning of the 20th century. As Mendel's ideas spread, geneticists began to connect Mendel's rules of inheritance of single factors to Darwinian evolution. For early geneticists, it was not immediately clear that the smooth variation in traits like body size (i.e., incomplete dominance) was caused by the inheritance of single genetic factors. Althoug Document 4::: The Generalist Genes hypothesis of learning abilities and disabilities was originally coined in an article by Plomin & Kovas (2005). The Generalist Genes hypothesis suggests that most genes associated with common learning disabilities and abilities are generalist in three ways. Firstly, the same genes that influence common learning abilities (e.g., high reading aptitude) are also responsible for common learning disabilities (e.g., reading disability): they are strongly genetically correlated. Secondly, many of the genes associated with one aspect of a learning disability (e.g., vocabulary problems) also influence other aspects of this learning disability (e.g., grammar problems). Thirdly, genes that influence one learning disability (e.g., reading disability) are largely the same as those that influence other learning disabilities (e.g., mathematics disability). The Generalist Genes hypothesis has important implications for education, cognitive sciences and molecular genetics. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which characteristic can a human offspring inherit? A. facial scar B. blue eyes C. long hair D. broken leg Answer:
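Heritability, as defined in the passage above, is the proportion of phenotypic variance in a population attributable to genetic variation rather than environment or chance. A toy simulation under a simple additive model makes the definition concrete; the variable names and variance values are illustrative assumptions, not figures from the cited studies:

```python
import random
import statistics

random.seed(42)

# Additive model: phenotype = genetic value + independent environmental noise.
# True heritability is Var(G) / (Var(G) + Var(E)) = 4 / (4 + 1) = 0.8.
N = 20_000
genetic = [random.gauss(0.0, 2.0) for _ in range(N)]      # Var(G) = 4
environment = [random.gauss(0.0, 1.0) for _ in range(N)]  # Var(E) = 1
phenotype = [g + e for g, e in zip(genetic, environment)]

h2 = statistics.variance(genetic) / statistics.variance(phenotype)
print(round(h2, 3))  # close to the true value 0.8, up to sampling noise
```

In real data the genetic values are unobserved, so heritability must be inferred indirectly (e.g. from twin comparisons); the simulation only illustrates the variance-ratio definition itself.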
sciq-10228
multiple_choice
During what period on earth was coal formed?
[ "Mesozoic", "Neoproterozoic", "Neoproterozoic era", "the carboniferous period" ]
D
Relevant Documents: Document 0::: The Silurian-Devonian Terrestrial Revolution, also known as the Devonian Plant Explosion (DePE) and the Devonian explosion, was a period of rapid plant and fungal diversification that occurred 428 to 359 million years ago during the Silurian and Devonian, with the most critical phase occurring during the Late Silurian and Early Devonian. This diversification of terrestrial plant life had vast impacts on the biotic composition of earth's soil, its atmosphere, its oceans, and on all plant and animal life that would follow it. Through fierce competition for light and available space on land, phenotypic diversity of plants increased greatly, comparable in scale and effect to the explosion in diversity of animal life during the Cambrian explosion, especially in vertical plant growth, which allowed for photoautotrophic canopies to develop, and forever altering plant evolutionary floras that followed. As plants evolved and radiated, so too did arthropods, which formed symbiotic relationships with them. This Silurian and Devonian flora was significantly different in appearance, reproduction, and anatomy from most modern flora. Much of this flora had died out in extinction events including the Kellwasser Event, the Hangenberg Event, the Carboniferous Rainforest Collapse, and the End-Permian Extinction. Silurian and Devonian life Rather than plants, it was fungi, in particular nematophytes such as Prototaxites, that dominated the early stages of this terrestrial biodiversification event. Nematophytes towered over even the largest land plants during the Silurian and Early Devonian, only being truly surpassed in size in the Early Carboniferous. The nutrient-distributing glomeromycotan mycorrhizal networks of nematophytes were very likely to have acted as facilitators for the expansion of plants into terrestrial environments, which followed the colonising fungi. 
The first fossils of arbuscular mycorrhizae, a type of symbiosis between fungi and vascular plants, are known from th Document 1::: Uranium mining around Bancroft, Ontario, was conducted at four sites, beginning in the early 1950s and concluding by 1982. Bancroft was one of two major uranium-producing areas in Ontario, and one of seven in Canada, all located along the edge of the Canadian Shield. In the context of mining, the "Bancroft area" includes Haliburton, Hastings, and Renfrew counties, and all areas between Minden and Lake Clear. Activity in the mid-1950s was described by engineer A. S. Bayne in a 1977 report as the "greatest uranium prospecting rush in the world". As a result of activities at its four major uranium mines, Bancroft experienced rapid population and economic growth throughout the 1950s. By 1958, Canada had become one of the world's leading producers of uranium; the $274 million of uranium exports that year represented Canada's most significant mineral export. By 1963, the federal government had purchased more than $1.5 billion of uranium from Canadian producers, but soon thereafter the global supply uranium market collapsed and the government stopped issuing contracts to buy. Mining resumed when uranium prices rose during the 1970s energy crisis, but this second period of activity ended by 1982. Three of the uranium mines are decommissioned, and one is undergoing rehabilitation. A twofold increase in lung cancer development and mortality has been observed among former mine workers. Bancroft continues to be known for gems and mineralogy. Geology and mineralogy During the most recent ice age, in the area of what is now Bancroft, Ontario, ancient glaciers removed soil and rock, exposing the Precambrian granite that had been the heart of volcanic mountains on an ancient sea bed. 
During the Grenville orogenies, sedimentary rocks were transformed by heat and pressure into banded gneiss and marble, incorporating gabbro and diorite (rich in iron and other dark minerals). Some uranium ores in these structures are about 1,000 million years old, while others are understood to be Document 2::: Phyllocladane is a tricyclic diterpane which is generally found in gymnosperm resins. It has a formula of C20H34 and a molecular weight of 274.4840. As a biomarker, it can be used to learn about the gymnosperm input into a hydrocarbon deposit, and about the age of the deposit in general. It indicates a terrogenous origin of the source rock. Diterpanes, such as Phyllocladane are found in source rocks as early as the middle and late Devonian periods, which indicates any rock containing them must be no more than approximately 360 Ma. Phyllocladane is commonly found in lignite, and like other resinites derived from gymnosperms, is naturally enriched in 13C. This enrichment is a result of the enzymatic pathways used to synthesize the compound. The compound can be identified by GC-MS. A peak of m/z 123 is indicative of tricyclic diterpenoids in general, and phyllocladane in particular is further characterized by strong peaks at m/z 231 and m/z 189. Presence of phyllocladane and its relative abundance to other tricyclic diterpanes can be used to differentiate between various oil fields. Document 3::: The carbon cycle is that part of the biogeochemical cycle by which carbon is exchanged among the biosphere, pedosphere, geosphere, hydrosphere, and atmosphere of Earth. Other major biogeochemical cycles include the nitrogen cycle and the water cycle. Carbon is the main component of biological compounds as well as a major component of many minerals such as limestone. The carbon cycle comprises a sequence of events that are key to making Earth capable of sustaining life. 
It describes the movement of carbon as it is recycled and reused throughout the biosphere, as well as long-term processes of carbon sequestration (storage) to and release from carbon sinks. To describe the dynamics of the carbon cycle, a distinction can be made between the fast and slow carbon cycle. The fast carbon cycle is also referred to as the biological carbon cycle. Fast carbon cycles can complete within years, moving substances from atmosphere to biosphere, then back to the atmosphere. Slow or geological cycles (also called deep carbon cycle) can take millions of years to complete, moving substances through the Earth's crust between rocks, soil, ocean and atmosphere. Human activities have disturbed the fast carbon cycle for many centuries by modifying land use, and moreover with the recent industrial-scale mining of fossil carbon (coal, petroleum, and gas extraction, and cement manufacture) from the geosphere. Carbon dioxide in the atmosphere had increased nearly 52% over pre-industrial levels by 2020, forcing greater atmospheric and Earth surface heating by the Sun. The increased carbon dioxide has also caused a reduction in the ocean's pH value and is fundamentally altering marine chemistry. The majority of fossil carbon has been extracted over just the past half century, and rates continue to rise rapidly, contributing to human-caused climate change. Main compartments The carbon cycle was first described by Antoine Lavoisier and Joseph Priestley, and popularised by Humphry Davy. The g Document 4::: The start of the Cambrian period is marked by "fluctuations" in a number of geochemical records, including Strontium, Sulfur and Carbon isotopic excursions. While these anomalies are difficult to interpret, a number of possibilities have been put forward. They probably represent changes on a global scale, and as such may help to constrain possible causes of the Cambrian explosion. 
The chemical signature may be related to continental break-up, the end of a "global glaciation", or a catastrophic drop in productivity caused by a mass extinction just before the beginning of the Cambrian. Isotopes Isotopes are different forms of elements; they have a different number of neutrons in the nucleus, meaning they have very similar chemical properties, but different mass. The weight difference means that some isotopes are discriminated against in chemical processes – for example, plants find it easier to incorporate the lighter 12C than heavy 13C. Other isotopes are only produced as a result of the radioactive decay of other elements, such as 87Sr, the daughter isotope of 87Rb. Rb, and therefore 87Sr, is common in the crust, so the abundance of 87Sr in a sample of sediment (relative to 86Sr) is related to the amount of sediment which originated in the crust, as opposed to from the oceans. The ratios of three major isotopes, 87Sr / 86Sr, 34S / 32S and 13C / 12C, undergo dramatic fluctuations around the beginning of the Cambrian. Carbon isotopes Carbon has 2 stable isotopes, carbon-12 (12C) and carbon-13 (13C). The ratio between the two is denoted δ13C, and represents a number of factors. Because organic matter preferentially takes up the lighter 12C, an increase in productivity increases the δ13C of the rest of the system, and vice versa. Some carbon reservoirs are very isotopically light: for instance, biogenic methane, produced by bacterial decomposition, has a δ13C of −60‰ – vast, when 1‰ is a large fluctuation! An injection of carbon from one of these reservoirs could therefore The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. During what period on earth was coal formed? A. Mesozoic B. Neoproterozoic C. Neoproterozoic era D. the carboniferous period Answer:
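Document 4 above describes carbon isotope ratios in delta notation: the per-mil (‰) deviation of a sample's 13C/12C ratio from a reference standard. A minimal sketch of that convention follows; the VPDB reference ratio used here is a commonly cited standard constant, not a value from the passage.

```python
# delta-13C: per-mil deviation of a sample's 13C/12C ratio from a standard.
# The VPDB reference ratio (~0.0112372) is a standard constant assumed here;
# it is not given in the passage above.
R_VPDB = 0.0112372

def delta13C(r_sample, r_standard=R_VPDB):
    """Return delta-13C in per mil (per thousand)."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A reservoir depleted in 13C has a strongly negative delta-13C; the passage
# quotes biogenic methane near -60 per mil, i.e. about 6% depleted in 13C.
light = delta13C(R_VPDB * 0.94)
print(round(light, 1))  # -> -60.0
```

Note the sign convention: samples enriched in the heavy isotope relative to the standard give positive δ13C, depleted samples give negative values, matching the passage's point that higher productivity drives the residual system's δ13C upward.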
sciq-4939
multiple_choice
Maintaining a high metabolic rate takes a lot of what?
[ "hydrogen", "power", "energy", "fuel" ]
C
Relevant Documents: Document 0::: An energy budget is a balance sheet of energy income against expenditure. It is studied in the field of energetics, which deals with the study of energy transfer and transformation from one form to another. The calorie is the basic unit of measurement. An organism in a laboratory experiment is an open thermodynamic system, exchanging energy with its surroundings in three ways - heat, work and the potential energy of biochemical compounds. Organisms use ingested food resources (C=consumption) as building blocks in the synthesis of tissues (P=production) and as fuel in the metabolic processes that power this synthesis and other physiological processes (R=respiratory loss). Some of the resources are lost as waste products (F=faecal loss, U=urinary loss). All these aspects of metabolism can be represented in energy units. The basic model of the energy budget may be shown as: P = C - R - U - F, or P = C - (R + U + F), or C = P + R + U + F. All the aspects of metabolism can be represented in energy units (e.g. joules (J); 1 calorie = 4.2 kJ). Energy used for metabolism will be R = C - (F + U + P). Energy used in maintenance will be R + F + U = C - P. Endothermy and ectothermy Energy budget allocation varies for endotherms and ectotherms. Ectotherms rely on the environment as a heat source, while endotherms maintain their body temperature through the regulation of metabolic processes. The heat produced in association with metabolic processes facilitates the active lifestyles of endotherms and their ability to travel far distances over a range of temperatures in the search for food. Ectotherms are limited by the ambient temperature of the environment around them, but the lack of substantial metabolic heat production accounts for an energetically inexpensive metabolic rate. The energy demands for ectotherms are generally one tenth of those required for endotherms. Document 1::: The term human equivalent is used in a number of different contexts. 
This term can refer to human equivalents of various comparisons of animate and inanimate things. Animal models in chemistry and medicine Animal models are used to learn more about a disease, its diagnosis and its treatment, with animal models predicting human toxicity in up to 71% of cases. The human equivalent dose (HED) or human equivalent concentration (HEC) is the quantity of a chemical that, when administered to humans, produces an effect equal to that produced in test animals by a smaller dose. Calculating the HED is a step in carrying out a clinical trial of a pharmaceutical drug. Human energy usage and conversion The concept of human-equivalent energy (H-e) assists in understanding of energy flows in physical and biological systems by expressing energy units in human terms: it provides a “feel” for the use of a given amount of energy by expressing it in terms of the relative quantity of energy needed for human metabolism, assuming an average human energy expenditure of 12,500 kJ per day and a basal metabolic rate of 80 watts. A light bulb running at 100 watts is running at 1.25 human equivalents (100/80), i.e. 1.25 H-e. On the other hand, a human may generate as much as 1,000 watts for a task lasting a few minutes, or even more for a task of a few seconds' duration, while climbing a flight of stairs may represent work at a rate of about 200 watts. Animal attributes expressed in terms of human equivalents Cat and dog years The ages of domestic cats and dogs are often referred to in terms of "cat years" or "dog years", representing a conversion to human-equivalent years. One formula for cat years is based on a cat reaching maturity in approximately 1 year, which could be seen as 16 in human terms, then adding about 4 years for every year the cat ages. A 5-year-old cat would then be (5 − 1) × 4 + 16 = 32 "cat years" (i.e. 
human-equivalent years), and a 10-year-old cat (10 − 1) × 4 + 16 = Document 2::: Relatively speaking, the brain consumes an immense amount of energy in comparison to the rest of the body. The mechanisms involved in the transfer of energy from foods to neurons are likely to be fundamental to the control of brain function. Human bodily processes, including the brain, all require both macronutrients, as well as micronutrients. Insufficient intake of selected vitamins, or certain metabolic disorders, may affect cognitive processes by disrupting the nutrient-dependent processes within the body that are associated with the management of energy in neurons, which can subsequently affect synaptic plasticity, or the ability to encode new memories. Macronutrients The human brain requires nutrients obtained from the diet to develop and sustain its physical structure and cognitive functions. Additionally, the brain requires caloric energy predominately derived from the primary macronutrients to operate. The three primary macronutrients include carbohydrates, proteins, and fats. Each macronutrient can impact cognition through multiple mechanisms, including glucose and insulin metabolism, neurotransmitter actions, oxidative stress and inflammation, and the gut-brain axis. Inadequate macronutrient consumption or proportion could impair optimal cognitive functioning and have long-term health implications. Carbohydrates Through digestion, dietary carbohydrates are broken down and converted into glucose, which is the sole energy source for the brain. Optimal brain function relies on adequate carbohydrate consumption, as carbohydrates provide the quickest source of glucose for the brain. Glucose deficiencies such as hypoglycaemia reduce available energy for the brain and impair all cognitive processes and performance. 
Additionally, situations with high cognitive demand, such as learning a new task, increase brain glucose utilization, depleting blood glucose stores and initiating the need for supplementation. Complex carbohydrates, especially those with high d Document 3::: Basal metabolic rate (BMR) is the rate of energy expenditure per unit time by endothermic animals at rest. It is reported in energy units per unit time ranging from watt (joule/second) to ml O2/min or joule per hour per kg body mass J/(h·kg). Proper measurement requires a strict set of criteria to be met. These criteria include being in a physically and psychologically undisturbed state and being in a thermally neutral environment while in the post-absorptive state (i.e., not actively digesting food). In bradymetabolic animals, such as fish and reptiles, the equivalent term standard metabolic rate (SMR) applies. It follows the same criteria as BMR, but requires the documentation of the temperature at which the metabolic rate was measured. This makes BMR a variant of standard metabolic rate measurement that excludes the temperature data, a practice that has led to problems in defining "standard" rates of metabolism for many mammals. Metabolism comprises the processes that the body needs to function. Basal metabolic rate is the amount of energy per unit of time that a person needs to keep the body functioning at rest. Some of those processes are breathing, blood circulation, controlling body temperature, cell growth, brain and nerve function, and contraction of muscles. Basal metabolic rate affects the rate that a person burns calories and ultimately whether that individual maintains, gains, or loses weight. The basal metabolic rate accounts for about 60 to 75% of the daily calorie expenditure by individuals. It is influenced by several factors. In humans, BMR typically declines by 1–2% per decade after age 20, mostly due to loss of fat-free mass, although the variability between individuals is high. 
Description The body's generation of heat is known as thermogenesis and it can be measured to determine the amount of energy expended. BMR generally decreases with age, and with the decrease in lean body mass (as may happen with aging). Increasing muscle mass has the ef Document 4::: In biology, energy homeostasis, or the homeostatic control of energy balance, is a biological process that involves the coordinated homeostatic regulation of food intake (energy inflow) and energy expenditure (energy outflow). The human brain, particularly the hypothalamus, plays a central role in regulating energy homeostasis and generating the sense of hunger by integrating a number of biochemical signals that transmit information about energy balance. Fifty percent of the energy from glucose metabolism is immediately converted to heat. Energy homeostasis is an important aspect of bioenergetics. Definition In the US, biological energy is expressed using the energy unit Calorie with a capital C (i.e. a kilocalorie), which equals the energy needed to increase the temperature of 1 kilogram of water by 1 °C (about 4.18 kJ). Energy balance, through biosynthetic reactions, can be measured with the following equation: Energy intake (from food and fluids) = Energy expended (through work and heat generated) + Change in stored energy (body fat and glycogen storage) The first law of thermodynamics states that energy can be neither created nor destroyed. But energy can be converted from one form of energy to another. So, when a calorie of food energy is consumed, one of three particular effects occur within the body: a portion of that calorie may be stored as body fat, triglycerides, or glycogen, transferred to cells and converted to chemical energy in the form of adenosine triphosphate (ATP – a coenzyme) or related compounds, or dissipated as heat. Energy Intake Energy intake is measured by the amount of calories consumed from food and fluids. 
Energy intake is modulated by hunger, which is primarily regulated by the hypothalamus, and choice, which is determined by the sets of brain structures that are responsible for stimulus control (i.e., operant conditioning and classical conditioning) and cognitive control of eating behavior. Hunger is regulated in part by the act The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Maintaining a high metabolic rate takes a lot of what? A. hydrogen B. power C. energy D. fuel Answer:
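Document 0 above gives the energy-budget identity P = C − (R + U + F): consumption is partitioned into production, respiration, and waste. A minimal sketch of that bookkeeping, with illustrative numbers that are not from the passage:

```python
def production(C, R, U, F):
    """Tissue production: P = C - (R + U + F).

    C = consumption, R = respiratory loss, U = urinary loss,
    F = faecal loss; all terms in the same energy unit (e.g. kJ/day).
    """
    return C - (R + U + F)

# Illustrative values only (kJ/day), not taken from the passage.
C, R, U, F = 1000.0, 600.0, 50.0, 150.0
P = production(C, R, U, F)
print(P)  # -> 200.0

# The same identity rearranges to the passage's other forms,
# e.g. metabolism R = C - (F + U + P) and maintenance R + F + U = C - P.
assert C - (F + U + P) == R
assert R + F + U == C - P
```

Because these are all rearrangements of one conservation identity, measuring any four of the five terms fixes the fifth, which is how such budgets are estimated in practice.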
sciq-8972
multiple_choice
What do we call the recycling of inorganic matter between living organisms and their environment?
[ "phosphorus cycle", "water cycle", "biogeochemical cycle", "nutrient cycle" ]
C
Relevant Documents: Document 0::: A nutrient cycle (or ecological recycling) is the movement and exchange of inorganic and organic matter back into the production of matter. Energy flow is a unidirectional and noncyclic pathway, whereas the movement of mineral nutrients is cyclic. Mineral cycles include the carbon cycle, sulfur cycle, nitrogen cycle, water cycle, phosphorus cycle, and oxygen cycle, among others that continually recycle along with other mineral nutrients into productive ecological nutrition. Overview The nutrient cycle is nature's recycling system. All forms of recycling have feedback loops that use energy in the process of putting material resources back into use. Recycling in ecology is regulated to a large extent during the process of decomposition. Ecosystems employ biodiversity in the food webs that recycle natural materials, such as mineral nutrients, which include water. Recycling in natural systems is one of the many ecosystem services that sustain and contribute to the well-being of human societies. There is much overlap between the terms biogeochemical cycle and nutrient cycle. Most textbooks integrate the two and seem to treat them as synonymous terms. However, the terms often appear independently. Nutrient cycle is more often used in direct reference to the idea of an intra-system cycle, where an ecosystem functions as a unit. From a practical point of view, it does not make sense to assess a terrestrial ecosystem by considering the full column of air above it as well as the great depths of Earth below it. While an ecosystem often has no clear boundary, as a working model it is practical to consider the functional community where the bulk of matter and energy transfer occurs. Nutrient cycling occurs in ecosystems that participate in the "larger biogeochemical cycles of the earth through a system of inputs and outputs." Complete and closed loop Ecosystems are capable of complete recycling. 
Complete recycling means that 100% of the waste material can be reconstituted inde Document 1::: A biogeochemical cycle, or more generally a cycle of matter, is the movement and transformation of chemical elements and compounds between living organisms, the atmosphere, and the Earth's crust. Major biogeochemical cycles include the carbon cycle, the nitrogen cycle and the water cycle. In each cycle, the chemical element or molecule is transformed and cycled by living organisms and through various geological forms and reservoirs, including the atmosphere, the soil and the oceans. It can be thought of as the pathway by which a chemical substance cycles (is turned over or moves through) the biotic compartment and the abiotic compartments of Earth. The biotic compartment is the biosphere and the abiotic compartments are the atmosphere, lithosphere and hydrosphere. For example, in the carbon cycle, atmospheric carbon dioxide is absorbed by plants through photosynthesis, which converts it into organic compounds that are used by organisms for energy and growth. Carbon is then released back into the atmosphere through respiration and decomposition. Additionally, carbon is stored in fossil fuels and is released into the atmosphere through human activities such as burning fossil fuels. In the nitrogen cycle, atmospheric nitrogen gas is converted by plants into usable forms such as ammonia and nitrates through the process of nitrogen fixation. These compounds can be used by other organisms, and nitrogen is returned to the atmosphere through denitrification and other processes. In the water cycle, the universal solvent water evaporates from land and oceans to form clouds in the atmosphere, and then precipitates back to different parts of the planet. Precipitation can seep into the ground and become part of groundwater systems used by plants and other organisms, or can runoff the surface to form lakes and rivers. 
Subterranean water can then seep into the ocean along with river discharges, rich with dissolved and particulate organic matter and other nutrients. There are bio Document 2::: Marine biogeochemical cycles are biogeochemical cycles that occur within marine environments, that is, in the saltwater of seas or oceans or the brackish water of coastal estuaries. These biogeochemical cycles are the pathways chemical substances and elements move through within the marine environment. In addition, substances and elements can be imported into or exported from the marine environment. These imports and exports can occur as exchanges with the atmosphere above, the ocean floor below, or as runoff from the land. There are biogeochemical cycles for the elements calcium, carbon, hydrogen, mercury, nitrogen, oxygen, phosphorus, selenium, and sulfur; molecular cycles for water and silica; macroscopic cycles such as the rock cycle; as well as human-induced cycles for synthetic compounds such as polychlorinated biphenyl (PCB). In some cycles there are reservoirs where a substance can be stored for a long time. The cycling of these elements is interconnected. Marine organisms, and particularly marine microorganisms are crucial for the functioning of many of these cycles. The forces driving biogeochemical cycles include metabolic processes within organisms, geological processes involving the earth's mantle, as well as chemical reactions among the substances themselves, which is why these are called biogeochemical cycles. While chemical substances can be broken down and recombined, the chemical elements themselves can be neither created nor destroyed by these forces, so apart from some losses to and gains from outer space, elements are recycled or stored (sequestered) somewhere on or within the planet. 
Overview Energy flows directionally through ecosystems, entering as sunlight (or inorganic molecules for chemoautotrophs) and leaving as heat during the many transfers between trophic levels. However, the matter that makes up living organisms is conserved and recycled. The six most common elements associated with organic molecules—carbon, nitrogen, hydrogen, oxy Document 3::: The water cycle, also known as the hydrologic cycle or the hydrological cycle, is a biogeochemical cycle that describes the continuous movement of water on, above and below the surface of the Earth. The mass of water on Earth remains fairly constant over time but the partitioning of the water into the major reservoirs of ice, fresh water, saline water (salt water) and atmospheric water is variable depending on a wide range of climatic variables. The water moves from one reservoir to another, such as from river to ocean, or from the ocean to the atmosphere, by the physical processes of evaporation, transpiration, condensation, precipitation, infiltration, surface runoff, and subsurface flow. In doing so, the water goes through different forms: liquid, solid (ice) and vapor. The ocean plays a key role in the water cycle as it is the source of 86% of global evaporation. The water cycle involves the exchange of energy, which leads to temperature changes. When water evaporates, it takes up energy from its surroundings and cools the environment. When it condenses, it releases energy and warms the environment. These heat exchanges influence climate. The evaporative phase of the cycle purifies water, causing salts and other solids picked up during the cycle to be left behind, and then the condensation phase in the atmosphere replenishes the land with freshwater. The flow of liquid water and ice transports minerals across the globe. It is also involved in reshaping the geological features of the Earth, through processes including erosion and sedimentation. 
The water cycle is also essential for the maintenance of most life and ecosystems on the planet. Description Overall process The water cycle is powered from the energy emitted by the sun. This energy heats water in the ocean and seas. Water evaporates as water vapor into the air. Some ice and snow sublimates directly into water vapor. Evapotranspiration is water transpired from plants and evaporated from the soil. Th Document 4::: The carbon cycle is that part of the biogeochemical cycle by which carbon is exchanged among the biosphere, pedosphere, geosphere, hydrosphere, and atmosphere of Earth. Other major biogeochemical cycles include the nitrogen cycle and the water cycle. Carbon is the main component of biological compounds as well as a major component of many minerals such as limestone. The carbon cycle comprises a sequence of events that are key to making Earth capable of sustaining life. It describes the movement of carbon as it is recycled and reused throughout the biosphere, as well as long-term processes of carbon sequestration (storage) to and release from carbon sinks. To describe the dynamics of the carbon cycle, a distinction can be made between the fast and slow carbon cycle. The fast carbon cycle is also referred to as the biological carbon cycle. Fast carbon cycles can complete within years, moving substances from atmosphere to biosphere, then back to the atmosphere. Slow or geological cycles (also called deep carbon cycle) can take millions of years to complete, moving substances through the Earth's crust between rocks, soil, ocean and atmosphere. Human activities have disturbed the fast carbon cycle for many centuries by modifying land use, and moreover with the recent industrial-scale mining of fossil carbon (coal, petroleum, and gas extraction, and cement manufacture) from the geosphere. 
Carbon dioxide in the atmosphere had increased nearly 52% over pre-industrial levels by 2020, forcing greater atmospheric and Earth surface heating by the Sun. The increased carbon dioxide has also caused a reduction in the ocean's pH value and is fundamentally altering marine chemistry. The majority of fossil carbon has been extracted over just the past half century, and rates continue to rise rapidly, contributing to human-caused climate change. Main compartments The carbon cycle was first described by Antoine Lavoisier and Joseph Priestley, and popularised by Humphry Davy. The g The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do we call the recycling of inorganic matter between living organisms and their environment? A. phosphorus cycle B. water cycle C. biogeochemical cycle D. nutrient cycle Answer:
sciq-4661
multiple_choice
Similarity in biochemicals, like the glucose used by virtually all living things for energy, provides evidence of what?
[ "gravity", "DNA", "variation", "evolution" ]
D
Relevant Documents: Document 0::: The following outline is provided as an overview of and topical guide to biophysics: Biophysics – interdisciplinary science that uses the methods of physics to study biological systems. Nature of biophysics Biophysics is: An academic discipline – a branch of knowledge that is taught and researched at the college or university level. Disciplines are defined (in part), and recognized by, the academic journals in which research is published, and the learned societies and academic departments or faculties to which their practitioners belong. A scientific field (a branch of science) – a widely recognized category of specialized expertise within science, which typically embodies its own terminology and nomenclature. Such a field will usually be represented by one or more scientific journals, where peer-reviewed research is published. A natural science – one that seeks to elucidate the rules that govern the natural world using empirical and scientific methods. A biological science – concerned with the study of living organisms, including their structure, function, growth, evolution, distribution, and taxonomy. A branch of physics – concerned with the study of matter and its motion through space and time, along with related concepts such as energy and force. An interdisciplinary field – a field of science that overlaps with other sciences. Scope of biophysics research Biomolecular scale Biomolecule Biomolecular structure Organismal scale Animal locomotion Biomechanics Biomineralization Motility Environmental scale Biophysical environment Biophysics research overlaps with Agrophysics Biochemistry Biophysical chemistry Bioengineering Biogeophysics Nanotechnology Systems biology Branches of biophysics Astrobiophysics – the field of intersection between astrophysics and biophysics, concerned with the influence of astrophysical phenomena upon life on planet Earth or some other planet in general. 
Medical biophysics – interdisciplinary field that applies me Document 1::: This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines. Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry) other on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mindneuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, it has provided information on certain diseases which has overall aided in the understanding of human health. 
Basic life science branches Biology – scientific study of life Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans Astrobiology – the study of the formation and presence of life in the universe Bacteriology – study of bacteria Biotechnology – study of combination of both the living organism and technology Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge Biolinguistics – the study of the biology and evolution of language. Biological anthropology – the study of humans, non-hum Document 2::: This is a list of topics in molecular biology. See also index of biochemistry articles. Document 3::: Biology is the scientific study of life. It is a natural science with a broad scope but has several unifying themes that tie it together as a single, coherent field. For instance, all organisms are made up of cells that process hereditary information encoded in genes, which can be transmitted to future generations. Another major theme is evolution, which explains the unity and diversity of life. Energy processing is also important to life as it allows organisms to move, grow, and reproduce. Finally, all organisms are able to regulate their own internal environments. Biologists are able to study life at multiple levels of organization, from the molecular biology of a cell to the anatomy and physiology of plants and animals, and evolution of populations. Hence, there are multiple subdisciplines within biology, each defined by the nature of their research questions and the tools that they use. Like other scientists, biologists use the scientific method to make observations, pose questions, generate hypotheses, perform experiments, and form conclusions about the world around them. 
Life on Earth, which emerged more than 3.7 billion years ago, is immensely diverse. Biologists have sought to study and classify the various forms of life, from prokaryotic organisms such as archaea and bacteria to eukaryotic organisms such as protists, fungi, plants, and animals. These various organisms contribute to the biodiversity of an ecosystem, where they play specialized roles in the cycling of nutrients and energy through their biophysical environment. History The earliest roots of science, which included medicine, can be traced to ancient Egypt and Mesopotamia in around 3000 to 1200 BCE. Their contributions shaped ancient Greek natural philosophy. Ancient Greek philosophers such as Aristotle (384–322 BCE) contributed extensively to the development of biological knowledge. He explored biological causation and the diversity of life. His successor, Theophrastus, began the scienti
Macroscopic physical structures and textures: Structures that indicate microbial ecosystems, biofilms (e.g., stromatolites), or fossils of larger organisms. Temporal variability: Variations in time of atmospheric gases, reflectivity, or macroscopic appearance that indicates life's presence. Surface reflectance features: Large-scale reflectance features due to biological pigments could be detected remotely. Atmospheric gases: Gases formed by metabolic and/or aqueous processes, which may be present on a planet-wide scale. Technosignatures: Signatures that indicate a technologically advanced civilization. Viability Determining whether a potential biosignature is worth investigating is a fundamentally complicated process. Scientists must consider any and every possible alternate explanation before concluding that something is a true biosignature. This includes investigating the minute details that make other planets unique and understanding when there is a deviat The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Similarity in biochemicals, like the glucose used by virtually all living things for energy, provides evidence of what? A. gravity B. DNA C. variation D. evolution Answer:
scienceQA-9960
multiple_choice
Select the animal.
[ "Hydrangea bushes can grow colorful flowers.", "Maple trees have star-shaped leaves.", "Snowy owls eat small animals.", "Rose bushes can grow colorful flowers." ]
C
A rose bush is a plant. It can grow colorful flowers. Most rose bushes have sharp thorns. The thorns help protect the rose bush from being eaten by animals. A snowy owl is an animal. It eats small animals. Snowy owls live in cold places. Snowy owls have feathers on their feet to protect them from the cold. A maple tree is a plant. It has star-shaped leaves. Maple trees have green leaves in the spring and summer. In the fall, their leaves turn yellow, red, or brown. A hydrangea bush is a plant. It can grow colorful flowers. Hydrangea bushes can have blue, white, purple, or pink flowers.
Relevant Documents: Document 0::: Animals are multicellular eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, are able to move, reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million in total. Animals range in size from 8.5 millionths of a metre to long and have complex interactions with each other and their environments, forming intricate food webs. The study of animals is called zoology. Animals may be listed or indexed by many criteria, including taxonomy, status as endangered species, their geographical location, and their portrayal and/or naming in human culture. By common name List of animal names (male, female, young, and group) By aspect List of common household pests List of animal sounds List of animals by number of neurons By domestication List of domesticated animals By eating behaviour List of herbivorous animals List of omnivores List of carnivores By endangered status IUCN Red List endangered species (Animalia) United States Fish and Wildlife Service list of endangered species By extinction List of extinct animals List of extinct birds List of extinct mammals List of extinct cetaceans List of extinct butterflies By region Lists of amphibians by region Lists of birds by region Lists of mammals by region Lists of reptiles by region By individual (real or fictional) Real Lists of snakes List of individual cats List of oldest cats List of giant squids List of individual elephants List of historical horses List of leading Thoroughbred racehorses List of individual apes List of individual bears List of giant pandas List of individual birds List of individual bovines List of individual cetaceans List of individual dogs List of oldest dogs List of individual monkeys List of individual
pigs List of w Document 1::: Plant ontology (PO) is a collection of ontologies developed by the Plant Ontology Consortium. These ontologies describe anatomical structures and growth and developmental stages across Viridiplantae. The PO is intended for multiple applications, including genetics, genomics, phenomics, and development, taxonomy and systematics, semantic applications and education. Project Members Oregon State University New York Botanical Garden L. H. Bailey Hortorium at Cornell University Ensembl SoyBase SSWAP SGN Gramene The Arabidopsis Information Resource (TAIR) MaizeGDB University of Missouri at St. Louis Missouri Botanical Garden See also Generic Model Organism Database Open Biomedical Ontologies OBO Foundry Document 2::: History of Animals (, Ton peri ta zoia historion, "Inquiries on Animals"; , "History of Animals") is one of the major texts on biology by the ancient Greek philosopher Aristotle, who had studied at Plato's Academy in Athens. It was written in the fourth century BC; Aristotle died in 322 BC. Generally seen as a pioneering work of zoology, Aristotle frames his text by explaining that he is investigating the what (the existing facts about animals) prior to establishing the why (the causes of these characteristics). The book is thus an attempt to apply philosophy to part of the natural world. Throughout the work, Aristotle seeks to identify differences, both between individuals and between groups. A group is established when it is seen that all members have the same set of distinguishing features; for example, that all birds have feathers, wings, and beaks. This relationship between the birds and their features is recognized as a universal. 
The History of Animals contains many accurate eye-witness observations, in particular of the marine biology around the island of Lesbos, such as that the octopus had colour-changing abilities and a sperm-transferring tentacle, that the young of a dogfish grow inside their mother's body, or that the male of a river catfish guards the eggs after the female has left. Some of these were long considered fanciful before being rediscovered in the nineteenth century. Aristotle has been accused of making errors, but some are due to misinterpretation of his text, and others may have been based on genuine observation. He did however make somewhat uncritical use of evidence from other people, such as travellers and beekeepers. The History of Animals had a powerful influence on zoology for some two thousand years. It continued to be a primary source of knowledge until zoologists in the sixteenth century, such as Conrad Gessner, all influenced by Aristotle, wrote their own studies of the subject. Context Aristotle (384–322 BC) studied at Plat Document 3::: Vertebrate zoology is the biological discipline that consists of the study of Vertebrate animals, i.e., animals with a backbone, such as fish, amphibians, reptiles, birds and mammals. Many natural history museums have departments named Vertebrate Zoology. In some cases whole museums bear this name, e.g. the Museum of Vertebrate Zoology at the University of California, Berkeley. Subdivisions This subdivision of zoology has many further subdivisions, including: Ichthyology - the study of fishes. Mammalogy - the study of mammals. Chiropterology - the study of bats. Primatology - the study of primates. Ornithology - the study of birds. Herpetology - the study of reptiles. Batrachology - the study of amphibians. These divisions are sometimes further divided into more specific specialties. Document 4::: Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. 
With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology. Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. 
In modern times, the biological classification of animals relies on ad The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Select the animal. A. Hydrangea bushes can grow colorful flowers. B. Maple trees have star-shaped leaves. C. Snowy owls eat small animals. D. Rose bushes can grow colorful flowers. Answer:
sciq-7988
multiple_choice
What is the name for substances with a ph above 7?
[ "bases", "nutrient", "protein", "acid" ]
A
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature increases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. 
A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 2::: Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices". This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions. Topic outline The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area: The course is based on and tests six skills, called scientific practices which include: In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions. Exam Students are allowed to use a four-function, scientific, or graphing calculator. The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. 
Both sections are worth 50% of the score. Score distribution Commonly used textbooks Biology, AP Edition by Sylvia Mader (2012, hardcover ) Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, ) Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson ) See also Glossary of biology A.P Bio (TV Show) Document 3::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. 
There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a Document 4::: Advanced Placement (AP) Physics B was a physics course administered by the College Board as part of its Advanced Placement program. It was equivalent to a year-long introductory university course covering Newtonian mechanics, electromagnetism, fluid mechanics, thermal physics, waves, optics, and modern physics. The course was algebra-based and heavily computational; in 2015, it was replaced by the more concept-focused AP Physics 1 and AP Physics 2. Exam The exam consisted of a 70 MCQ section, followed by a 6-7 FRQ section. Each section was 90 minutes and was worth 50% of the final score. The MCQ section banned calculators, while the FRQ allowed calculators and a list of common formulas. Overall, the exam was configured to approximately cover a set percentage of each of the five target categories: Purpose According to the College Board web site, the Physics B course provided "a foundation in physics for students in the life sciences, a pre medical career path, and some applied sciences, as well as other fields not directly related to science." Discontinuation Starting in the 2014–2015 school year, AP Physics B was no longer offered, and AP Physics 1 and AP Physics 2 took its place. Like AP Physics B, both are algebra-based, and both are designed to be taught as year-long courses. Grade distribution The grade distributions for the Physics B scores from 2010 until its discontinuation in 2014 are as follows: The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the name for substances with a ph above 7? 
A. bases B. nutrient C. protein D. acid Answer:
scienceQA-4652
multiple_choice
How long is a raisin?
[ "12 meters", "12 kilometers", "12 millimeters", "12 centimeters" ]
C
The best estimate for the length of a raisin is 12 millimeters. 12 centimeters, 12 meters, and 12 kilometers are all too long.
Relevant Documents: Document 0::: Further Mathematics is the title given to a number of advanced secondary mathematics courses. The term "Higher and Further Mathematics", and the term "Advanced Level Mathematics", may also refer to any of several advanced mathematics courses at many institutions. In the United Kingdom, Further Mathematics describes a course studied in addition to the standard mathematics AS-Level and A-Level courses. In the state of Victoria in Australia, it describes a course delivered as part of the Victorian Certificate of Education (see § Australia (Victoria) for a more detailed explanation). Globally, it describes a course studied in addition to GCE AS-Level and A-Level Mathematics, or one which is delivered as part of the International Baccalaureate Diploma. In other words, more mathematics can also be referred to as part of advanced mathematics, or advanced level math. United Kingdom Background A qualification in Further Mathematics involves studying both pure and applied modules. Whilst the pure modules (formerly known as Pure 4–6 or Core 4–6, now known as Further Pure 1–3, where 4 exists for the AQA board) build on knowledge from the core mathematics modules, the applied modules may start from first principles. The structure of the qualification varies between exam boards. With regard to Mathematics degrees, most universities do not require Further Mathematics, and may incorporate foundation math modules or offer "catch-up" classes covering any additional content. Exceptions are the University of Warwick, the University of Cambridge which requires Further Mathematics to at least AS level; University College London requires or recommends an A2 in Further Maths for its maths courses; Imperial College requires an A in A level Further Maths, while other universities may recommend it or may promise lower offers in return.
Some schools and colleges may not offer Further mathematics, but online resources are available Although the subject has about 60% of its cohort obtainin Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature increases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. 
There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a Document 3::: The Texas Math and Science Coaches Association or TMSCA is an organization for coaches of academic University Interscholastic League teams in Texas middle schools and high schools, specifically those that compete in mathematics and science-related tests. Events There are four events in the TMSCA at both the middle and high school level: Number Sense, General Mathematics, Calculator Applications, and General Science. Number Sense is an 80-question exam that students are given only 10 minutes to solve. Additionally, no scratch work or paper calculations are allowed. These questions range from simple calculations such as 99+98 to more complicated operations such as 1001×1938. Each calculation is able to be done with a certain trick or shortcut that makes the calculations easier. The high school exam includes calculus and other difficult topics in the questions also with the same rules applied as to the middle school version. It is well known that the grading for this event is particularly stringent as errors such as writing over a line or crossing out potential answers are considered as incorrect answers. General Mathematics is a 50-question exam that students are given only 40 minutes to solve. These problems are usually more challenging than questions on the Number Sense test, and the General Mathematics word problems take more thinking to figure out. Every problem correct is worth 5 points, and for every problem incorrect, 2 points are deducted. Tiebreakers are determined by the person that misses the first problem and by percent accuracy. 
Calculator Applications is an 80-question exam that students are given only 30 minutes to solve. This test requires practice on the calculator, knowledge of a few crucial formulas, and much speed and intensity. Memorizing formulas, tips, and tricks will not be enough. In this event, plenty of practice is necessary in order to master the locations of the keys and develop the speed necessary. All correct questions are worth 5 Document 4::: Additional Mathematics is a qualification in mathematics, commonly taken by students in high-school (or GCSE exam takers in the United Kingdom). It features a range of problems set out in a different format and wider content to the standard Mathematics at the same level. Additional Mathematics in Singapore In Singapore, Additional Mathematics is an optional subject offered to pupils in secondary school—specifically those who have an aptitude in Mathematics and are in the Normal (Academic) stream or Express stream. The syllabus covered is more in-depth as compared to Elementary Mathematics, with additional topics including Algebra binomial expansion, proofs in plane geometry, differential calculus and integral calculus. Additional Mathematics is also a prerequisite for students who are intending to offer H2 Mathematics and H2 Further Mathematics at A-level (if they choose to enter a Junior College after secondary school). Students without Additional Mathematics at the 'O' level will usually be offered H1 Mathematics instead. Examination Format The syllabus was updated starting with the 2021 batch of candidates. There are two written papers, each comprising half of the weightage towards the subject. Each paper is 2 hours 15 minutes long and worth 90 marks. Paper 1 has 12 to 14 questions, while Paper 2 has 9 to 11 questions. Generally, Paper 2 would have a graph plotting question based on linear law. 
GCSE Additional Mathematics in Northern Ireland In Northern Ireland, Additional Mathematics was offered as a GCSE subject by the local examination board, CCEA. There were two examination papers: one which tested topics in Pure Mathematics, and one which tested topics in Mechanics and Statistics. It was discontinued in 2014 and replaced with GCSE Further Mathematics—a new qualification whose level exceeds both those offered by GCSE Mathematics, and the analogous qualifications offered in England. Further Maths IGCSE and Additional Maths FSMQ in England Starting from The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. How long is a raisin? A. 12 meters B. 12 kilometers C. 12 millimeters D. 12 centimeters Answer:
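The TMSCA General Mathematics scoring rule quoted in Document 3 above (5 points per correct answer, 2 points deducted per incorrect answer, blanks ignored) can be sketched as a small function; the function name and the example counts are illustrative, not from the source:

```python
def tmsca_general_math_score(correct, incorrect):
    """Score a TMSCA General Mathematics paper: +5 points per correct
    answer, -2 points per incorrect answer; blank answers score 0."""
    if correct < 0 or incorrect < 0:
        raise ValueError("answer counts must be non-negative")
    return 5 * correct - 2 * incorrect

# e.g. 40 correct, 5 incorrect, 5 blank on the 50-question exam
print(tmsca_general_math_score(40, 5))  # 190
```

A perfect paper would score 5 × 50 = 250, and a paper with every question attempted and missed would score −100, so the hedged sketch also shows the score range the rule implies.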
sciq-8988
multiple_choice
Shrimp are an example of what group within the arthropods?
[ "myriapods", "scorpion", "crustaceans", "insect" ]
C
Relevant Documents: Document 0::: Shellfish is a colloquial and fisheries term for exoskeleton-bearing aquatic invertebrates used as food, including various species of molluscs, crustaceans, and echinoderms. Although most kinds of shellfish are harvested from saltwater environments, some are found in freshwater. In addition, a few species of land crabs are eaten, for example Cardisoma guanhumi in the Caribbean. Shellfish are among the most common food allergens. Despite the name, shellfish are not fish. Most shellfish are low on the food chain and eat a diet composed primarily of phytoplankton and zooplankton. Many varieties of shellfish, and crustaceans in particular, are actually closely related to insects and arachnids; crustaceans make up one of the main subphyla of the phylum Arthropoda. Molluscs include cephalopods (squids, octopuses, cuttlefish) and bivalves (clams, oysters), as well as gastropods (aquatic species such as whelks and winkles; land species such as snails and slugs). Molluscs used as a food source by humans include many species of clams, mussels, oysters, winkles, and scallops. Some crustaceans that are commonly eaten are shrimp, lobsters, crayfish, crabs and barnacles. Echinoderms are not as frequently harvested for food as molluscs and crustaceans; however, sea urchin gonads are quite popular in many parts of the world, where the live delicacy is harder to transport. Though some shellfish harvesting has been unsustainable, and shrimp farming has been destructive in some parts of the world, shellfish farming can be important to environmental restoration, by developing reefs, filtering water and eating biomass. Terminology The term "shellfish" is used both broadly and specifically. In common parlance, as in "having shellfish for dinner", it can refer to anything from clams and oysters to lobster and shrimp. 
For regulatory purposes it is often narrowly defined as filter-feeding molluscs such as clams, mussels, and oysters to the exclusion of crustaceans and all else. Althoug Document 1::: Invertebrate zoology is the subdiscipline of zoology that consists of the study of invertebrates, animals without a backbone (a structure which is found only in fish, amphibians, reptiles, birds and mammals). Invertebrates are a vast and very diverse group of animals that includes sponges, echinoderms, tunicates, numerous different phyla of worms, molluscs, arthropods and many additional phyla. Single-celled organisms or protists are usually not included within the same group as invertebrates. Subdivisions Invertebrates represent 97% of all named animal species, and because of that fact, this subdivision of zoology has many further subdivisions, including but not limited to: Arthropodology - the study of arthropods, which includes Arachnology - the study of spiders and other arachnids Entomology - the study of insects Carcinology - the study of crustaceans Myriapodology - the study of centipedes, millipedes, and other myriapods Cnidariology - the study of Cnidaria Helminthology - the study of parasitic worms. Malacology - the study of mollusks, which includes Conchology - the study of Mollusk shells. Limacology - the study of slugs. Teuthology - the study of cephalopods. Invertebrate paleontology - the study of fossil invertebrates These divisions are sometimes further divided into more specific specialties. For example, within arachnology, acarology is the study of mites and ticks; within entomology, lepidoptery is the study of butterflies and moths, myrmecology is the study of ants and so on. Marine invertebrates are all those invertebrates that exist in marine habitats. 
History Early Modern Era In the early modern period starting in the late 16th century, invertebrate zoology saw growth in the number of publications made and improvement in the experimental practices associated with the field. (Insects are one of the most diverse groups of organisms on Earth. They play important roles in ecosystems, including pollination, natural enemies, saprophytes, and Document 2::: Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology. Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago. Historically, Aristotle divided animals into those with blood and those without. 
Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad Document 3::: A shrimp (plural: shrimp (US) or shrimps (UK)) is a crustacean (a form of shellfish) with an elongated body and a primarily swimming mode of locomotion – typically belonging to the Caridea or Dendrobranchiata of the decapod order, although some crustaceans outside of this order are also referred to as "shrimp". More narrow definitions may be restricted to Caridea, to smaller species of either group or to only the marine species. Under a broader definition, shrimp may be synonymous with prawn, covering stalk-eyed swimming crustaceans with long, narrow muscular tails (abdomens), long whiskers (antennae), and slender legs. Any small crustacean which resembles a shrimp tends to be called one. They swim forward by paddling with swimmerets on the underside of their abdomens, although their escape response is typically repeated flicks with the tail driving them backwards very quickly. Crabs and lobsters have strong walking legs, whereas shrimp have thin, fragile legs which they use primarily for perching. Shrimp are widespread and abundant. There are thousands of species adapted to a wide range of habitats. They can be found feeding near the seafloor on most coasts and estuaries, as well as in rivers and lakes. To escape predators, some species flip off the seafloor and dive into the sediment. They usually live from one to seven years. Shrimp are often solitary, though they can form large schools during the spawning season. 
They play important roles in the food chain and are an important food source for larger animals ranging from fish to whales. The muscular tails of many shrimp are edible to humans, and they are widely caught and farmed for human consumption. Commercial shrimp species support an industry worth 50 billion dollars a year, and in 2010 the total commercial production of shrimp was nearly 7 million tonnes. Shrimp farming became more prevalent during the 1980s, particularly in China, and by 2007 the harvest from shrimp farms exceeded the capture of wild shrimp. Document 4::: Pseudoplanktonic organisms are those that attach themselves to planktonic organisms or other floating objects, such as drifting wood, buoyant shells of organisms such as Spirula, or man-made flotsam. Examples include goose barnacles and the bryozoan Jellyella. By themselves these animals cannot float, which contrasts them with true planktonic organisms, such as Velella and the Portuguese Man o' War, which are buoyant. Pseudoplankton are often found in the guts of filtering zooplankters. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Shrimp are an example of what group within the arthropods? A. myriapods B. scorpion C. crustaceans D. insect Answer:
ai2_arc-838
multiple_choice
Which system has layers of smooth muscle tissue that contract to move solid and liquid nutrients and waste through the body?
[ "respiratory", "skeletal", "endocrine", "digestive" ]
D
Relevant Documents: Document 0::: H2.00.04.4.01001: Lymphoid tissue H2.00.05.0.00001: Muscle tissue H2.00.05.1.00001: Smooth muscle tissue H2.00.05.2.00001: Striated muscle tissue H2.00.06.0.00001: Nerve tissue H2.00.06.1.00001: Neuron H2.00.06.2.00001: Synapse H2.00.06.2.00001: Neuroglia h3.01: Bones h3.02: Joints h3.03: Muscles h3.04: Alimentary system h3.05: Respiratory system h3.06: Urinary system h3.07: Genital system h3.08: Document 1::: In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from cells of the same type that act together in a function. Tissues of different types combine to form an organ which has a specific function. The intestinal wall for example is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system. An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs. The number of organs in any organism depends on the definition used. 
By one widely adopted definition, 79 organs have been identified in the human body. Animals Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. The sam Document 2::: The gastrointestinal wall of the gastrointestinal tract is made up of four layers of specialised tissue. From the inner cavity of the gut (the lumen) outwards, these are: Mucosa Submucosa Muscular layer Serosa or adventitia The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the lumen of the tract and comes into direct contact with digested food (chyme). The mucosa itself is made up of three layers: the epithelium, where most digestive, absorptive and secretory processes occur; the lamina propria, a layer of connective tissue, and the muscularis mucosae, a thin layer of smooth muscle. The submucosa contains nerves including the submucous plexus (also called Meissner's plexus), blood vessels and elastic fibres with collagen, that stretches with increased capacity but maintains the shape of the intestine. The muscular layer surrounds the submucosa. It comprises layers of smooth muscle in longitudinal and circular orientation that also helps with continued bowel movements (peristalsis) and the movement of digested material out of and along the gut. In between the two layers of muscle lies the myenteric plexus (also called Auerbach's plexus). The serosa/adventitia are the final layers. These are made up of loose connective tissue and coated in mucus so as to prevent any friction damage from the intestine rubbing against other tissue. 
The serosa is present if the tissue is within the peritoneum, and the adventitia if the tissue is retroperitoneal. Structure When viewed under the microscope, the gastrointestinal wall has a consistent general form, but with certain parts differing along its course. Mucosa The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the cavity (lumen) of the tract and comes into direct contact with digested food (chyme). The mucosa is made up of three layers: The epithelium is the innermost layer. It is where most digestive, absorptive and secretory processes occur. The lamina propr Document 3::: The muscular layer (muscular coat, muscular fibers, muscularis propria, muscularis externa) is a region of muscle in many organs in the vertebrate body, adjacent to the submucosa. It is responsible for gut movement such as peristalsis. The Latin, tunica muscularis, may also be used. Structure It usually has two layers of smooth muscle: inner and "circular" outer and "longitudinal" However, there are some exceptions to this pattern. In the stomach there are three layers to the muscular layer. Stomach contains an additional oblique muscle layer just interior to circular muscle layer. In the upper esophagus, part of the externa is skeletal muscle, rather than smooth muscle. In the vas deferens of the spermatic cord, there are three layers: inner longitudinal, middle circular, and outer longitudinal. In the ureter the smooth muscle orientation is opposite that of the GI tract. There is an inner longitudinal and an outer circular layer. The inner layer of the muscularis externa forms a sphincter at two locations of the gastrointestinal tract: in the pylorus of the stomach, it forms the pyloric sphincter. in the anal canal, it forms the internal anal sphincter. In the colon, the fibres of the external longitudinal smooth muscle layer are collected into three longitudinal bands, the teniae coli. 
The thickest muscularis layer is found in the stomach (triple layered) and thus maximum peristalsis occurs in the stomach. The thinnest muscularis layer in the alimentary canal is found in the rectum, where minimum peristalsis occurs. Function The muscularis layer is responsible for the peristaltic movements and segmental contractions in the alimentary canal. The Auerbach's nerve plexus (myenteric nerve plexus) is found between the longitudinal and circular muscle layers; it starts the muscle contractions that initiate peristalsis. 
Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons. Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroid and adrenals, i.e., adrenal glands. Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream; includes the lymph and the nodes and vessels. The lymphatic system includes functions including immune responses and development of antibodies. Immune system: protects the organism from The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which system has layers of smooth muscle tissue that contract to move solid and liquid nutrients and waste through the body? A. respiratory B. skeletal C. endocrine D. digestive Answer:
sciq-1073
multiple_choice
The energy changes in what reactions are enormous compared with those of even the most energetic chemical reactions, and they result in a measurable change of mass?
[ "molecular reaction", "nuclear reactions", "methane combustion", "metabolic reaction" ]
B
Relevant Documents: Document 0::: An elementary reaction is a chemical reaction in which one or more chemical species react directly to form products in a single reaction step and with a single transition state. In practice, a reaction is assumed to be elementary if no reaction intermediates have been detected or need to be postulated to describe the reaction on a molecular scale. An apparently elementary reaction may in fact be a stepwise reaction, i.e. a complicated sequence of chemical reactions, with reaction intermediates of variable lifetimes. In a unimolecular elementary reaction, a molecule A dissociates or isomerises to form the product(s): A → products. At constant temperature, the rate of such a reaction is proportional to the concentration of the species A: rate = k[A]. In a bimolecular elementary reaction, two atoms, molecules, ions or radicals, A and B, react together to form the product(s): A + B → products. The rate of such a reaction, at constant temperature, is proportional to the product of the concentrations of the species A and B: rate = k[A][B]. The rate expression for an elementary bimolecular reaction is sometimes referred to as the Law of Mass Action as it was first proposed by Guldberg and Waage in 1864. An example of this type of reaction is a cycloaddition reaction. This rate expression can be derived from first principles by using collision theory for ideal gases. For the case of dilute fluids equivalent results have been obtained from simple probabilistic arguments. According to collision theory the probability of three chemical species reacting simultaneously with each other in a termolecular elementary reaction is negligible. Hence such termolecular reactions are commonly referred to as non-elementary reactions and can be broken down into a more fundamental set of bimolecular reactions, in agreement with the law of mass action. It is not always possible to derive overall reaction schemes, but solutions based on rate equations are often possible in terms of steady-state or Michaelis-Menten approximations. 
Document 1::: Activation energy asymptotics (AEA), also known as large activation energy asymptotics, is an asymptotic analysis used in the combustion field utilizing the fact that the reaction rate is extremely sensitive to temperature changes due to the large activation energy of the chemical reaction. History The techniques were pioneered by the Russian scientists Yakov Borisovich Zel'dovich, David A. Frank-Kamenetskii and co-workers in the 1930s, in their study of premixed flames and thermal explosions (Frank-Kamenetskii theory), but did not become popular among Western scientists until the 1970s. In the early 1970s, due to the pioneering work of Williams B. Bush, Francis E. Fendell, Forman A. Williams, Amable Liñán and John F. Clarke, it became popular in the Western community and since then has been widely used to explain more complicated problems in combustion. Method overview In combustion processes, the reaction rate depends on temperature in the Arrhenius form ω ∝ exp(−E_a/(R T)), where E_a is the activation energy and R is the universal gas constant. In general, the condition E_a/(R T_b) ≫ 1 is satisfied, where T_b is the burnt gas temperature. This condition forms the basis for activation energy asymptotics. Denoting T_u for the unburnt gas temperature, one can define the Zel'dovich number β = E_a(T_b − T_u)/(R T_b²) and the heat release parameter α = (T_b − T_u)/T_b. In addition, if we define a non-dimensional temperature θ = (T − T_u)/(T_b − T_u), approaching zero in the unburnt region and approaching unity in the burnt gas region, then the ratio of the reaction rate at any temperature to the reaction rate at the burnt gas temperature is given by exp[−β(1 − θ)/(1 − α(1 − θ))]. Now in the limit of β → ∞ (large activation energy) with α fixed, the reaction rate is exponentially small, i.e. of order e^(−β), and negligible everywhere, but non-negligible when 1 − θ = O(1/β). In other words, the reaction rate is negligible everywhere, except in a small region very close to the burnt gas temperature, where θ = 1 − O(1/β). 
Thus, in solving the conservation equations, one identifies two different regimes, at leading order, Outer convective-diffusive zone I Document 2::: In physical chemistry and chemical engineering, extent of reaction is a quantity that measures the extent to which the reaction has proceeded. Often, it refers specifically to the value of the extent of reaction when equilibrium has been reached. It is usually denoted by the Greek letter ξ. The extent of reaction is usually defined so that it has units of amount (moles). It was introduced by the Belgian scientist Théophile de Donder. Definition Consider the reaction A ⇌ 2 B + 3 C Suppose an infinitesimal amount dξ of the reactant A changes into B and C. This requires that all three mole numbers change according to the stoichiometry of the reaction, but they will not change by the same amounts. However, the extent of reaction can be used to describe the changes on a common footing as needed. The change of the number of moles of A can be represented by the equation dn_A = −dξ, the change of B is dn_B = 2 dξ, and the change of C is dn_C = 3 dξ. The change in the extent of reaction is then defined as dξ = dn_i/ν_i, where n_i denotes the number of moles of the i-th reactant or product and ν_i is the stoichiometric number of the i-th reactant or product. Although less common, we see from this expression that since the stoichiometric number can either be considered to be dimensionless or to have units of moles, conversely the extent of reaction can either be considered to have units of moles or to be a unitless mole fraction. The extent of reaction represents the amount of progress made towards equilibrium in a chemical reaction. Considering finite changes instead of infinitesimal changes, one can write the equation for the extent of a reaction as Δξ = Δn_i/ν_i. The extent of a reaction is generally defined as zero at the beginning of the reaction. Thus the change of ξ is the extent itself. 
Assuming that the system has come to equilibrium, Although in the example above the extent of reaction was positive since the system shifted in the forward direction, this usage implies that in general the extent of reaction can be positive or negative, Document 3::: Activation, in chemistry and biology, is the process whereby something is prepared or excited for a subsequent reaction. Chemistry In chemistry, "activation" refers to the reversible transition of a molecule into a nearly identical chemical or physical state, with the defining characteristic being that this resultant state exhibits an increased propensity to undergo a specified chemical reaction. Thus, activation is conceptually the opposite of protection, in which the resulting state exhibits a decreased propensity to undergo a certain reaction. The energy of activation specifies the amount of free energy the reactants must possess (in addition to their rest energy) in order to initiate their conversion into corresponding products—that is, in order to reach the transition state for the reaction. The energy needed for activation can be quite small, and often it is provided by the natural random thermal fluctuations of the molecules themselves (i.e. without any external sources of energy). The branch of chemistry that deals with this topic is called chemical kinetics. Biology Biochemistry In biochemistry, activation, specifically called bioactivation, is where enzymes or other biologically active molecules acquire the ability to perform their biological function, such as inactive proenzymes being converted into active enzymes that are able to catalyze their substrates' reactions into products. Bioactivation may also refer to the process where inactive prodrugs are converted into their active metabolites, or the toxication of protoxins into actual toxins. An enzyme may be reversibly or irreversibly bioactivated. 
A major mechanism of irreversible bioactivation is where a piece of a protein is cut off by cleavage, producing an enzyme that will then stay active. A major mechanism of reversible bioactivation is substrate presentation where an enzyme translocates near its substrate. Another reversible reaction is where a cofactor binds to an enzyme, which then rem Document 4::: In chemistry and particularly biochemistry, an energy-rich species (usually energy-rich molecule) or high-energy species (usually high-energy molecule) is a chemical species which reacts, potentially with other species found in the environment, to release chemical energy. In particular, the term is often used for: adenosine triphosphate (ATP) and similar molecules called high-energy phosphates, which release inorganic phosphate into the environment in an exothermic reaction with water: ATP + H2O → ADP + Pi ΔG°' = −30.5 kJ/mol (−7.3 kcal/mol) fuels such as hydrocarbons, carbohydrates, lipids, proteins, and other organic molecules which react with oxygen in the environment to ultimately form carbon dioxide, water, and sometimes nitrogen, sulfates, and phosphates molecular hydrogen monatomic oxygen, ozone, hydrogen peroxide, singlet oxygen and other metastable or unstable species which spontaneously react without further reactants in particular, the vast majority of free radicals explosives such as nitroglycerin and other substances which react exothermically without requiring a second reactant metals or metal ions which can be oxidized to release energy This is contrasted to species that are either part of the environment (this sometimes includes diatomic triplet oxygen) or do not react with the environment (such as many metal oxides or calcium carbonate); those species are not considered energy-rich or high-energy species. Alternative definitions The term is often used without a definition. 
Some authors define the term "high-energy" to be equivalent to "chemically unstable", while others, such as the Great Soviet Encyclopedia, reserve the term for high-energy phosphates; that encyclopedia defines the term "high-energy compounds" to refer exclusively to those. The IUPAC glossary of terms used in ecotoxicology defines a primary producer as an "organism capable of using the energy derived from light or a chemical substance in order to manufacture energy-rich organic compou The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The energy changes in what reactions are enormous compared with those of even the most energetic chemical reactions, and they result in a measurable change of mass? A. molecular reaction B. nuclear reactions C. methane combustion D. metabolic reaction Answer:
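The extent-of-reaction bookkeeping described in Document 2 above (for A ⇌ 2 B + 3 C, each mole-number change divided by its stoichiometric number gives the same Δξ) can be checked with a minimal sketch; the mole numbers and the helper name extent_changes are illustrative, not from the source:

```python
# Stoichiometric numbers nu_i for the reaction A <=> 2 B + 3 C
# (negative for reactants, positive for products).
nu = {"A": -1, "B": 2, "C": 3}

def extent_changes(delta_n):
    """Return delta_xi = delta_n_i / nu_i for each species; for
    mole-number changes consistent with the stoichiometry, every
    species yields the same extent of reaction."""
    return {species: delta_n[species] / nu[species] for species in nu}

# Hypothetical changes: 2 mol of A consumed, 4 mol of B and 6 mol of C
# produced -- consistent with the stoichiometry, so delta_xi = 2 for all.
changes = extent_changes({"A": -2.0, "B": 4.0, "C": 6.0})
print(changes)  # {'A': 2.0, 'B': 2.0, 'C': 2.0}
```

If the three quotients disagree, the supplied mole-number changes are inconsistent with the stated stoichiometry, which is exactly the "common footing" role of ξ in the excerpt.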
sciq-9401
multiple_choice
What does parasitic mean?
[ "symbiotic", "welcome guest", "lives in host", "mutual benefit" ]
C
Relevant Documents: Document 0::: In experimental physics, and particularly in high energy and nuclear physics, a parasite experiment or parasitic experiment is an experiment performed using a big particle accelerator or other large facility, without interfering with the scheduled experiments of that facility. This allows the experimenters to proceed without the usual competitive time scheduling procedure. These experiments may be instrument tests or experiments whose scientific interest has not been clearly established. Document 1::: In biology and medicine, a host is a larger organism that harbours a smaller organism; whether a parasitic, a mutualistic, or a commensalist guest (symbiont). The guest is typically provided with nourishment and shelter. Examples include animals playing host to parasitic worms (e.g. nematodes), cells harbouring pathogenic (disease-causing) viruses, or a bean plant hosting mutualistic (helpful) nitrogen-fixing bacteria. More specifically in botany, a host plant supplies food resources to micropredators, which have an evolutionarily stable relationship with their hosts similar to ectoparasitism. The host range is the collection of hosts that an organism can use as a partner. Symbiosis Symbiosis spans a wide variety of possible relationships between organisms, differing in their permanence and their effects on the two parties. If one of the partners in an association is much larger than the other, it is generally known as the host. In parasitism, the parasite benefits at the host's expense. In commensalism, the two live together without harming each other, while in mutualism, both parties benefit. Most parasites are only parasitic for part of their life cycle. By comparing parasites with their closest free-living relatives, parasitism has been shown to have evolved on at least 233 separate occasions. 
Some organisms live in close association with a host and only become parasitic when environmental conditions deteriorate. A parasite may have a long-term relationship with its host, as is the case with all endoparasites. The guest seeks out the host and obtains food or another service from it, but does not usually kill it. In contrast, a parasitoid spends a large part of its life within or on a single host, ultimately causing the host's death, with some of the strategies involved verging on predation. Generally, the host is kept alive until the parasitoid is fully grown and ready to pass on to its next life stage. A guest's relationship with its host may be intermitten Document 2::: Parasitic chromosomes are "selfish" chromosomes that propagate throughout cell divisions, even if they confer no benefit to the overall organism's survival. Parasitic chromosomes can persist even if slightly detrimental to survival, as is characteristic of some selfish genetic elements. Parasitic chromosomes are often B chromosomes, such that they are not necessarily present in the majority of the species population and are not needed for basic life functions, in contrast to A chromosomes. Parasitic chromosomes are classified as selfish genetic elements. Parasitic chromosomes, if detrimental to an organism's survival, often are selected against by natural selection over time, but if the chromosome is able to act like a selfish DNA element, it can spread throughout a population. An example of a parasitic chromosome is the b24 chromosome in grasshoppers. Document 3::: Parasitism is a close relationship between species, where one organism, the parasite, lives on or inside another organism, the host, causing it some harm, and is adapted structurally to this way of life. The entomologist E. O. Wilson characterised parasites as "predators that eat prey in units of less than one". 
Parasites include single-celled protozoans such as the agents of malaria, sleeping sickness, and amoebic dysentery; animals such as hookworms, lice, mosquitoes, and vampire bats; fungi such as honey fungus and the agents of ringworm; and plants such as mistletoe, dodder, and the broomrapes. There are six major parasitic strategies of exploitation of animal hosts, namely parasitic castration, directly transmitted parasitism (by contact), trophically transmitted parasitism (by being eaten), vector-transmitted parasitism, parasitoidism, and micropredation. One major axis of classification concerns invasiveness: an endoparasite lives inside the host's body; an ectoparasite lives outside, on the host's surface. Like predation, parasitism is a type of consumer–resource interaction, but unlike predators, parasites, with the exception of parasitoids, are typically much smaller than their hosts, do not kill them, and often live in or on their hosts for an extended period. Parasites of animals are highly specialised, and reproduce at a faster rate than their hosts. Classic examples include interactions between vertebrate hosts and tapeworms, flukes, the malaria-causing Plasmodium species, and fleas. Parasites reduce host fitness by general or specialised pathology, from parasitic castration to modification of host behaviour. Parasites increase their own fitness by exploiting hosts for resources necessary for their survival, in particular by feeding on them and by using intermediate (secondary) hosts to assist in their transmission from one definitive (primary) host to another. Although parasitism is often unambiguous, it is part of a spectrum of interactions between
At one end of the continuum lies obligate mutualism where both host and symbiont benefit from the interaction and are dependent on it for survival. At the other end of the continuum highly parasitic interactions can occur, where one member gains a fitness benefit at the expense of the others survival. Between these extremes many different types of interaction are possible. The degree of change between mutualism or parasitism varies depending on the availability of resources, where there is environmental stress generated by few resources, symbiotic relationships are formed while in environments where there is an excess of resources, biological interactions turn to competition and parasitism. Classically the transmission mode of the symbiont can also be important in predicting where on the mutualism-parasitism-continuum an interaction will sit. Symbionts that are vertically transmitted (inherited symbionts) frequently occupy mutualism space on the continuum, this is due to the aligned reproductive interests between host and symbiont that are generated under vertical transmission. In some systems increases in the relative contribution of horizontal transmission can drive selection for parasitism. Studies of this hypothesis have focused on host-symbiont models of plants and fungi, and also of animals and microbes. See also Red King Hypothesis Red Queen Hypothesis Black Queen Hypothesis Biological interaction The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What does parasitic mean? A. symbiotic B. welcome guest C. lives in host D. mutual benefit Answer:
sciq-10118
multiple_choice
Phagocytosis, pinocytosis, and receptor-mediated endocytosis are the three types of what?
[ "nanoparticles", "endocytosis", "modulators", "mitosis" ]
B
Relevant Documents: Document 0::: In cellular biology, pinocytosis, otherwise known as fluid endocytosis and bulk-phase pinocytosis, is a mode of endocytosis in which small molecules dissolved in extracellular fluid are brought into the cell through an invagination of the cell membrane, resulting in their containment within a small vesicle inside the cell. These pinocytotic vesicles then typically fuse with early endosomes to hydrolyze (break down) the particles. Pinocytosis is variably subdivided into categories depending on the molecular mechanism and the fate of the internalized molecules. Function In humans, this process occurs primarily for absorption of fat droplets. In endocytosis the cell plasma membrane extends and folds around desired extracellular material, forming a pouch that pinches off creating an internalized vesicle. The invaginated pinocytosis vesicles are much smaller than those generated by phagocytosis. The vesicles eventually fuse with the lysosome, whereupon the vesicle contents are digested. Pinocytosis involves a considerable investment of cellular energy in the form of ATP. Pinocytosis and ATP Pinocytosis is used primarily for clearing extracellular fluids (ECF) and as part of immune surveillance. In contrast to phagocytosis, it generates very small amounts of ATP from the wastes of alternative substances such as lipids (fat). Unlike receptor-mediated endocytosis, pinocytosis is nonspecific in the substances that it transports: the cell takes in surrounding fluids, including all solutes present. Etymology and pronunciation The word pinocytosis () uses combining forms of pino- + cyto- + -osis, all Neo-Latin from Greek, reflecting píno, to drink, and cytosis. The term was proposed by W. H. Lewis in 1931. Non-specific, adsorptive pinocytosis Non-specific, adsorptive pinocytosis is a form of endocytosis, a process in which small particles are taken in by a cell by splitting off small vesicles from the cell membrane.
Cationic proteins bind to the negative cell surface and Document 1::: -Cytosis is a suffix that either refers to certain aspects of cells, i.e. a cellular process or phenomenon, or sometimes refers to a predominance of certain types of cells. It essentially means "of the cell". Sometimes it may be shortened to -osis (necrosis, apoptosis) and may be related to some of the processes ending with -esis (e.g. diapedesis, or emperipolesis, cytokinesis) or similar suffixes. There are three main types of cytosis: endocytosis (into the cell), exocytosis (out of the cell), and transcytosis (through the cell, in and out). Etymology and pronunciation The word cytosis () uses combining forms of cyto- and -osis, reflecting a cellular process. The term was coined by Novikoff in 1961. Processes related to subcellular entry or exit Endocytosis Endocytosis is when a cell absorbs a molecule, such as a protein, from outside the cell by engulfing it with the cell membrane. It is used by most cells, because many critical substances are large polar molecules that cannot pass through the cell membrane. The two major types of endocytosis are pinocytosis and phagocytosis. Pinocytosis Pinocytosis, also known as cell drinking, is the absorption of small aqueous particles along with the membrane receptors that recognize them. It is an example of fluid phase endocytosis and is usually a continuous process within the cell. The particles are absorbed through the use of clathrin-coated pits. These clathrin-coated pits are short lived and serve only to form a vesicle for transfer of particles to the lysosome. The clathrin-coated pit invaginates into the cytosol and forms a clathrin-coated vesicle. The clathrin proteins will then dissociate. What is left is known as an early endosome. The early endosome merges with a late endosome. This is the vesicle that allows the particles that were endocytosed to be transported into the lysosome.
Here there are hydrolytic enzymes that will degrade the contents of the late endosome. Sometimes, rather than being degraded, the receptors t Document 2::: Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure. General characteristics There are two types of cells: prokaryotes and eukaryotes. Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles. Prokaryotes Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement). Eukaryotes Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. 
Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str Document 3::: White blood cells, also called leukocytes or immune cells also called immunocytes, are cells of the immune system that are involved in protecting the body against both infectious disease and foreign invaders. White blood cells include three main subtypes; granulocytes, lymphocytes and monocytes. All white blood cells are produced and derived from multipotent cells in the bone marrow known as hematopoietic stem cells. Leukocytes are found throughout the body, including the blood and lymphatic system. All white blood cells have nuclei, which distinguishes them from the other blood cells, the anucleated red blood cells (RBCs) and platelets. The different white blood cells are usually classified by cell lineage (myeloid cells or lymphoid cells). White blood cells are part of the body's immune system. They help the body fight infection and other diseases. Types of white blood cells are granulocytes (neutrophils, eosinophils, and basophils), and agranulocytes (monocytes, and lymphocytes (T cells and B cells)). Myeloid cells (myelocytes) include neutrophils, eosinophils, mast cells, basophils, and monocytes. Monocytes are further subdivided into dendritic cells and macrophages. Monocytes, macrophages, and neutrophils are phagocytic. Lymphoid cells (lymphocytes) include T cells (subdivided into helper T cells, memory T cells, cytotoxic T cells), B cells (subdivided into plasma cells and memory B cells), and natural killer cells. Historically, white blood cells were classified by their physical characteristics (granulocytes and agranulocytes), but this classification system is less frequently used now. 
Produced in the bone marrow, white blood cells defend the body against infections and disease. An excess of white blood cells is usually due to infection or inflammation. Less commonly, a high white blood cell count could indicate certain blood cancers or bone marrow disorders. The number of leukocytes in the blood is often an indicator of disease, and thus the white blood Document 4::: Trans-endocytosis is the biological process where material created in one cell undergoes endocytosis (enters) into another cell. If the material is large enough, this can be observed using an electron microscope. Trans-endocytosis from neurons to glia has been observed using time-lapse microscopy. Trans-endocytosis also applies to molecules. For example, this process is involved when a part of the protein Notch is cleaved off and undergoes endocytosis into its neighboring cell. Without Notch trans-endocytosis, there would be too many neurons in a developing embryo. Trans-endocytosis is also involved in cell movement when the protein ephrin is bound by its receptor from a neighboring cell. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Phagocytosis, pinocytosis, and receptor-mediated endocytosis are the three types of what? A. nanoparticles B. endocytosis C. modulators D. mitosis Answer:
ai2_arc-80
multiple_choice
An ice cube placed in sunlight melts quickly. Which BEST explains this event?
[ "The Sun is far away.", "The Sun makes heat.", "The ice cube is a solid.", "The ice cube looks clear." ]
B
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: (a) increases (b) decreases (c) stays the same (d) Impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. 
Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 2::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. 
The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with Document 3::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. 
The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 4::: Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women. The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development. Current status of girls and women in STEM education Overall trends in STEM education Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. 
Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle. Learning achievement in STEM education Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. An ice cube placed in sunlight melts quickly. Which BEST explains this event? A. The Sun is far away. B. The Sun makes heat. C. The ice cube is a solid. D. The ice cube looks clear. Answer:
sciq-1602
multiple_choice
What parts of comets make them easy to see?
[ "craters and tails", "arcs and tails", "novas and tails", "comas and tails" ]
D
Relevant Documents: Document 0::: 101P/Chernykh back to main list This is a list of (2 entries) with all its cometary fragments listed at JPL's SBDB (see ). 128P/Shoemaker–Holt back to main list This is a list of (3 entries) with all its cometar
This is a list of (68 entries) with all its cometary fragments listed at JPL's SBDB (see ). 101P/Chernykh back to main list This is a list of (2 entries) with all its cometary fragments listed at JPL's SBDB (see ). 128P/Shoemaker–Holt back to main list Document 2::: A lost comet is one which was not detected during its most recent perihelion passage. This generally happens when data is insufficient to reliably calculate the comet's location or if the solar elongation is unfavorable near perihelion passage. The D/ designation is used for a periodic comet that no longer exists or is deemed to have disappeared. Lost comets can be compared to lost asteroids (lost minor planets), although calculation of comet orbits differs because of nongravitational forces, such as emission of jets of gas from the nucleus. Some astronomers have specialized in this area, such as Brian G. Marsden, who successfully predicted the 1992 return of the once-lost periodic comet Swift–Tuttle. Overview Loss There are a number of reasons why a comet might be missed by astronomers during subsequent apparitions. Firstly, cometary orbits may be perturbed by interaction with the giant planets, such as Jupiter. This, along with nongravitational forces, can result in changes to the date of perihelion. Alternatively, it is possible that the interaction of the planets with a comet can move its orbit too far from the Earth to be seen or even eject it from the Solar System, as is believed to have happened in the case of Lexell's Comet. As some comets periodically undergo "outbursts" or flares in brightness, it may be possible for an intrinsically faint comet to be discovered during an outburst and subsequently lost. Comets can also run out of volatiles. Eventually most of the volatile material contained in a comet nucleus evaporates away, and the comet becomes a small, dark, inert lump of rock or rubble, an extinct comet that can resemble an asteroid (see Comets § Fate of comets). 
This may have occurred in the case of 5D/Brorsen, which was considered by Marsden to have probably "faded out of existence" in the late 19th century. Comets are in some cases known to have disintegrated during their perihelion passage, or at other points during their orbit. The best-know Document 3::: This is a list of comets designated with X/ prefix. The majority of these comets were discovered before the invention of the telescope in 1610, and as such there was nobody to plot the positions of the comets to a high enough precision to generate any meaningful orbit. Later comets, observed in the 17th century or later, either did not have enough observations, sometimes as few as one or two, or the comet disintegrated or moved out of a favorable location in the sky before it was possible to make more observations of it. Document 4::: The following tables list all minor planets and comets that have been visited by robotic spacecraft. List of minor planets visited by spacecraft A total of 17 minor planets (asteroids, dwarf planets, and Kuiper belt objects) have been visited by space probes. Moons (not directly orbiting the Sun) and planets are not minor planets and thus are not included in the table below. Incidental flybys In addition to the above listed objects, four asteroids have been imaged by spacecraft at distances too large to resolve features (over 100,000 km), and are labeled as such. List of comets visited by spacecraft {| class="wikitable sortable" |- ! colspan=4 style="background-color:#D4E2FC;" | Comet ! colspan=5 style="background-color:#FFFF99;" | Space probe |- ! rowspan=2 style="background-color:#edf3fe;" width=110 | Name ! rowspan=2 style="background-color:#edf3fe;" class="unsortable"| Image ! rowspan=2 style="background-color:#edf3fe; font-weight: normal;" | Dimensions(km)(a) ! rowspan=2 style="background-color:#edf3fe;" width=70 | Discoveryyear ! rowspan=2 style="background-color:#ffffcc;" | Name ! 
colspan=3 style="background-color:#ffffcc;"| Closest approach ! rowspan=2 style="background-color:#ffffcc;" class="unsortable"| Remarks |- ! width=60 style="background-color:#ffffcc;" | year ! width=60 style="background-color:#ffffcc;" | in km ! width=60 style="background-color:#ffffcc; font-weight: normal;" | in radii(b) |- | 21P/Giacobini–Zinner | bgcolor=#334d4c | | align=center | 2 | align=center | 1900 | ICE | align=center | 1985 | align=right | 7,800 | align=right | 7,800 | first flyby of a comet The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What parts of comets make them easy to see? A. craters and tails B. arcs and tails C. novas and tails D. comas and tails Answer:
sciq-5509
multiple_choice
Carnivores are animals that eat other animals. The word carnivore is derived from Latin and literally means what?
[ "amount eater", "meat eater", "thick eater", "leaf eater" ]
B
Relevant Documents: Document 0::: Consumer–resource interactions are the core motif of ecological food chains or food webs, and are an umbrella term for a variety of more specialized types of biological species interactions including prey-predator (see predation), host-parasite (see parasitism), plant-herbivore and victim-exploiter systems. These kinds of interactions have been studied and modeled by population ecologists for nearly a century. Species at the bottom of the food chain, such as algae and other autotrophs, consume non-biological resources, such as minerals and nutrients of various kinds, and they derive their energy from light (photons) or chemical sources. Species higher up in the food chain survive by consuming other species and can be classified by what they eat and how they obtain or find their food. Classification of consumer types The standard categorization Various terms have arisen to define consumers by what they eat, such as meat-eating carnivores, fish-eating piscivores, insect-eating insectivores, plant-eating herbivores, seed-eating granivores, and fruit-eating frugivores; omnivores are both meat eaters and plant eaters. An extensive classification of consumer categories based on a list of feeding behaviors exists. The Getz categorization Another way of categorizing consumers, proposed by South African American ecologist Wayne Getz, is based on a biomass transformation web (BTW) formulation that organizes resources into five components: live and dead animal, live and dead plant, and particulate (i.e. broken down plant and animal) matter. It also distinguishes consumers that gather their resources by moving across landscapes from those that mine their resources by becoming sessile once they have located a stock of resources large enough for them to feed on during completion of a full life history stage. In Getz's scheme, words for miners are of Greek etymology and words for gatherers are of Latin etymology.
Thus a bestivore, such as a cat, preys on live animal Document 1::: An omnivore () is an animal that has the ability to eat and survive on both plant and animal matter. Obtaining energy and nutrients from plant and animal matter, omnivores digest carbohydrates, protein, fat, and fiber, and metabolize the nutrients and energy of the sources absorbed. Often, they have the ability to incorporate food sources such as algae, fungi, and bacteria into their diet. Omnivores come from diverse backgrounds that often independently evolved sophisticated consumption capabilities. For instance, dogs evolved from primarily carnivorous organisms (Carnivora) while pigs evolved from primarily herbivorous organisms (Artiodactyla). Despite this, physical characteristics such as tooth morphology may be reliable indicators of diet in mammals, with such morphological adaptation having been observed in bears. The variety of different animals that are classified as omnivores can be placed into further sub-categories depending on their feeding behaviors. Frugivores include cassowaries, orangutans and grey parrots; insectivores include swallows and pink fairy armadillos; granivores include large ground finches and mice. All of these animals are omnivores, yet still fall into special niches in terms of feeding behavior and preferred foods. Being omnivores gives these animals more food security in stressful times or makes possible living in less consistent environments. Etymology and definitions The word omnivore derives from Latin omnis 'all' and vora, from vorare 'to eat or devour', having been coined by the French and later adopted by the English in the 1800s. Traditionally the definition for omnivory was entirely behavioral by means of simply "including both animal and vegetable tissue in the diet." 
In more recent times, with the advent of advanced technological capabilities in fields like gastroenterology, biologists have formulated a standardized variation of omnivore used for labeling a species' actual ability to obtain energy and nutrients from ma Document 2::: A herbivore is an animal anatomically and physiologically adapted to eating plant material, for example foliage or marine algae, for the main component of its diet. As a result of their plant diet, herbivorous animals typically have mouthparts adapted to rasping or grinding. Horses and other herbivores have wide flat teeth that are adapted to grinding grass, tree bark, and other tough plant material. A large percentage of herbivores have mutualistic gut flora that help them digest plant matter, which is more difficult to digest than animal prey. This flora is made up of cellulose-digesting protozoans or bacteria. Etymology Herbivore is the anglicized form of a modern Latin coinage, herbivora, cited in Charles Lyell's 1830 Principles of Geology. Richard Owen employed the anglicized term in an 1854 work on fossil teeth and skeletons. Herbivora is derived from Latin herba 'small plant, herb' and vora, from vorare 'to eat, devour'. Definition and related terms Herbivory is a form of consumption in which an organism principally eats autotrophs such as plants, algae and photosynthesizing bacteria. More generally, organisms that feed on autotrophs in general are known as primary consumers. Herbivory is usually limited to animals that eat plants. Insect herbivory can cause a variety of physical and metabolic alterations in the way the host plant interacts with itself and other surrounding biotic factors. Fungi, bacteria, and protists that feed on living plants are usually termed plant pathogens (plant diseases), while fungi and microbes that feed on dead plants are described as saprotrophs. Flowering plants that obtain nutrition from other living plants are usually termed parasitic plants. 
There is, however, no single exclusive and definitive ecological classification of consumption patterns; each textbook has its own variations on the theme. Evolution of herbivory The understanding of herbivory in geological time comes from three sources: fossilized plants, which may Document 3::: Feeding is the process by which organisms, typically animals, obtain food. Terminology often uses either the suffixes -vore, -vory, or -vorous from Latin vorare, meaning "to devour", or -phage, -phagy, or -phagous from Greek φαγεῖν (phagein), meaning "to eat". Evolutionary history The evolution of feeding is varied with some feeding strategies evolving several times in independent lineages. In terrestrial vertebrates, the earliest forms were large amphibious piscivores 400 million years ago. While amphibians continued to feed on fish and later insects, reptiles began exploring two new food types, other tetrapods (carnivory), and later, plants (herbivory). Carnivory was a natural transition from insectivory for medium and large tetrapods, requiring minimal adaptation (in contrast, a complex set of adaptations was necessary for feeding on highly fibrous plant materials). Evolutionary adaptations The specialization of organisms towards specific food sources is one of the major causes of evolution of form and function, such as: mouth parts and teeth, such as in whales, vampire bats, leeches, mosquitos, predatory animals such as felines and fishes, etc. distinct forms of beaks in birds, such as in hawks, woodpeckers, pelicans, hummingbirds, parrots, kingfishers, etc. specialized claws and other appendages, for apprehending or killing (including fingers in primates) changes in body colour for facilitating camouflage, disguise, setting up traps for prey, etc.
changes in the digestive system, such as the system of stomachs of herbivores, commensalism and symbiosis Classification By mode of ingestion There are many modes of feeding that animals exhibit, including: Filter feeding: obtaining nutrients from particles suspended in water Deposit feeding: obtaining nutrients from particles suspended in soil Fluid feeding: obtaining nutrients by consuming other organisms' fluids Bulk feeding: obtaining nutrients by eating all of an organism. Ram feeding and suction feeding: in Document 4::: A graminivore is a herbivorous animal that feeds primarily on grass, specifically "true" grasses, plants of the family Poaceae (also known as Gramineae). Graminivory is a form of grazing. These herbivorous animals have digestive systems that are adapted to digest large amounts of cellulose, which is abundant in fibrous plant matter and more difficult to break down for many other animals. As such, they have specialized enzymes to aid in digestion and in some cases symbiotic bacteria that live in their digestive tract and "assist" with the digestive process through fermentation as the matter travels through the intestines. Horses, cattle, geese, guinea pigs, hippopotamuses, capybara and giant pandas are examples of vertebrate graminivores. Some carnivorous vertebrates, such as dogs and cats, are known to eat grass occasionally. Grass consumption in dogs can be a way to rid their intestinal tract of parasites that may be threatening to the carnivore's health. Various invertebrates also have graminivorous diets. Many grasshoppers, such as individuals from the family Acrididae, have diets consisting primarily of plants from the family Poaceae. Although humans are not graminivores, we do get much of our nutrition from a type of grass called cereal, and especially from the fruit of that grass which is called grain. Graminivores generally exhibit a preference for which species of grass they choose to consume.
For example, according to a study done on North American bison feeding on shortgrass plains in north-eastern Colorado, the bison consumed a total of thirty-six different species of plant. Of those thirty-six, five grass species were favoured and consumed the most pervasively. The average consumption of these five species comprised about 80% of their diet. A few of these species include Aristida longiseta, Muhlenbergia species, and Bouteloua gracilis. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Carnivores are animals that eat other animals. The word carnivore is derived from Latin and literally means what? A. amount eater B. meat eater C. thick eater D. leaf eater Answer:
sciq-6114
multiple_choice
What is the name for the cooler, darker areas on the sun’s surface?
[ "anomalies", "sunspots", "corona", "aurora borealis" ]
B
Relevant Documents: Document 0::: Sunspot drawing or sunspot sketching is the act of drawing sunspots. Sunspots are darker spots on the Sun's photosphere. Their prediction is very important for radio communication because they are strongly associated with solar activity, which can seriously damage radio equipment. History Sunspots were probably first drawn by the English monk John of Worcester on 8 December 1128. There are records of observing sunspots from 28 BC, but that is the first known drawing of sunspots, almost 500 years before the telescope. His drawing seems to come from around solar maximum. Five days later, a Korean astronomer saw the northern lights above his country, so this is also the first prediction of a coronal mass ejection. In 1612, Galileo Galilei was writing letters on sunspots to Mark Welser. They were published in 1613. Through his telescope, he saw some darker spots on the Sun's surface. It seems that he was observing the Sun and drawing sunspots without any filter, which is very hard. He said, "The spots seen at sunset are observed to change the place from one evening to the next, descending from the part of the sun then uppermost, and the morning spots ascend from the part then below ...". From there it seems that he observed the Sun at sunset, but not at sunrise, because of the high horizon of the Apennines. It is also possible that he was referring to Scheiner's observation, where he first saw that the Sun is rotating. He complained that he couldn't observe the Sun every morning and evening because of low clouds, and so he couldn't see their motion with confidence. He probably never observed them in the middle of the day. In the same year, his student Benedetto Castelli invented a new method for observing and drawing sunspots, the projection method. He probably never looked at the Sun directly through the telescope. The Mount Wilson Observatory started drawing sunspots by hand in 1917. This tradition still continues today.
The early drawers did not draw their shapes and positions Document 1::: Starspots are stellar phenomena, so-named by analogy with sunspots. Spots as small as sunspots have not been detected on other stars, as they would cause undetectably small fluctuations in brightness. The commonly observed starspots are in general much larger than those on the Sun: up to about 30% of the stellar surface may be covered, corresponding to starspots 100 times larger than those on the Sun. Detection and measurements To detect and measure the extent of starspots one uses several types of methods. For rapidly rotating stars – Doppler imaging and Zeeman-Doppler imaging. With the Zeeman-Doppler imaging technique the direction of the magnetic field on stars can be determined since spectral lines are split according to the Zeeman effect, revealing the direction and magnitude of the field. For slowly rotating stars – Line Depth Ratio (LDR). Here one measures two different spectral lines, one sensitive to temperature and one which is not. Since starspots have a lower temperature than their surroundings the temperature-sensitive line changes its depth. From the difference between these two lines the temperature and size of the spot can be calculated, with a temperature accuracy of 10K. For eclipsing binary stars – Eclipse mapping produces images and maps of spots on both stars. For giant binary stars - Very-long-baseline interferometry For stars with transiting extrasolar planets – Light curve variations. Temperature Observed starspots have a temperature which is in general 500–2000 kelvins cooler than the stellar photosphere. This temperature difference could give rise to a brightness variation up to 0.6 magnitudes between the spot and the surrounding surface. There also seems to be a relation between the spot temperature and the temperature for the stellar photosphere, indicating that starspots behave similarly for different types of stars (observed in G–K dwarfs). 
Lifetimes The lifetime for a starspot depends on its size. For small spots the lifetim Document 2::: Heliophysics (from the prefix "helio", from Attic Greek hḗlios, meaning Sun, and the noun "physics": the science of matter and energy and their interactions) is the physics of the Sun and its connection with the Solar System. NASA defines heliophysics as "(1) the comprehensive new term for the science of the Sun - Solar System Connection, (2) the exploration, discovery, and understanding of Earth's space environment, and (3) the system science that unites all of the linked phenomena in the region of the cosmos influenced by a star like our Sun." Heliophysics concentrates on the Sun's effects on Earth and other bodies within the Solar System, as well as the changing conditions in space. It is primarily concerned with the magnetosphere, ionosphere, thermosphere, mesosphere, and upper atmosphere of the Earth and other planets. Heliophysics combines the science of the Sun, corona, heliosphere and geospace, and encompasses a wide variety of astronomical phenomena, including "cosmic rays and particle acceleration, space weather and radiation, dust and magnetic reconnection, nuclear energy generation and internal solar dynamics, solar activity and stellar magnetic fields, aeronomy and space plasmas, magnetic fields and global change", and the interactions of the Solar System with the Milky Way Galaxy. The term “heliophysics” (Russian: “гелиофизика”) was widely used in Russian-language scientific literature. The Great Soviet Encyclopedia, third edition (1969–1978), defines “Heliophysics” as “[…] a division of astrophysics that studies physics of the Sun". In 1990, the Higher Attestation Commission, responsible for advanced academic degrees in the Soviet Union and later in Russia and the former Soviet Union, established a new specialty, “Heliophysics and physics of solar system”.
In English-language scientific literature prior to about 2002, the term heliophysics was sporadically used to describe the study of the "physics of the Sun". As such it was a direct translation from th Document 3::: Limb darkening is an optical effect seen in stars (including the Sun) and planets, where the central part of the disk appears brighter than the edge, or limb. Its understanding offered early solar astronomers an opportunity to construct models with such gradients. This encouraged the development of the theory of radiative transfer. Basic theory Optical depth, a measure of the opacity of an object or part of an object, combines with effective temperature gradients inside the star to produce limb darkening. The light seen is approximately the integral of all emission along the line of sight modulated by the optical depth to the viewer (i.e. 1/e times the emission at 1 optical depth, 1/e² times the emission at 2 optical depths, etc.). Near the center of the star, optical depth is effectively infinite, causing approximately constant brightness. However, the effective optical depth decreases with increasing radius due to lower gas density and a shorter line of sight distance through the star, producing a gradual dimming, until it becomes zero at the apparent edge of the star. The effective temperature of the photosphere also decreases with increasing distance from the center of the star. The radiation emitted from a gas is approximately black-body radiation, the intensity of which is proportional to the fourth power of the temperature. Therefore, even in line of sight directions where the optical depth is effectively infinite, the emitted energy comes from cooler parts of the photosphere, resulting in less total energy reaching the viewer. The temperature in the atmosphere of a star does not always decrease with increasing height. For certain spectral lines, the optical depth is greatest in regions of increasing temperature.
In this scenario, the phenomenon of "limb brightening" is seen instead. In the Sun, the existence of a temperature minimum region means that limb brightening should start to dominate at far-infrared or radio wavelengths. Above the lower atmosphe Document 4::: Solar Physics is a peer-reviewed scientific journal published monthly by Springer Science+Business Media. The editors-in-chief are Lidia van Driel-Gesztelyi (various affiliations), John Leibacher (National Solar Observatory, and Institut d'Astrophysique Spatiale), Cristina Mandrini (Universidad de Buenos Aires), and Iñigo Arregui (Instituto de Astrofísica de Canarias). Scope and history The focus of this journal is fundamental research on the Sun and it covers all aspects of solar physics. Topical coverage includes solar-terrestrial physics and stellar research if it pertains to the focus of this journal. Publishing formats include regular manuscripts, invited reviews, invited memoirs, and topical collections. Solar Physics was established in 1967 by solar physicists Cornelis de Jager and Zdeněk Švestka, and publisher D. Reidel. Abstracting and indexing This journal is indexed by the following services: Science Citation Index Scopus INSPEC Chemical Abstracts Service Current Contents/Physical, Chemical & Earth Sciences GeoRef Journal Citation Reports SIMBAD The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the name for the cooler, darker areas on the sun’s surface? A. anomalies B. sunspots C. corona D. aurora borealis Answer:
sciq-7102
multiple_choice
What do decomposers release when they break down dead organisms?
[ "carbon dioxide", "nutrients", "methane", "acids" ]
B
Relevant Documents: Document 0::: Decomposers are organisms that break down dead or decaying organisms; they carry out decomposition, a process possible only for certain kingdoms, such as fungi. Like herbivores and predators, decomposers are heterotrophic, meaning that they use organic substrates to get their energy, carbon and nutrients for growth and development. While the terms decomposer and detritivore are often used interchangeably, detritivores ingest and digest dead matter internally, while decomposers directly absorb nutrients through external chemical and biological processes. Thus, invertebrates such as earthworms, woodlice, and sea cucumbers are technically detritivores, not decomposers, since they are unable to absorb nutrients without ingesting them. Fungi The primary decomposer of litter in many ecosystems is fungi. Unlike bacteria, which are unicellular organisms and are decomposers as well, most saprotrophic fungi grow as a branching network of hyphae. While bacteria are restricted to growing and feeding on the exposed surfaces of organic matter, fungi can use their hyphae to penetrate larger pieces of organic matter, below the surface. Additionally, only wood-decay fungi have evolved the enzymes necessary to decompose lignin, a chemically complex substance found in wood. These two factors make fungi the primary decomposers in forests, where litter has high concentrations of lignin and often occurs in large pieces. Fungi decompose organic matter by releasing enzymes to break down the decaying material, after which they absorb the nutrients in the decaying material. Hyphae are used to break down matter and absorb nutrients and are also used in reproduction. When two compatible fungal hyphae grow close to each other, they will then fuse together for reproduction, and form another fungus.
See also Chemotroph Micro-animals Microorganism Document 1::: Microbiology of decomposition is the study of all microorganisms involved in decomposition, the chemical and physical processes during which organic matter is broken down and reduced to its original elements. Decomposition microbiology can be divided into two fields of interest, namely the decomposition of plant materials and the decomposition of cadavers and carcasses. The decomposition of plant materials is commonly studied in order to understand the cycling of carbon within a given environment and to understand the subsequent impacts on soil quality. Plant material decomposition is also often referred to as composting. The decomposition of cadavers and carcasses has become an important field of study within forensic taphonomy. Decomposition microbiology of plant materials The breakdown of vegetation is highly dependent on oxygen and moisture levels. During decomposition, microorganisms require oxygen for their respiration. If anaerobic conditions dominate the decomposition environment, microbial activity will be slow and thus decomposition will be slow. Appropriate moisture levels are required for microorganisms to proliferate and to actively decompose organic matter. In arid environments, bacteria and fungi dry out and are unable to take part in decomposition. In wet environments, anaerobic conditions will develop and decomposition can also be considerably slowed down. Decomposing microorganisms also require the appropriate plant substrates in order to achieve good levels of decomposition. This usually translates to having appropriate carbon to nitrogen ratios (C:N). The ideal composting carbon-to-nitrogen ratio is thought to be approximately 30:1. As in any microbial process, the decomposition of plant litter by microorganisms will also be dependent on temperature. 
For example, leaves on the ground will not undergo decomposition during the winter months where snow cover occurs as temperatures are too low to sustain microbial activities. Decomposition mi Document 2::: In biology, detritus () is dead particulate organic material, as distinguished from dissolved organic material. Detritus typically includes the bodies or fragments of bodies of dead organisms, and fecal material. Detritus typically hosts communities of microorganisms that colonize and decompose (i.e. remineralize) it. In terrestrial ecosystems it is present as leaf litter and other organic matter that is intermixed with soil, which is denominated "soil organic matter". The detritus of aquatic ecosystems is organic substances that is suspended in the water and accumulates in depositions on the floor of the body of water; when this floor is a seabed, such a deposition is denominated "marine snow". Theory The corpses of dead plants or animals, material derived from animal tissues (e.g. molted skin), and fecal matter gradually lose their form due to physical processes and the action of decomposers, including grazers, bacteria, and fungi. Decomposition, the process by which organic matter is decomposed, occurs in several phases. Micro- and macro-organisms that feed on it rapidly consume and absorb materials such as proteins, lipids, and sugars that are low in molecular weight, while other compounds such as complex carbohydrates are decomposed more slowly. The decomposing microorganisms degrade the organic materials so as to gain the resources they require for their survival and reproduction. Accordingly, simultaneous to microorganisms' decomposition of the materials of dead plants and animals is their assimilation of decomposed compounds to construct more of their biomass (i.e. to grow their own bodies). 
When microorganisms die, fine organic particles are produced, and if small animals that feed on microorganisms eat these particles they collect inside the intestines of the consumers, and change shape into large pellets of dung. As a result of this process, most of the materials of dead organisms disappear and are not visible and recognizable in any form, but are pres Document 3::: Necrophages are organisms that obtain nutrients by consuming decomposing dead animal biomass, such as the muscle and soft tissue of carcasses and corpses. The term derives from Greek νεκρός (nekros), meaning 'dead', and φαγεῖν (phagein), meaning 'to eat.' Mainly, necrophages are species within the phylum Arthropoda; however, other animals, such as gastropods and Accipitrimorphae birds, have been noted to engage in necrophagy. Necrophages play a critical role in the study of forensic entomology, as certain Arthropoda, such as Diptera larvae, engage in myiasis and colonization of the human body. Invertebrates Diptera Members of the order Diptera, such as Nematocera, Calliphoridae, Sarcophagidae, and Muscidae, as well as semi-aquatic Diptera larvae, such as Simuliidae and Chironomidae, are the most common necrophages within the Animalia kingdom. Diptera species play a critical role in forensic entomology, as they tend to colonize the human body during the early floating phase of decomposition. The flies utilize the submerged corpse as a source of food as well as an attachment site. Notably, Diptera do not specifically colonize and feed on human carcasses. Diptera species, such as Musca domestica and Chloroprocta idioidea, have been observed feeding on the carcasses of other mammals, including the Mona monkey, the European rabbit, and the Giant cane rat, as well as fish carrion. The carcass' appeal is characterized by the putridness of the odour it emits; thus, the olfactory system of Diptera species plays a role in their food selectivity.
In addition, the diversity and abundance of Diptera species vary both spatially and temporally. Necrophagous Diptera, such as Calliphora vicina, tend to be concentrated in urban areas and rare in more rural areas. However, some researchers oppose this notion and claim anthropogenic impacts are negligible regarding species richness. Temporally, the necrophagous Diptera are observed in higher abundances in the summer season than the winter season. The p Document 4::: Saprobionts are organisms that digest their food externally and then absorb the products. This process is called saprotrophic nutrition. Fungi are examples of saprobiontic organisms, which are a type of decomposer. Saprobiontic organisms feed off dead and/or decaying biological materials. Digestion is accomplished by excretion of digestive enzymes which break down cell tissues, allowing saprobionts to extract the nutrients they need while leaving the indigestible waste. This is called extracellular digestion. This is very important in ecosystems, for the nutrient cycle. Saprobionts should not be confused with detritivores, another class of decomposers which digest internally. These organisms can be good sources of extracellular enzymes for industrial processes such as the production of fruit juice. For instance, the fungus Aspergillus niger is used to produce pectinase, an enzyme which is used to break down pectin in juice concentrates, making the juice appear more translucent. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do decomposers release when they break down dead organisms? A. carbon dioxide B. nutrients C. methane D. acids Answer:
sciq-5229
multiple_choice
A sperm cell and an egg cell fuse to become what?
[ "embryo", "unfertilized egg", "fertilized egg", "fetus" ]
C
Relevant Documents: Document 0::: In biology, a blastomere is a type of cell produced by cell division (cleavage) of the zygote after fertilization; blastomeres are an essential part of blastula formation, and blastocyst formation in mammals. Human blastomere characteristics In humans, blastomere formation begins immediately following fertilization and continues through the first week of embryonic development. About 90 minutes after fertilization, the zygote divides into two cells. The two-cell blastomere state, present after the zygote first divides, is considered the earliest mitotic product of the fertilized oocyte. These mitotic divisions continue and result in a grouping of cells called blastomeres. During this process, the total size of the embryo does not increase, so each division results in smaller and smaller cells. When the zygote contains 16 to 32 blastomeres, it is referred to as a morula. These are the preliminary stages in the embryo beginning to form. Once this begins, microtubules within the morula's cytosolic material in the blastomere cells can develop into important membrane functions, such as sodium pumps. These pumps allow the inside of the embryo to fill with blastocoelic fluid, which supports the further growth of life. The blastomere is considered totipotent; that is, blastomeres are capable of developing from a single cell into a fully fertile adult organism. This has been demonstrated through studies and conjectures made with mouse blastomeres, which have been accepted as true for most mammalian blastomeres as well. Studies have analyzed monozygotic twin mouse blastomeres in their two-cell state, and have found that when one of the twin blastomeres is destroyed, a fully fertile adult mouse can still develop. Thus, it can be assumed that since one of the twin cells was totipotent, the destroyed one originally was as well.
Relative blastomere size within the embryo is dependent not only on the stage of the cleavage, but also on the regularity of the cleavage amongst t Document 1::: Fertilisation or fertilization (see spelling differences), also known as generative fertilisation, syngamy and impregnation, is the fusion of gametes to give rise to a new individual organism or offspring and initiate its development. While processes such as insemination or pollination which happen before the fusion of gametes are also sometimes informally referred to as fertilisation, these are technically separate processes. The cycle of fertilisation and development of new individuals is called sexual reproduction. During double fertilisation in angiosperms the haploid male gamete combines with two haploid polar nuclei to form a triploid primary endosperm nucleus by the process of vegetative fertilisation. History In Antiquity, Aristotle conceived the formation of new individuals through fusion of male and female fluids, with form and function emerging gradually, in a mode called by him as epigenetic. In 1784, Spallanzani established the need of interaction between the female's ovum and male's sperm to form a zygote in frogs. In 1827, von Baer observed a therian mammalian egg for the first time. Oscar Hertwig (1876), in Germany, described the fusion of nuclei of spermatozoa and of ova from sea urchin. Evolution The evolution of fertilisation is related to the origin of meiosis, as both are part of sexual reproduction, originated in eukaryotes. One theory states that meiosis originated from mitosis. Fertilisation in plants The gametes that participate in fertilisation of plants are the sperm (male) and the egg (female) cell. Various families of plants have differing methods by which the gametes produced by the male and female gametophytes come together and are fertilised. In Bryophyte land plants, fertilisation of the sperm and egg takes place within the archegonium. 
In seed plants, the male gametophyte is called a pollen grain. After pollination, the pollen grain germinates, and a pollen tube grows and penetrates the ovule through a tiny pore called a mic Document 2::: The spermatid is the haploid male gametid that results from division of secondary spermatocytes. As a result of meiosis, each spermatid contains only half of the genetic material present in the original primary spermatocyte. Spermatids are connected by cytoplasmic material and have superfluous cytoplasmic material around their nuclei. When formed, early round spermatids must undergo further maturational events to develop into spermatozoa, a process termed spermiogenesis (also termed spermeteliosis). The spermatids begin to grow a living thread, develop a thickened mid-piece where the mitochondria become localised, and form an acrosome. Spermatid DNA also undergoes packaging, becoming highly condensed. The DNA is packaged firstly with specific nuclear basic proteins, which are subsequently replaced with protamines during spermatid elongation. The resultant tightly packed chromatin is transcriptionally inactive. In 2016 scientists at Nanjing Medical University claimed they had produced cells resembling mouse spermatids artificially from stem cells. They injected these spermatids into mouse eggs and produced pups. DNA repair As postmeiotic germ cells develop to mature sperm they progressively lose the ability to repair DNA damage that may then accumulate and be transmitted to the zygote and ultimately the embryo. In particular, the repair of DNA double-strand breaks by the non-homologous end joining pathway, although present in round spermatids, appears to be lost as they develop into elongated spermatids. Additional images See also List of distinct cell types in the adult human body Document 3::: Spermatogenesis is the process by which haploid spermatozoa develop from germ cells in the seminiferous tubules of the testis. 
This process starts with the mitotic division of the stem cells located close to the basement membrane of the tubules. These cells are called spermatogonial stem cells. The mitotic division of these produces two types of cells. Type A cells replenish the stem cells, and type B cells differentiate into primary spermatocytes. The primary spermatocyte divides meiotically (Meiosis I) into two secondary spermatocytes; each secondary spermatocyte divides into two equal haploid spermatids by Meiosis II. The spermatids are transformed into spermatozoa (sperm) by the process of spermiogenesis. These develop into mature spermatozoa, also known as sperm cells. Thus, the primary spermatocyte gives rise to two cells, the secondary spermatocytes, and the two secondary spermatocytes by their subdivision produce four spermatozoa and four haploid cells. Spermatozoa are the mature male gametes in many sexually reproducing organisms. Thus, spermatogenesis is the male version of gametogenesis, of which the female equivalent is oogenesis. In mammals it occurs in the seminiferous tubules of the male testes in a stepwise fashion. Spermatogenesis is highly dependent upon optimal conditions for the process to occur correctly, and is essential for sexual reproduction. DNA methylation and histone modification have been implicated in the regulation of this process. It starts during puberty and usually continues uninterrupted until death, although a slight decrease can be discerned in the quantity of produced sperm with increase in age (see Male infertility). Spermatogenesis starts in the bottom part of seminiferous tubes and, progressively, cells go deeper into tubes and moving along it until mature spermatozoa reaches the lumen, where mature spermatozoa are deposited. The division happens asynchronically; if the tube is cut transversally one could observe different Document 4::: Cell potency is a cell's ability to differentiate into other cell types. 
The more cell types a cell can differentiate into, the greater its potency. Potency is also described as the gene activation potential within a cell, which like a continuum, begins with totipotency to designate a cell with the most differentiation potential, pluripotency, multipotency, oligopotency, and finally unipotency. Totipotency Totipotency (Lat. totipotentia, "ability for all [things]") is the ability of a single cell to divide and produce all of the differentiated cells in an organism. Spores and zygotes are examples of totipotent cells. In the spectrum of cell potency, totipotency represents the cell with the greatest differentiation potential, being able to differentiate into any embryonic cell, as well as any extraembryonic cell. In contrast, pluripotent cells can only differentiate into embryonic cells. A fully differentiated cell can return to a state of totipotency. The conversion to totipotency is complex and not fully understood. In 2011, research revealed that cells may differentiate not into a fully totipotent cell, but instead into a "complex cellular variation" of totipotency. Stem cells resembling totipotent blastomeres from 2-cell stage embryos can arise spontaneously in mouse embryonic stem cell cultures and also can be induced to arise more frequently in vitro through down-regulation of the chromatin assembly activity of CAF-1. The human development model can be used to describe how totipotent cells arise. Human development begins when a sperm fertilizes an egg and the resulting fertilized egg creates a single totipotent cell, a zygote. In the first hours after fertilization, this zygote divides into identical totipotent cells, which can later develop into any of the three germ layers of a human (endoderm, mesoderm, or ectoderm), or into cells of the placenta (cytotrophoblast or syncytiotrophoblast). 
After reaching a 16-cell stage, the totipotent cells of the morula d The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. A sperm cell and an egg cell fuse to become what? A. embryo B. unfertilized egg C. fertilized egg D. fetus Answer:
sciq-9937
multiple_choice
What is the concentration of gas molecules in the mesosphere?
[ "medium density", "very low density", "low density", "high density" ]
B
Relevant Documents: Document 0::: Vapour density is the density of a vapour in relation to that of hydrogen. It may be defined as mass of a certain volume of a substance divided by mass of same volume of hydrogen. vapour density = mass of n molecules of gas / mass of n molecules of hydrogen gas. vapour density = molar mass of gas / molar mass of H2 vapour density = molar mass of gas / 2.016 vapour density = ½ × molar mass (and thus: molar mass = ~2 × vapour density) For example, vapour density of mixture of NO2 and N2O4 is 38.3. Vapour density is a dimensionless quantity. Alternative definition In many web sources, particularly in relation to safety considerations at commercial and industrial facilities in the U.S., vapour density is defined with respect to air, not hydrogen. Air is given a vapour density of one. For this use, air has a molecular weight of 28.97 atomic mass units, and all other gas and vapour molecular weights are divided by this number to derive their vapour density. For example, acetone has a vapour density of 2 in relation to air. That means acetone vapour is twice as heavy as air. This can be seen by dividing the molecular weight of acetone, 58.1, by that of air, 28.97, which equals 2. With this definition, the vapour density would indicate whether a gas is denser (greater than one) or less dense (less than one) than air. The density has implications for container storage and personnel safety: if a container can release a dense gas, its vapour could sink and, if flammable, collect until it is at a concentration sufficient for ignition. Even if not flammable, it could collect in the lower floor or level of a confined space and displace air, possibly presenting an asphyxiation hazard to individuals entering the lower part of that space. See also Relative density (also known as specific gravity) Victor Meyer apparatus Document 1::: The density of air or atmospheric density, denoted ρ, is the mass per unit volume of Earth's atmosphere.
Air density, like air pressure, decreases with increasing altitude. It also changes with variations in atmospheric pressure, temperature and humidity. At 101.325 kPa (abs) and 20 °C (68 °F), air has a density of approximately 1.204 kg/m3, according to the International Standard Atmosphere (ISA). At 101.325 kPa (abs) and 15 °C (59 °F), air has a density of approximately 1.225 kg/m3, which is about 1/800 that of water, according to the International Standard Atmosphere (ISA). Pure liquid water is 1,000 kg/m3. Air density is a property used in many branches of science, engineering, and industry, including aeronautics; gravimetric analysis; the air-conditioning industry; atmospheric research and meteorology; agricultural engineering (modeling and tracking of Soil-Vegetation-Atmosphere-Transfer (SVAT) models); and the engineering community that deals with compressed air. Depending on the measuring instruments used, different sets of equations for the calculation of the density of air can be applied. Air is a mixture of gases and the calculations always simplify, to a greater or lesser extent, the properties of the mixture. Temperature Other things being equal, hotter air is less dense than cooler air and will thus rise through cooler air. This can be seen by using the ideal gas law as an approximation. Dry air The density of dry air can be calculated using the ideal gas law, expressed as a function of temperature and pressure: ρ = p/(Rspecific T) = pM/(RT) where: ρ is the air density (kg/m3) p is the absolute pressure (Pa) T is the absolute temperature (K) R is the gas constant, 8.31446 J⋅K−1⋅mol−1 M is the molar mass of dry air, approximately 0.0289652 kg⋅mol−1. kB is the Boltzmann constant, 1.380649×10−23 J⋅K−1 m is the molecular mass of dry air, approximately 4.81×10−26 kg. Rspecific is the specific gas constant for dry air, which using the values presented above would be approximately 287.05 J⋅kg−1⋅K−1.
Therefore: At IUPAC standard temperature and pressure (0°C and 100kPa), dry air has a density of a Document 2::: The Gas composition of any gas can be characterised by listing the pure substances it contains, and stating for each substance its proportion of the gas mixture's molecule count.Nitrogen 78.084 Oxygen 20.9476 Argon Ar 0.934 Carbon Dioxide 0.0314 Gas composition of air To give a familiar example, air has a composition of: Standard Dry Air is the agreed-upon gas composition for air from which all water vapour has been removed. There are various standards bodies which publish documents that define a dry air gas composition. Each standard provides a list of constituent concentrations, a gas density at standard conditions and a molar mass. It is extremely unlikely that the actual composition of any specific sample of air will completely agree with any definition for standard dry air. While the various definitions for standard dry air all attempt to provide realistic information about the constituents of air, the definitions are important in and of themselves because they establish a standard which can be cited in legal contracts and publications documenting measurement calculation methodologies or equations of state. The standards below are two examples of commonly used and cited publications that provide a composition for standard dry air: ISO TR 29922-2017 provides a definition for standard dry air which specifies an air molar mass of 28,965 46 ± 0,000 17 kg·kmol-1. GPA 2145:2009 is published by the Gas Processors Association. It provides a molar mass for air of 28.9625 g/mol, and provides a composition for standard dry air as a footnote. Document 3::: Median aerodynamic diameter (MAD) is one of two parameters influencing the deposition of inhaled particles, the other being the geometric standard deviation of the particle size distribution. 
The MAD is the value of aerodynamic diameter for which 50% of some quantity in a given aerosol is associated with particles smaller than the MAD, and 50% of the quantity is associated with particles larger than the MAD. It simplifies the true distribution of aerodynamic diameters of a given aerosol as a single value. It is also used to describe those particle sizes for which deposition depends chiefly on inertial impaction and sedimentation. Activity median aerodynamic diameter In the context of radiation protection, activity median aerodynamic diameter (AMAD) is the MAD for the airborne activity in a given aerosol. Internal dosimetry uses it as a means of simplifying the true distribution of aerodynamic diameters of a given aerosol. Count median aerodynamic diameter Count median aerodynamic diameter (CMAD) is only used rarely. Half of the particles (by count) of a given aerosol have the aerodynamic diameter smaller than the CMAD, and the other half larger. A similar quantity, count median (geometric) diameter (CMD) is more common. Mass median aerodynamic diameter Mass median aerodynamic diameter (MMAD) is the MAD for mass. Document 4::: In chemistry and related fields, the molar volume, symbol Vm, or of a substance is the ratio of the volume occupied by a substance to the amount of substance, usually given at a given temperature and pressure. It is equal to the molar mass (M) divided by the mass density (ρ): The molar volume has the SI unit of cubic metres per mole (m3/mol), although it is more typical to use the units cubic decimetres per mole (dm3/mol) for gases, and cubic centimetres per mole (cm3/mol) for liquids and solids. Definition The molar volume of a substance i is defined as its molar mass divided by its density ρi0: For an ideal mixture containing N components, the molar volume of the mixture is the weighted sum of the molar volumes of its individual components. 
For a real mixture the molar volume cannot be calculated without knowing the density: There are many liquid–liquid mixtures, for instance mixing pure ethanol and pure water, which may experience contraction or expansion upon mixing. This effect is represented by the quantity excess volume of the mixture, an example of excess property. Relation to specific volume Molar volume is related to specific volume by the product with molar mass. This follows from above where the specific volume is the reciprocal of the density of a substance: Ideal gases For ideal gases, the molar volume is given by the ideal gas equation; this is a good approximation for many common gases at standard temperature and pressure. The ideal gas equation can be rearranged to give an expression for the molar volume of an ideal gas: Hence, for a given temperature and pressure, the molar volume is the same for all ideal gases and is based on the gas constant: R = , or about . The molar volume of an ideal gas at 100 kPa (1 bar) is at 0 °C, at 25 °C. The molar volume of an ideal gas at 1 atmosphere of pressure is at 0 °C, at 25 °C. Crystalline solids For crystalline solids, the molar volume can be measured by X-ray crystallography. The unit cell The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the concentration of gas molecules in the mesosphere? A. medium density B. very low density C. low density D. high density Answer:
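The gas relations quoted in the passages above (vapour density relative to hydrogen, the ideal-gas form of dry-air density, and the ideal-gas molar volume) can be sketched in a few lines. This is an illustrative sketch only: the function names are made up here, and the constants are the standard values cited in the text.

```python
# Illustrative helpers for the three ideal-gas relations in the passages.
# Constants are standard reference values, not measured data.

R = 8.314462618        # molar gas constant, J/(mol*K)
M_H2 = 2.016e-3        # molar mass of H2, kg/mol
M_AIR = 28.9652e-3     # molar mass of dry air, kg/mol

def vapour_density(molar_mass_kg):
    """Vapour density relative to hydrogen: M / M(H2), i.e. ~M/2.016."""
    return molar_mass_kg / M_H2

def air_density(pressure_pa, temp_k):
    """Dry-air density from the ideal gas law: rho = p*M/(R*T), kg/m^3."""
    return pressure_pa * M_AIR / (R * temp_k)

def molar_volume(pressure_pa, temp_k):
    """Ideal-gas molar volume: Vm = R*T/p, in m^3/mol."""
    return R * temp_k / pressure_pa

print(round(vapour_density(58.1e-3), 1))        # acetone vs H2: ~28.8
print(round(air_density(101325.0, 293.15), 3))  # ~1.204 kg/m^3 at 20 C
print(round(molar_volume(101325.0, 273.15) * 1e3, 3))  # ~22.414 L/mol at 0 C, 1 atm
```

The printed values match the figures quoted in the passages: air at 20 °C and 101.325 kPa is about 1.204 kg/m3, and the molar volume of an ideal gas at 0 °C and 1 atm is about 22.4 L/mol.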
sciq-9307
multiple_choice
Ponds and lakes are examples of what kind of biome?
[ "standing freshwater biome", "standing liquid biome", "standing marine biome", "standing lake biome" ]
A
Relevant Documents: Document 0::: The Cowardin classification system is a system for classifying wetlands, devised by Lewis M. Cowardin et al. in 1979 for the United States Fish and Wildlife Service. The system includes five main types of wetlands: Marine wetlands- which are areas exposed to the open ocean Estuarine wetlands- partially enclosed by land and also exposed to a mixture of fresh and salt water bodies of water Riverine wetlands- associated with flowing water Lacustrine wetlands- associated with a lake or other body of fresh water Palustrine wetlands- freshwater wetlands not associated with a river or lake. The primary purpose of this ecological classification system was to establish consistent terms and definitions used in inventory of wetlands and to provide standard measurements for mapping these lands. See also Wetland conservation Wetlands of the United States Document 1::: A Directory of Important Wetlands in Australia (DIWA) is a list of wetlands of national importance to Australia published by the Department of Climate Change, Energy, the Environment and Water. Intended to augment the list of wetlands of international importance under the Ramsar Convention, it was formerly published in report form, but is now essentially an online publication. Wetlands that appear in the Directory are commonly referred to as "DIWA wetlands" or "Directory wetlands". Criteria for determining wetland importance Using criteria agreed in 1994, a wetland can be considered "nationally important" if it satisfies at least one of the following criteria: It is a good example of a wetland type occurring within a biogeographic region in Australia. It is a wetland which plays an important ecological or hydrological role in the natural functioning of a major wetland system/complex. It is a wetland which is important as the habitat for animal taxa at a vulnerable stage in their life cycles, or provides a refuge when adverse conditions such as drought prevail.
The wetland supports 1% or more of the national populations of any native plant or animal taxa. The wetland supports native plant or animal taxa or communities which are considered endangered or vulnerable at the national level. The wetland is of outstanding historical or cultural significance. Types of wetlands The directory uses a classification system consisting of the following three categories (i.e. A, B and C) which are further sub-divided into a total of 40 different wetland types: A. Marine and Coastal Zone wetlands, which consists of 12 wetland types B. Inland wetlands, which consists of 19 wetland types C. Human-made wetlands, which consists of 9 wetland types. See also List of Ramsar sites in Australia Wetland classification Document 2::: A biome () is a biogeographical unit consisting of a biological community that has formed in response to the physical environment in which they are found and a shared regional climate. Biomes may span more than one continent. Biome is a broader term than habitat and can comprise a variety of habitats. While a biome can cover small areas, a microbiome is a mix of organisms that coexist in a defined space on a much smaller scale. For example, the human microbiome is the collection of bacteria, viruses, and other microorganisms that are present on or in a human body. A biota is the total collection of organisms of a geographic region or a time period, from local geographic scales and instantaneous temporal scales all the way up to whole-planet and whole-timescale spatiotemporal scales. The biotas of the Earth make up the biosphere. Etymology The term was suggested in 1916 by Clements, originally as a synonym for biotic community of Möbius (1877). Later, it gained its current definition, based on earlier concepts of phytophysiognomy, formation and vegetation (used in opposition to flora), with the inclusion of the animal element and the exclusion of the taxonomic element of species composition. 
In 1935, Tansley added the climatic and soil aspects to the idea, calling it ecosystem. The International Biological Program (1964–74) projects popularized the concept of biome. However, in some contexts, the term biome is used in a different manner. In German literature, particularly in the Walter terminology, the term is used similarly as biotope (a concrete geographical unit), while the biome definition used in this article is used as an international, non-regional, terminology—irrespectively of the continent in which an area is present, it takes the same biome name—and corresponds to his "zonobiome", "orobiome" and "pedobiome" (biomes determined by climate zone, altitude or soil). In Brazilian literature, the term "biome" is sometimes used as synonym of biogeographic pr Document 3::: The University of Michigan Biological Station (UMBS) is a research and teaching facility operated by the University of Michigan. It is located on the south shore of Douglas Lake in Cheboygan County, Michigan. The station consists of 10,000 acres (40 km2) of land near Pellston, Michigan in the northern Lower Peninsula of Michigan and 3,200 acres (13 km2) on Sugar Island in the St. Mary's River near Sault Ste. Marie, in the Upper Peninsula. It is one of only 28 Biosphere Reserves in the United States. Overview Founded in 1909, it has grown to include approximately 150 buildings, including classrooms, student cabins, dormitories, a dining hall, and research facilities. Undergraduate and graduate courses are available in the spring and summer terms. It has a full-time staff of 15. In the 2000s, UMBS is increasingly focusing on the measurement of climate change. Its field researchers are gauging the impact of global warming and increased levels of atmospheric carbon dioxide on the ecosystem of the upper Great Lakes region, and are using field data to improve the computer models used to forecast further change. Several archaeological digs have been conducted at the station as well. 
UMBS field researchers sometimes call the station "bug camp" amongst themselves. This is believed to be due to the number of mosquitoes and other insects present. It is also known as "The Bio-Station". The UMBS is also home to Michigan's most endangered species and one of the most endangered species in the world: the Hungerford's Crawling Water Beetle. The species lives in only five locations in the world, two of which are in Emmet County. One of these, a two and a half mile stretch downstream from the Douglas Road crossing of the East Branch of the Maple River supports the only stable population of the Hungerford's Crawling Water Beetle, with roughly 1000 specimens. This area, though technically not part of the UMBS is largely within and along the boundary of the University of Michigan Document 4::: There are 62 named Ecological Systems found in Montana These systems are described in the Montana Field Guides-Ecological Systems of Montana. About An ecosystem is a biological environment consisting of all the organisms living in a particular area, as well as all the nonliving, physical components of the environment with which the organisms interact, such as air, soil, water and sunlight. It is all the organisms in a given area, along with the nonliving (abiotic) factors with which they interact; a biological community and its physical environment. As stated in an article from Montana State University in their Institute on Ecosystems; "An ecosystem can be small, such as the area under a pine tree or a single hot spring in Yellowstone National Park, or it can be large, such as the Rocky Mountains, the rainforest or the Antarctic Ocean." The Montana Fish, Wildlife and Parks (FWP) have shared their views on Montana's Main Ecosystems as montane forest, intermountain grasslands, plains grasslands and shrub grasslands. The Montana Agricultural Experiment Station (MAES) categorized Montana's ecosystems based on the different rangelands. 
They have recognized 22 different ecosystems whereas the Montana Natural Heritage Program named 62 ecosystems for the entire state. Forest and Woodland Systems Northern Rocky Mountain Mesic Montane Mixed Conifer Forest Rocky Mountain Subalpine Mesic Spruce-Fir Forest and Woodland Northwestern Great Plains - Black Hills Ponderosa Pine Woodland and Savanna Northern Rocky Mountain Dry-Mesic Montane Mixed Conifer Forest Rocky Mountain Foothill Limber Pine - Juniper Woodland Northern Rocky Mountain Foothill Conifer Wooded Steppe Rocky Mountain Lodgepole Pine Forest Middle Rocky Mountain Montane Douglas-Fir Forest and Woodland Northern Rocky Mountain Ponderosa Pine Woodland and Savanna Rocky Mountain Poor Site Lodgepole Pine Forest Rocky Mountain Subalpine Dry-Mesic Spruce-Fir Forest and Woodland Northern Rocky Mountain Subalpin The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Ponds and lakes are examples of what kind of biome? A. standing freshwater biome B. standing liquid biome C. standing marine biome D. standing lake biome Answer:
sciq-2400
multiple_choice
The barrier defenses are not a response to infections, but they are continuously working to protect against a broad range of what?
[ "nutrients", "pathogens", "ecosystems", "mates" ]
B
Relevant Documents: Document 0::: Barrier nursing is a largely archaic term for a set of stringent infection control techniques used in nursing. The aim of barrier nursing is to protect medical staff against infection by patients and also protect patients with highly infectious diseases from spreading their pathogens to other non-infected people. Barrier nursing was created as a means to maximize isolation care. Since it is impossible to isolate a patient from society and medical staff while still providing care, there are often compromises made when it comes to treating infectious patients. Barrier nursing is a method to regulate and minimize the number and severity of compromises being made in isolation care, while also preventing the disease from spreading. History & usage Barrier nursing started off as a term used by the Centre for Disease Control (CDC) to describe early infection control methods in the late 1800s. From the mid-1900s to early 2000s, 15 new terms had emerged and were also being used to describe infection control. The variety of terms that described infection care led to a misunderstanding of practice recommendations and eventual low adherence to isolation precautions; this eventually forced the CDC to combine all 15 terms into one term called isolation. Nowadays barrier nursing is becoming a less commonly used term and is not even recognized by most reputable databases or online scientific journals. Yet when it is seldom used, it relates mostly to circumstantial protocols for situations regarding isolation health care. The lack of constant use of the term is why there are no systematically reviewed articles on the topic and also why most of the sources that include the topic are from the late 1900s. Simple vs strict barrier nursing Simple barrier nursing Simple barrier nursing is used when an infectious agent is suspected within a patient and standard precautions aren't working.
Simple barrier nursing consists of utilizing sterile: gloves, masks, gowns, head-covers an Document 1::: MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States. Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students, the remainder require a subscription. the service is suspended with the message to: "Please check back with us in 2017". External links MicrobeLibrary Microbiology Document 2::: Long-term close-knit interactions between symbiotic microbes and their host can alter host immune system responses to other microorganisms, including pathogens, and are required to maintain proper homeostasis. The immune system is a host defense system consisting of anatomical physical barriers as well as physiological and cellular responses, which protect the host against harmful microorganisms while limiting host responses to harmless symbionts. Humans are home to 1013 to 1014 bacteria, roughly equivalent to the number of human cells, and while these bacteria can be pathogenic to their host most of them are mutually beneficial to both the host and bacteria. The human immune system consists of two main types of immunity: innate and adaptive. The innate immune system is made of non-specific defensive mechanisms against foreign cells inside the host including skin as a physical barrier to entry, activation of the complement cascade to identify foreign bacteria and activate necessary cell responses, and white blood cells that remove foreign substances. 
The adaptive immune system, or acquired immune system, is a pathogen-specific immune response that is carried out by lymphocytes through antigen presentation on MHC molecules to distinguish between self and non-self antigens. Microbes can promote the development of the host's immune system in the gut and skin, and may help to prevent pathogens from invading. Some release anti-inflammatory products, protecting against parasitic gut microbes. Commensals promote the development of B cells that produce a protective antibody, Immunoglobulin A (IgA). This can neutralize pathogens and exotoxins, and promote the development of immune cells and mucosal immune response. However, microbes have been implicated in human diseases including inflammatory bowel disease, obesity, and cancer. General principles Microbial symbiosis relies on interspecies communication. between the host and microbial symbionts. Immunity has been histori Document 3::: Mucosal immunology is the study of immune system responses that occur at mucosal membranes of the intestines, the urogenital tract, and the respiratory system. The mucous membranes are in constant contact with microorganisms, food, and inhaled antigens. In healthy states, the mucosal immune system protects the organism against infectious pathogens and maintains a tolerance towards non-harmful commensal microbes and benign environmental substances. Disruption of this balance between tolerance and deprivation of pathogens can lead to pathological conditions such as food allergies, irritable bowel syndrome, susceptibility to infections, and more. The mucosal immune system consists of a cellular component, humoral immunity, and defense mechanisms that prevent the invasion of microorganisms and harmful foreign substances into the body. These defense mechanisms can be divided into physical barriers (epithelial lining, mucus, cilia function, intestinal peristalsis, etc.) and chemical factors (pH, antimicrobial peptides, etc.). 
Function The mucosal immune system provides three main functions: First line of defense from harmful antigenic structures and infection. Prevents systemic immune responses to commensal bacteria and food antigens. Regulates appropriate immune responses to pathogens. Physical barrier Mucosal barrier integrity physically stops pathogens from entering the body. Barrier function is determined by factors such as age, genetics, types of mucins present on the mucosa, interactions between immune cells, nerves and neuropeptides, and co-infection. Barrier integrity depends on the immunosuppressive mechanisms implemented on the mucosa. The mucosal barrier is formed due to the tight junctions between the epithelial cells of the mucosa and the presence of the mucus on the cell surface. The mucins that form mucus offer protection from components on the mucosa by static shielding and limit the immunogenicity of intestinal antigens by inducing an anti-inflam Document 4::: The innate, or nonspecific, immune system is one of the two main immunity strategies (the other being the adaptive immune system) in vertebrates. The innate immune system is an alternate defense strategy and is the dominant immune system response found in plants, fungi, insects, and primitive multicellular organisms (see Beyond vertebrates). 
The major functions of the innate immune system are to: recruit immune cells to infection sites by producing chemical factors, including chemical mediators called cytokines activate the complement cascade to identify bacteria, activate cells, and promote clearance of antibody complexes or dead cells identify and remove foreign substances present in organs, tissues, blood and lymph, by specialized white blood cells activate the adaptive immune system through antigen presentation act as a physical and chemical barrier to infectious agents; via physical measures such as skin and chemical measures such as clotting factors in blood, which are released following a contusion or other injury that breaks through the first-line physical barrier (not to be confused with a second-line physical or chemical barrier, such as the blood–brain barrier, which protects the nervous system from pathogens that have already gained access to the host). Anatomical barriers Anatomical barriers include physical, chemical and biological barriers. The epithelial surfaces form a physical barrier that is impermeable to most infectious agents, acting as the first line of defense against invading organisms. Desquamation (shedding) of skin epithelium also helps remove bacteria and other infectious agents that have adhered to the epithelial surface. Lack of blood vessels, the inability of the epidermis to retain moisture, and the presence of sebaceous glands in the dermis, produces an environment unsuitable for the survival of microbes. In the gastrointestinal and respiratory tract, movement due to peristalsis or cilia, respectively, helps remove infectious The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The barrier defenses are not a response to infections, but they are continuously working to protect against a broad range of what? A. nutrients B. pathogens C. ecosystems D. mates Answer:
sciq-219
multiple_choice
What is formed when an oxygen atom picks up a pair of hydrogen ions from a solution?
[ "water", "turpentine", "liquid", "ammonia" ]
A
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response.

An example of a conceptual question in undergraduate thermodynamics is provided below:

During adiabatic expansion of an ideal gas, its temperature
- increases
- decreases
- stays the same
- Impossible to tell/need more information

The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Water () is a polar inorganic compound that is at room temperature a tasteless and odorless liquid, which is nearly colorless apart from an inherent hint of blue. It is by far the most studied chemical compound and is described as the "universal solvent" and the "solvent of life". It is the most abundant substance on the surface of Earth and the only common substance to exist as a solid, liquid, and gas on Earth's surface. It is also the third most abundant molecule in the universe (behind molecular hydrogen and carbon monoxide). Water molecules form hydrogen bonds with each other and are strongly polar. This polarity allows it to dissociate ions in salts and bond to other polar substances such as alcohols and acids, thus dissolving them. Its hydrogen bonding causes its many unique properties, such as having a solid form less dense than its liquid form, a relatively high boiling point of 100 °C for its molar mass, and a high heat capacity. Water is amphoteric, meaning that it can exhibit properties of an acid or a base, depending on the pH of the solution that it is in; it readily produces both and ions. Related to its amphoteric character, it undergoes self-ionization. The product of the activities, or approximately, the concentrations of and is a constant, so their respective concentrations are inversely proportional to each other. Physical properties Water is the chemical substance with chemical formula ; one molecule of water has two hydrogen atoms covalently bonded to a single oxygen atom. Water is a tasteless, odorless liquid at ambient temperature and pressure. Liquid water has weak absorption bands at wavelengths of around 750 nm which cause it to appear to have a blue color. This can easily be observed in a water-filled bath or wash-basin whose lining is white. 
Large ice crystals, as in glaciers, also appear blue. Under standard conditions, water is primarily a liquid, unlike other analogous hydrides of the oxygen family, which are generally gaseou Document 2::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. 
Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 3::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. 
The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with Document 4::: The self-ionization of water (also autoionization of water, and autodissociation of water, or simply dissociation of water) is an ionization reaction in pure water or in an aqueous solution, in which a water molecule, H2O, deprotonates (loses the nucleus of one of its hydrogen atoms) to become a hydroxide ion, OH−. The hydrogen nucleus, H+, immediately protonates another water molecule to form a hydronium cation, H3O+. It is an example of autoprotolysis, and exemplifies the amphoteric nature of water. History and notation The self-ionization of water was first proposed in 1884 by Svante Arrhenius as part of the theory of ionic dissociation which he proposed to explain the conductivity of electrolytes including water. Arrhenius wrote the self-ionization as H2O <=> H+ + OH-. At that time, nothing was yet known of atomic structure or subatomic particles, so he had no reason to consider the formation of an H+ ion from a hydrogen atom on electrolysis as any less likely than, say, the formation of a Na+ ion from a sodium atom. In 1923 Johannes Nicolaus Brønsted and Martin Lowry proposed that the self-ionization of water actually involves two water molecules: H2O + H2O <=> H3O+ + OH-. By this time the electron and the nucleus had been discovered and Rutherford had shown that a nucleus is very much smaller than an atom. This would include a bare ion H+ which would correspond to a proton with zero electrons. Brønsted and Lowry proposed that this ion does not exist free in solution, but always attaches itself to a water (or other solvent) molecule to form the hydronium ion H3O+ (or other protonated solvent). 
Later spectroscopic evidence has shown that many protons are actually hydrated by more than one water molecule. The most descriptive notation for the hydrated ion is H+(aq), where aq (for aqueous) indicates an indefinite or variable number of water molecules. However the notations H+ and H3O+ are still also used extensively because of their historical importance. Thi

The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is formed when an oxygen atom picks up a pair of hydrogen ions from a solution?
A. water
B. turpentine
C. liquid
D. ammonia
Answer:
sciq-9271
multiple_choice
What are the two main types of diabetes?
[ "type 1, type 2", "type 0, 1", "type a, b", "type 3, 4" ]
A
Relevant Documents: Document 0::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99th percentile) and 320 (1st percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.

Content specification

Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum.
A sampling of test item content is given below:

Biochemistry (36%)
A. Chemical and Physical Foundations: Thermodynamics and kinetics; Redox states; Water, pH, acid-base reactions and buffers; Solutions and equilibria; Solute-solvent interactions; Chemical interactions and bonding; Chemical reaction mechanisms
B. Structural Biology: Structure, Assembly, Organization and Dynamics: Small molecules; Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids); Supramolecular complexes (e.g. Document 1::: Single Best Answer (SBA or One Best Answer) is a written examination form of multiple choice questions used extensively in medical education. Structure A single question is posed with typically five alternate answers, from which the candidate must choose the best answer. This method avoids the problems of past examinations of a similar form described as Single Correct Answer. The older form can produce confusion where more than one of the possible answers has some validity. The newer form makes it explicit that more than one answer may have elements that are correct, but that one answer will be superior. Prior to the widespread introduction of SBAs into medical education, the typical form of examination was true-false multiple choice questions. But during the 2000s, educators found that SBAs would be superior. Document 2::: Progress tests are longitudinal, feedback oriented educational assessment tools for the evaluation of development and sustainability of cognitive knowledge during a learning process. A progress test is a written knowledge exam (usually involving multiple choice questions) that is usually administered to all students in the program at the same time and at regular intervals (usually twice to four times yearly) throughout the entire academic program. The test samples the complete knowledge domain expected of new graduates upon completion of their courses (regardless of the year level of the student).
The differences between students’ knowledge levels show in the test scores; the further a student has progressed in the curriculum the higher the scores. As a result, these resultant scores provide a longitudinal, repeated measures, curriculum-independent assessment of the objectives (in knowledge) of the entire programme. History Since its inception in the late 1970s at both Maastricht University and the University of Missouri–Kansas City independently, the progress test of applied knowledge has been increasingly used in medical and health sciences programs across the globe. They are well established and increasingly used in medical education in both undergraduate and postgraduate medical education. They are used formatively and summatively. Use in academic programs The progress test is currently used by national progress test consortia in the United Kingdom, Italy, The Netherlands, in Germany (including Austria), and in individual schools in Africa, Saudi Arabia, South East Asia, the Caribbean, Australia, New Zealand, Sweden, Finland, UK, and the USA. The National Board of Medical Examiners in the USA also provides progress testing in various countries The feasibility of an international approach to progress testing has been recently acknowledged and was first demonstrated by Albano et al. in 1996, who compared test scores across German, Dutch and Italian medi Document 3::: Advanced Placement (AP) Calculus (also known as AP Calc, Calc AB / Calc BC or simply AB / BC) is a set of two distinct Advanced Placement calculus courses and exams offered by the American nonprofit organization College Board. AP Calculus AB covers basic introductions to limits, derivatives, and integrals. AP Calculus BC covers all AP Calculus AB topics plus additional topics (including integration by parts, Taylor series, parametric equations, vector calculus, and polar coordinate functions). AP Calculus AB AP Calculus AB is an Advanced Placement calculus course. 
It is traditionally taken after precalculus and is the first calculus course offered at most schools except for possibly a regular calculus class. The Pre-Advanced Placement pathway for math helps prepare students for further Advanced Placement classes and exams. Purpose According to the College Board: Topic outline The material includes the study and application of differentiation and integration, and graphical analysis including limits, asymptotes, and continuity. An AP Calculus AB course is typically equivalent to one semester of college calculus. Analysis of graphs (predicting and explaining behavior) Limits of functions (one and two sided) Asymptotic and unbounded behavior Continuity Derivatives Concept At a point As a function Applications Higher order derivatives Techniques Integrals Interpretations Properties Applications Techniques Numerical approximations Fundamental theorem of calculus Antidifferentiation L'Hôpital's rule Separable differential equations AP Calculus BC AP Calculus BC is equivalent to a full year regular college course, covering both Calculus I and II. After passing the exam, students may move on to Calculus III (Multivariable Calculus). Purpose According to the College Board, Topic outline AP Calculus BC includes all of the topics covered in AP Calculus AB, as well as the following: Convergence tests for series Taylor series Parametric equations Polar functions (inclu Document 4::: Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices". This course is designed for students who wish to pursue an interest in the life sciences. 
The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown by AP Finals grade distributions.

Topic outline

The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area: The course is based on and tests six skills, called scientific practices which include: In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.

Exam

Students are allowed to use a four-function, scientific, or graphing calculator. The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score.

Score distribution

Commonly used textbooks
- Biology, AP Edition by Sylvia Mader (2012, hardcover )
- Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, )
- Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson )

See also
- Glossary of biology
- A.P Bio (TV Show)

The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are the two main types of diabetes?
A. type 1, type 2
B. type 0, 1
C. type a, b
D. type 3, 4
Answer:
sciq-4916
multiple_choice
The study of energy and energy transfer involving physical matter is what?
[ "geology", "thermodynamics", "nuclear energy", "biochemistry" ]
B
Relevant Documents: Document 0::: Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work.

Description

Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments. The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics. Specialized branches include engineering optimization and engineering statistics.
Engineering mathematics in tertiary educ Document 1::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. 
Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 2::: Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered in both undergraduate as well postgraduate with specializations. Academic courses Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. 
Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism. Example universities with CSE majors and departments APJ Abdul Kalam Technological University American International University-B Document 3::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. 
An example of a conceptual question in undergraduate thermodynamics is provided below:

During adiabatic expansion of an ideal gas, its temperature
- increases
- decreases
- stays the same
- Impossible to tell/need more information

The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 4::: The mathematical sciences are a group of areas of study that includes, in addition to mathematics, those academic disciplines that are primarily mathematical in nature but may not be universally considered subfields of mathematics proper. Statistics, for example, is mathematical in its methods but grew out of bureaucratic and scientific observations, which merged with inverse probability and then grew through applications in some areas of physics, biometrics, and the social sciences to become its own separate, though closely allied, field. Theoretical astronomy, theoretical physics, theoretical and applied mechanics, continuum mechanics, mathematical chemistry, actuarial science, computer science, computational science, data science, operations research, quantitative biology, control theory, econometrics, geophysics and mathematical geosciences are likewise other fields often considered part of the mathematical sciences. Some institutions offer degrees in mathematical sciences (e.g. the United States Military Academy, Stanford University, and University of Khartoum) or applied mathematical sciences (for example, the University of Rhode Island).
See also

The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The study of energy and energy transfer involving physical matter is what?
A. geology
B. thermodynamics
C. nuclear energy
D. biochemistry
Answer:
sciq-1405
multiple_choice
What is a purification process where the components of a liquid mixture are vaporized and then condensed and isolated?
[ "sterilization", "distillation", "dispersion", "conduction" ]
B
Relevant Documents: Document 0::: Liquid–liquid extraction (LLE), also known as solvent extraction and partitioning, is a method to separate compounds or metal complexes, based on their relative solubilities in two different immiscible liquids, usually water (polar) and an organic solvent (non-polar). There is a net transfer of one or more species from one liquid into another liquid phase, generally from aqueous to organic. The transfer is driven by chemical potential, i.e. once the transfer is complete, the overall system of chemical components that make up the solutes and the solvents are in a more stable configuration (lower free energy). The solvent that is enriched in solute(s) is called extract. The feed solution that is depleted in solute(s) is called the raffinate. LLE is a basic technique in chemical laboratories, where it is performed using a variety of apparatus, from separatory funnels to countercurrent distribution equipment called mixer settlers. This type of process is commonly performed after a chemical reaction as part of the work-up, often including an acidic work-up. The term partitioning is commonly used to refer to the underlying chemical and physical processes involved in liquid–liquid extraction, but on another reading may be fully synonymous with it. The term solvent extraction can also refer to the separation of a substance from a mixture by preferentially dissolving that substance in a suitable solvent. In that case, a soluble compound is separated from an insoluble compound or a complex matrix. From a hydrometallurgical perspective, solvent extraction is exclusively used in separation and purification of uranium and plutonium, zirconium and hafnium, separation of cobalt and nickel, separation and purification of rare earth elements etc., its greatest advantage being its ability to selectively separate out even very similar metals.
One obtains high-purity single metal streams on 'stripping' out the metal value from the 'loaded' organic wherein one can precipitate or de Document 1::: Decoction is a method of extraction by boiling herbal or plant material (which may include stems, roots, bark and rhizomes) to dissolve the chemicals of the material. It is the most common preparation method in various herbal-medicine systems. Decoction involves first drying the plant material; then mashing, slicing, or cutting the material to allow for maximum dissolution; and finally boiling in water to extract oils, volatile organic compounds and other various chemical substances. Occasionally, aqueous ethanol or glycerol may be used instead of water. Decoction can be used to make tisanes, tinctures and similar solutions. Decoctions and infusions may produce liquids with differing chemical properties, as the temperature or preparation difference may result in more oil-soluble chemicals in decoctions versus infusions. The process can also be applied to meats and vegetables to prepare bouillon or stock, though the term is typically only used to describe boiled plant extracts, usually for medicinal or scientific purposes. Decoction is also the name for the resulting liquid. Although this method of extraction differs from infusion and percolation, the resultant liquids can sometimes be similar in their effects, or general appearance and taste. Etymology The term dates back to 1350–1400 from the past participle stem of Latin (meaning "to boil down"), from ("from") + ("to cook"). Use In brewing, decoction mashing is the traditional method where a portion of the mash is removed to a separate vessel, boiled for a time and then returned to the main mash, raising the mash to the next temperature step. In herbalism, decoctions are usually made to extract fluids from hard plant materials such as roots and bark. To achieve this, the plant material is usually boiled for 1–2 hours in 1-5 liters of water. It is then strained. 
Ayurveda also utilizes this method to create Kashayam-type herbal medicines. For teas, decoction involves boiling the same amount of the herb and w Document 2::: In chemistry, recrystallization is a technique used to purify chemicals. By dissolving a mixture of a compound and impurities in an appropriate solvent, either the desired compound or impurities can be removed from the solution, leaving the other behind. It is named for the crystals often formed when the compound precipitates out. Alternatively, recrystallization can refer to the natural growth of larger ice crystals at the expense of smaller ones. Chemistry In chemistry, recrystallization is a procedure for purifying compounds. The most typical situation is that a desired "compound A" is contaminated by a small amount of "impurity B". There are various methods of purification that may be attempted (see Separation process), recrystallization being one of them. There are also different recrystallization techniques that can be used such as: Single-solvent recrystallization Typically, the mixture of "compound A" and "impurity B" is dissolved in the smallest amount of hot solvent to fully dissolve the mixture, thus making a saturated solution. The solution is then allowed to cool. As the solution cools the solubility of compounds in the solution drops. This results in the desired compound dropping (recrystallizing) from the solution. The slower the rate of cooling, the bigger the crystals form. In an ideal situation the solubility product of the impurity, B, is not exceeded at any temperature. In that case, the solid crystals will consist of pure A and all the impurities will remain in the solution. The solid crystals are collected by filtration and the filtrate is discarded. If the solubility product of the impurity is exceeded, some of the impurities will co-precipitate. 
However, because of the relatively low concentration of the impurity, its concentration in the precipitated crystals will be less than its concentration in the original solid. Repeated recrystallization will result in an even purer crystalline precipitate. The purity is checked after each recrysta Document 3::: A sublimatory or sublimation apparatus is equipment, commonly laboratory glassware, for purification of compounds by selective sublimation. In principle, the operation resembles purification by distillation, except that the products do not pass through a liquid phase. Overview A typical sublimation apparatus separates a mix of appropriate solid materials in a vessel in which it applies heat under a controllable atmosphere (air, vacuum or inert gas). If the material is not at first solid, then it may freeze under reduced pressure. Conditions are so chosen that the solid volatilizes and condenses as a purified compound on a cooled surface, leaving the non-volatile residual impurities or solid products behind. The form of the cooled surface often is a so-called cold finger which for very low-temperature sublimation may actually be cryogenically cooled. If the operation is a batch process, then the sublimed material can be collected from the cooled surface once heating ceases and the vacuum is released. Although this may be quite convenient for small quantities, adapting sublimation processes to large volume is generally not practical with the apparatus becoming extremely large and generally needing to be disassembled to recover products and remove residue. Among the advantages of applying the principle to certain materials are the comparatively low working temperatures, reduced exposure to gases such as oxygen that might harm certain products, and the ease with which it can be performed on extremely small quantities. 
The same apparatus may also be used for conventional distillation of extremely small quantities due to the very small volume and surface area between evaporating and condensing regions, although this is generally only useful if the cold finger can be cold enough to solidify the condensate. Temperature gradient More sophisticated variants of sublimation apparatus include those that apply a temperature gradient so as to allow for controlled recrystall Document 4::: A heteroazeotrope is an azeotrope where the vapour phase coexists with two liquid phases. Sketch of a T-x/y equilibrium curve of a typical heteroazeotropic mixture Examples of heteroazeotropes Benzene - Water NBP 69.2 °C Dichloromethane - Water NBP 38.5 °C n-Butanol - Water NBP 93.5 °C Toluene - Water NBP 82 °C Continuous heteroazeotropic distillation Heterogeneous distillation means that during the distillation the liquid phase of the mixture is immiscible. In this case on the plates can be two liquid phases and the top vapour condensate splits in two liquid phases, which can be separated in a decanter. The simplest case of continuous heteroazeotropic distillation is the separation of a binary heterogeneous azeotropic mixture. In this case the system contains two columns and a decanter. The fresh feed (A-B) is added into the first column. (The feed may also be added into the decanter directly or into the second column depending on the composition of the mixture). From the decanter the A-rich phase is withdrawn as reflux into the first column while the B-rich phase is withdrawn as reflux into the second column. This mean the first column produces "A" and the second column produces "B" as a bottoms product. In industry the butanol-water mixture is separated with this technique. At the previous case the binary system forms already a heterogeneous azeotrope. The other application of the heteroazeotropic distillation is the separation of a binary system (A-B) forming a homogeneous azeotrope. 
In this case an entrainer or solvent is added to the mixture to form a heteroazeotrope with one or both of the components, which helps separate the original A-B mixture. Batch heteroazeotropic distillation Batch heteroazeotropic distillation is an efficient method for the separation of azeotropic and low relative volatility (low α) mixtures. A third component (entrainer, E) is added to the binary A-B mixture, which makes the separation of A and B poss The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is a purification process where the components of a liquid mixture are vaporized and then condensed and isolated? A. sterilization B. distillation C. dispersion D. conduction Answer:
sciq-7545
multiple_choice
Which nervous system consists of all the nervous tissue that lies outside the central nervous system?
[ "auxiliary nervous system", "function nervous system", "peripheral nervous system", "significant nervous system" ]
C
Relevant Documents: Document 0::: The following diagram is provided as an overview of and topical guide to the human nervous system: Human nervous system – the part of the human body that coordinates a person's voluntary and involuntary actions and transmits signals between different parts of the body. The human nervous system consists of two main parts: the central nervous system (CNS) and the peripheral nervous system (PNS). The CNS contains the brain and spinal cord. The PNS consists mainly of nerves, which are long fibers that connect the CNS to every other part of the body. The PNS includes motor neurons, mediating voluntary movement; the autonomic nervous system, comprising the sympathetic nervous system and the parasympathetic nervous system and regulating involuntary functions; and the enteric nervous system, a semi-independent part of the nervous system whose function is to control the gastrointestinal system. Evolution of the human nervous system Evolution of nervous systems Evolution of human intelligence Evolution of the human brain Paleoneurology Some branches of science that study the human nervous system Neuroscience Neurology Paleoneurology Central nervous system The central nervous system (CNS) is the largest part of the nervous system and includes the brain and spinal cord. Spinal cord Brain Brain – center of the nervous system. Outline of the human brain List of regions of the human brain Principal regions of the vertebrate brain: Peripheral nervous system Peripheral nervous system (PNS) – nervous system structures that do not lie within the CNS. Sensory system A sensory system is a part of the nervous system responsible for processing sensory information. A sensory system consists of sensory receptors, neural pathways, and parts of the brain involved in sensory perception.
List of sensory systems Sensory neuron Perception Visual system Auditory system Somatosensory system Vestibular system Olfactory system Taste Pain Components of the nervous system Neuron I Document 1::: Cutaneous innervation refers to an area of the skin which is supplied by a specific cutaneous nerve. Dermatomes are similar; however, a dermatome only specifies the area served by a spinal nerve. In some cases, the dermatome is less specific (when a spinal nerve is the source for more than one cutaneous nerve), and in other cases it is more specific (when a cutaneous nerve is derived from multiple spinal nerves.) Modern texts are in agreement about which areas of the skin are served by which nerves, but there are minor variations in some of the details. The borders designated by the diagrams in the 1918 edition of Gray's Anatomy are similar, but not identical, to those generally accepted today. Importance of the peripheral nervous system The peripheral nervous system (PNS) is divided into the somatic nervous system, the autonomic nervous system, and the enteric nervous system. However, it is the somatic nervous system, responsible for body movement and the reception of external stimuli, which allows one to understand how cutaneous innervation is made possible by the action of specific sensory fibers located on the skin, as well as the distinct pathways they take to the central nervous system. The skin, which is part of the integumentary system, plays an important role in the somatic nervous system because it contains a range of nerve endings that react to heat and cold, touch, pressure, vibration, and tissue injury. Importance of the central nervous system The central nervous system (CNS) works with the peripheral nervous system in cutaneous innervation. 
The CNS is responsible for processing the information it receives from the cutaneous nerves that detect a given stimulus, and then identifying the kind of sensory inputs which project to a specific region of the primary somatosensory cortex. The role of nerve endings on the surface of the skin Groups of nerve terminals located in the different layers of the skin are categorized depending on whether the skin Document 2::: The ovarian cortex is the outer portion of the ovary. The ovarian follicles are located within the ovarian cortex. The ovarian cortex is made up of connective tissue. Ovarian cortex tissue transplant has been performed to treat infertility. Document 3::: H2.00.04.4.01001: Lymphoid tissue H2.00.05.0.00001: Muscle tissue H2.00.05.1.00001: Smooth muscle tissue H2.00.05.2.00001: Striated muscle tissue H2.00.06.0.00001: Nerve tissue H2.00.06.1.00001: Neuron H2.00.06.2.00001: Synapse H2.00.06.2.00001: Neuroglia h3.01: Bones h3.02: Joints h3.03: Muscles h3.04: Alimentary system h3.05: Respiratory system h3.06: Urinary system h3.07: Genital system h3.08: Document 4::: Body reactivity is usually understood as the body's ability to react in a proper way to influence the environment. Resistance of an organism is its stability under the influence of pathogenic factors. The body reactivity can range from homeostasis to a fight or flight response. Ultimately, they are all governed by the nervous system. Nervous system divisions The central nervous system (CNS) consists of parts that are encased by the bones of the skull and spinal column: the brain and spinal cord. The peripheral nervous system (PNS) is found outside those bones and consists of the nerves and most of the sensory organs. Central nervous system The CNS can be divided into the brain and spinal cord. The CNS processes many different kinds of incoming sensory information. It is also the source of thoughts, emotions, and memories. 
Most signals that stimulate muscles to contract and glands to secrete originate in the CNS. The spinal cord and spinal nerves contribute to homeostasis by providing quick reflexive responses to many stimuli. The spinal cord is the pathway for sensory input to the brain and motor output from the brain. The brain is responsible for integrating most sensory information and coordinating body function, both consciously and unconsciously. Peripheral nervous system The PNS can be divided into the autonomic and somatic nervous system. The autonomic nervous system can be divided into the parasympathetic, sympathetic, and enteric nervous system. The sympathetic nervous system regulates the “fight or flight” responses. The parasympathetic nervous system regulates the “rest and digest” responses. The enteric nervous system innervates the viscera (gastrointestinal tract, pancreas, and gall bladder). The somatic nervous system consists of peripheral nerve fibers that send sensory information to the central nervous system and motor nerve fibers that project to skeletal muscle. The somatic nervous system engages in voluntary reactions, and the autonomic nervous The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which nervous system consists of all the nervous tissue that lies outside the central nervous system? A. auxiliary nervous system B. function nervous system C. peripheral nervous system D. significant nervous system Answer:
sciq-2068
multiple_choice
What do you call the zone in a body of water where there is too little sunlight for photosynthesis?
[ "observable zone", "semimetal zone", "Dark Zone", "aphotic zone" ]
D
Relevant Documents: Document 0::: The oceanic zone is typically defined as the area of the ocean lying beyond the continental shelf (e.g. the neritic zone), but operationally is often referred to as beginning where the water depths drop to below , seaward from the coast into the open ocean with its pelagic zone. It is the region of open sea beyond the edge of the continental shelf and includes 65% of the ocean's completely open water. The oceanic zone has a wide array of undersea terrain, including trenches that are often deeper than Mount Everest is tall, as well as deep-sea volcanoes and basins. While it is often difficult for life to sustain itself in this type of environment, many species have adapted and do thrive in the oceanic zone. The open ocean is vertically divided into four zones: the sunlight zone, twilight zone, midnight zone, and abyssal zone. Sub zones The Mesopelagic (disphotic) zone, which is where only small amounts of light penetrate, lies below the Epipelagic zone. This zone is often referred to as the "Twilight Zone" due to its scarce amount of light. Temperatures in the Mesopelagic zone range from . The pressure is higher here; it increases with depth. 54% of the ocean lies in the Bathypelagic (aphotic) zone into which no light penetrates. This is also called the midnight zone and the deep ocean. Due to the complete lack of sunlight, photosynthesis cannot occur and the only light source is bioluminescence. Water pressure is very intense and the temperatures are near freezing (range ). Marine life Oceanographers have divided the ocean into zones based on how far light reaches. All of the light zones can be found in the oceanic zone. The epipelagic zone is the one closest to the surface and is the best lit. It extends to 100 meters and contains both phytoplankton and zooplankton that can support larger organisms like marine mammals and some types of fish.
Past 100 meters, not enough light penetrates the water to support life, and no plant life exists. T Document 1::: The aphotic zone (aphotic, from the Greek privative prefix a-, "without light") is the portion of a lake or ocean where there is little or no sunlight. It is formally defined as the depths beyond which less than 1 percent of sunlight penetrates. Above the aphotic zone is the photic zone, which consists of the euphotic zone and the disphotic zone. The euphotic zone is the layer of water in which there is enough light for net photosynthesis to occur. The disphotic zone, also known as the twilight zone, is the layer of water with enough light for predators to see but not enough for the rate of photosynthesis to be greater than the rate of respiration. The depth at which less than one percent of sunlight reaches begins the aphotic zone. While most of the ocean's biomass lives in the photic zone, the majority of the ocean's water lies in the aphotic zone. Bioluminescence is more abundant than sunlight in this zone. Most food in this zone comes from dead organisms sinking to the bottom of the lake or ocean from overlying waters. The depth of the aphotic zone can be greatly affected by such things as turbidity and the season of the year. The aphotic zone underlies the photic zone, which is that portion of a lake or ocean directly affected by sunlight. The Dark Ocean In the ocean, the aphotic zone is sometimes referred to as the dark ocean. Depending on how it is defined, the aphotic zone of the ocean begins a few hundred meters below the surface and extends to the ocean floor. The majority of the ocean is aphotic; the deepest part of the sea is the Challenger Deep in the Mariana Trench. The depth at which the aphotic zone begins in the ocean depends on many factors. In clear, tropical water sunlight can penetrate deeper and so the aphotic zone starts at greater depths.
Around the poles, the angle of the sunlight means it does not penetrate as deeply so the aphotic zone is shallower. If the water is turbid, suspended materi Document 2::: The littoral zone, also called litoral or nearshore, is the part of a sea, lake, or river that is close to the shore. In coastal ecology, the littoral zone includes the intertidal zone extending from the high water mark (which is rarely inundated), to coastal areas that are permanently submerged — known as the foreshore — and the terms are often used interchangeably. However, the geographical meaning of littoral zone extends well beyond the intertidal zone to include all neritic waters within the bounds of continental shelves. Etymology The word littoral may be used both as a noun and as an adjective. It derives from the Latin noun litus, litoris, meaning "shore". (The doubled t is a late-medieval innovation, and the word is sometimes seen in the more classical-looking spelling litoral.) Description The term has no single definition. What is regarded as the full extent of the littoral zone, and the way the littoral zone is divided into subregions, varies in different contexts. For lakes, the littoral zone is the nearshore habitat where photosynthetically active radiation penetrates to the lake bottom in sufficient quantities to support photosynthesis. The use of the term also varies from one part of the world to another, and between different disciplines. For example, military commanders speak of the littoral in ways that are quite different from the definition used by marine biologists. The adjacency of water gives a number of distinctive characteristics to littoral regions. The erosive power of water results in particular types of landforms, such as sand dunes, and estuaries. The natural movement of the littoral along the coast is called the littoral drift. 
Biologically, the ready availability of water enables a greater variety of plant and animal life, and particularly the formation of extensive wetlands. In addition, the additional local humidity due to evaporation usually creates a microclimate supporting unique types of organisms. In oceanography and marin Document 3::: The ocean (also known as the sea or the world ocean) is a body of salt water that covers approximately 70.8% of the Earth and contains 97% of Earth's water. The term ocean also refers to any of the large bodies of water into which the world ocean is conventionally divided. Distinct names are used to identify five different areas of the ocean: Pacific (the largest), Atlantic, Indian, Antarctic/Southern, and Arctic (the smallest). Seawater covers approximately of the planet. The ocean is the primary component of the Earth's hydrosphere, and thus essential to life on Earth. The ocean influences climate and weather patterns, the carbon cycle, and the water cycle by acting as a huge heat reservoir. Oceanographers split the ocean into vertical and horizontal zones based on physical and biological conditions. The pelagic zone is the open ocean's water column from the surface to the ocean floor. The water column is further divided into zones based on depth and the amount of light present. The photic zone starts at the surface and is defined to be "the depth at which light intensity is only 1% of the surface value" (approximately 200 m in the open ocean). This is the zone where photosynthesis can occur. In this process plants and microscopic algae (free floating phytoplankton) use light, water, carbon dioxide, and nutrients to produce organic matter. As a result, the photic zone is the most biodiverse and the source of the food supply which sustains most of the ocean ecosystem. Ocean photosynthesis also produces half of the oxygen in the earth's atmosphere. 
Light can only penetrate a few hundred more meters; the rest of the deeper ocean is cold and dark (these zones are called mesopelagic and aphotic zones). The continental shelf is where the ocean meets dry land. It is more shallow, with a depth of a few hundred meters or less. Human activity often has negative impacts on the ecosystems within the continental shelf. Ocean temperatures depend on the amount of solar radia Document 4::: Dead zones are hypoxic (low-oxygen) areas in the world's oceans and large lakes. Hypoxia occurs when dissolved oxygen (DO) concentration falls to or below 2 mg of O2/liter. When a body of water experiences hypoxic conditions, aquatic flora and fauna begin to change behavior in order to reach sections of water with higher oxygen levels. Once DO declines below 0.5 ml O2/liter in a body of water, mass mortality occurs. With such a low concentration of DO, these bodies of water fail to support the aquatic life living there. Historically, many of these sites were naturally occurring. However, in the 1970s, oceanographers began noting increased instances and expanses of dead zones. These occur near inhabited coastlines, where aquatic life is most concentrated. Coastal regions, such as the Baltic Sea, the northern Gulf of Mexico, and the Chesapeake Bay, as well as large enclosed water bodies like Lake Erie, have been affected by deoxygenation due to eutrophication. Excess nutrients are input into these systems by rivers, ultimately from urban and agricultural runoff and exacerbated by deforestation. These nutrients lead to high productivity that produces organic material that sinks to the bottom and is respired. The respiration of that organic material uses up the oxygen and causes hypoxia or anoxia. The UN Environment Programme reported 146 dead zones in 2004 in the world's oceans where marine life could not be supported due to depleted oxygen levels. 
Some of these were as small as a square kilometer (0.4 mi2), but the largest dead zone covered 70,000 square kilometers (27,000 mi2). A 2008 study counted 405 dead zones worldwide. Causes Aquatic and marine dead zones can be caused by an increase in nutrients (particularly nitrogen and phosphorus) in the water, known as eutrophication. These nutrients are the fundamental building blocks of single-celled, plant-like organisms that live in the water column, and whose growth is limited in part by the availability of these The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do you call the zone in a body of water where there is too little sunlight for photosynthesis? A. observable zone B. semimetal zone C. Dark Zone D. aphotic zone Answer:
sciq-8149
multiple_choice
What kind of cleavage do mammalian eggs exhibit?
[ "spicule", "holoblastic", "cocklebur", "Meroblastic" ]
B
Relevant Documents: Document 0::: In biology, a blastomere is a type of cell produced by cell division (cleavage) of the zygote after fertilization; blastomeres are an essential part of blastula formation, and blastocyst formation in mammals. Human blastomere characteristics In humans, blastomere formation begins immediately following fertilization and continues through the first week of embryonic development. About 90 minutes after fertilization, the zygote divides into two cells. The two-cell blastomere state, present after the zygote first divides, is considered the earliest mitotic product of the fertilized oocyte. These mitotic divisions continue and result in a grouping of cells called blastomeres. During this process, the total size of the embryo does not increase, so each division results in smaller and smaller cells. When the zygote contains 16 to 32 blastomeres it is referred to as a morula. These are the preliminary stages in the embryo beginning to form. Once this begins, microtubules within the morula's cytosolic material in the blastomere cells can develop into important membrane functions, such as sodium pumps. These pumps allow the inside of the embryo to fill with blastocoelic fluid, which supports the further growth of life. The blastomere is considered totipotent; that is, blastomeres are capable of developing from a single cell into a fully fertile adult organism. This has been demonstrated through studies and conjectures made with mouse blastomeres, which have been accepted as true for most mammalian blastomeres as well. Studies have analyzed monozygotic twin mouse blastomeres in their two-cell state, and have found that when one of the twin blastomeres is destroyed, a fully fertile adult mouse can still develop. Thus, it can be assumed that since one of the twin cells was totipotent, the destroyed one originally was as well.
Relative blastomere size within the embryo is dependent not only on the stage of the cleavage, but also on the regularity of the cleavage amongst t Document 1::: The blastodisc, also called the germinal disc, is the embryo-forming part on the yolk of the egg of an animal that undergoes discoidal meroblastic cleavage. Discoidal cleavage occurs in those animals with a large proportion of yolk in their eggs, and include insects, fish, reptiles and birds. The blastodisc is a small disc of cytoplasm that sits on top of the yolk. In birds it is a small, circular, white spot (approximately 1.5-3 mm across) on the surface of the yellow yolk of an egg, at the animal pole. Document 2::: A teloblast is a large cell in the embryos of clitellate annelids which asymmetrically divide to form many smaller cells known as blast cells. These blast cells further proliferate and differentiate to form the segmental tissues of the annelid. Teloblasts are well studied in leeches, though they are also present in the other major class of clitellates: the oligochaetes. Developmental role and morphology All teloblasts are specified from the D quadrant macromere after the second round of divisions post-fertilization. There are five pairs of teloblasts, one on each side of the embryo. Four of the teloblasts (N, O, P, and Q) give rise to ectodermal tissue and one pair (M) gives rise to mesodermal tissue. The column of blast cells arising out of each teloblast is known as a bandlet. All five bandlets coalesce into one germinal band on each side of the embryo, extending out from the teloblast towards the head (in the rostral direction). The teloblasts are located at the rear of the embryo. Teloblasts have two separate cytoplasmic domains: the teloplasm and the vitelloplasm. The teloplasm contains the nucleus, ribosomes, mitochondria, and other subcellular organelles. The vitelloplasm contains mostly yolk platelets. Only the teloplasm gets passed onto the daughter stem cells after cell division. 
O/P specification The O and P teloblasts are specified from two separate but identical precursors, which form an equivalence group. These two precursor cells are termed O/P cells for their ability to become either O or P teloblasts. Signals from the surrounding cells act to specify which fate the teloblasts and their progeny take on. Interactions with the q bandlet, however transient, can induce the p fate in the adjacent o/p bandlet. The M bandlet has been shown to In some species (e.g. Helobdella triserialis), the provisional epithelium covering the cells plays a role in inducing the O fate. In the absence of cell-cell interactions, the O/P precursors will become O tel Document 3::: Chickens (Gallus gallus domesticus) and their eggs have been used extensively as research models throughout the history of biology. Today they continue to serve as an important model for normal human biology as well as pathological disease processes. History Chicken embryos as a research model Human fascination with the chicken and its egg is so deeply rooted in history that it is hard to say exactly when avian exploration began. As early as 1400 BCE, ancient Egyptians artificially incubated chicken eggs to propagate their food supply. The developing chicken in the egg first appears in written history after catching the attention of the famous Greek philosopher, Aristotle, around 350 BCE. As Aristotle opened chicken eggs at various time points of incubation, he noted how the organism changed over time. Through his writing of Historia Animalium, he introduced some of the earliest studies of embryology based on his observations of the chicken in the egg. Aristotle recognized significant similarities between human and chicken development. From his studies of the developing chick, he was able to correctly decipher the role of the placenta and umbilical cord in the human. Chick research of the 16th century significantly modernized ideas about human physiology.
European scientists, including Ulisse Aldrovandi, Volcher Cotier and William Harvey, used the chick to demonstrate tissue differentiation, disproving the widely held belief of the time that organisms are "preformed" in their adult version and only grow larger during development. Distinct tissue areas were recognized that grew and gave rise to specific structures, including the blastoderm, or chick origin. Harvey also closely watched the development of the heart and blood and was the first to note the directional flow of blood between veins and arteries. The relatively large size of the chick as a model organism allowed scientists during this time to make these significant observations without the hel Document 4::: Embryonated, unembryonated and de-embryonated are terms generally used in reference to eggs or, in botany, to seeds. The words are often used as professional jargon rather than as universally applicable terms or concepts. Examples of relevant fields in which the words are useful include reproductive biology, virology, microbiology, parasitology, entomology, and poultry husbandry. Since the words are widely used in the various disciplines, there seems to be little present prospect of replacing them with universal, definitive, and distinct terms. Meaning The terms embryonated, unembryonated and de-embryonated respectively mean "having an embryo", "not having an embryo", and "having lost an embryo", and they most often refer to eggs. In Merriam-Webster the earliest known use of the term "embryonated" dates from 1687, while Oxford gives a reference dating from 1669. Embryonate The term embryonate can be used as an adjective to mean embryonated, or as a noun to mean one containing an embryo (e.g. "We selected only the embryonates and discarded the rest"). Embryonate can also be used as an intransitive verb meaning to develop an embryo (e.g. "In 2-4 weeks after deposition in soil, they embryonate if the soil conditions are suitable"). 
De-embryonate De-embryonate refers to the removal of embryos from seeds or similar reproductive units, typically in physiological studies. As with embryonate, it can be a verb, noun, or adjective. In some contexts the term "embryonectomy" may be used. For example, loss of the embryo may result from the activity of seed predation by insects. Usage There often is confusion in applying the term to various classes of unfertilised eggs and trophic eggs, depending on the area of expertise. Virology In virology, eggs of domestic poultry are used for culturing viruses for research purposes. Viruses generally can propagate only in live cells, so only a fertilised egg with a good supply of growing embryonic tissue is useful. Practitioners The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What kind of cleavage do mammalian eggs exhibit? A. spicule B. holoblastic C. cocklebur D. Meroblastic Answer:
sciq-8081
multiple_choice
Evolutionary adaptation is evidenced by different shapes of what structures in birds with different food preferences?
[ "Stomach", "beaks", "necks", "claws" ]
B
Relevant Documents: Document 0::: Tinbergen's four questions, named after 20th century biologist Nikolaas Tinbergen, are complementary categories of explanations for animal behaviour. These are also commonly referred to as levels of analysis. The framework suggests that an integrative understanding of behaviour must include the ultimate (evolutionary) explanations, in particular: behavioural adaptive functions and phylogenetic history; and the proximate explanations: underlying physiological mechanisms and ontogenetic/developmental history. Four categories of questions and explanations When asked about the purpose of sight in humans and animals, even elementary-school children can answer that animals have vision to help them find food and avoid danger (function/adaptation). Biologists have three additional explanations: sight is caused by a particular series of evolutionary steps (phylogeny), the mechanics of the eye (mechanism/causation), and even the process of an individual's development (ontogeny). This schema constitutes a basic framework of the overlapping behavioural fields of ethology, behavioural ecology, comparative psychology, sociobiology, evolutionary psychology, and anthropology. Julian Huxley identified the first three questions. Niko Tinbergen gave only the fourth question, as Huxley's questions failed to distinguish between survival value and evolutionary history; Tinbergen's fourth question helped resolve this problem. Evolutionary (ultimate) explanations First question: Function (adaptation) Darwin's theory of evolution by natural selection is the only scientific explanation for why an animal's behaviour is usually well adapted for survival and reproduction in its environment. However, claiming that a particular mechanism is well suited to the present environment is different from claiming that this mechanism was selected for in the past due to its history of being adaptive. The literature conceptualizes the relationship between function and evolution in two ways. 
On the one hand, function Document 1::: Vestigiality is the retention, during the process of evolution, of genetically determined structures or attributes that have lost some or all of the ancestral function in a given species. Assessment of the vestigiality must generally rely on comparison with homologous features in related species. The emergence of vestigiality occurs by normal evolutionary processes, typically by loss of function of a feature that is no longer subject to positive selection pressures when it loses its value in a changing environment. The feature may be selected against more urgently when its function becomes definitively harmful, but if the lack of the feature provides no advantage, and its presence provides no disadvantage, the feature may not be phased out by natural selection and persist across species. Examples of vestigial structures (also called degenerate, atrophied, or rudimentary organs) are the loss of functional wings in island-dwelling birds; the human vomeronasal organ; and the hindlimbs of the snake and whale. Overview Vestigial features may take various forms; for example, they may be patterns of behavior, anatomical structures, or biochemical processes. Like most other physical features, however functional, vestigial features in a given species may successively appear, develop, and persist or disappear at various stages within the life cycle of the organism, ranging from early embryonic development to late adulthood. Vestigiality, biologically speaking, refers to organisms retaining organs that have seemingly lost their original function. Vestigial organs are common evolutionary knowledge. In addition, the term vestigiality is useful in referring to many genetically determined features, either morphological, behavioral, or physiological; in any such context, however, it need not follow that a vestigial feature must be completely useless. 
A classic example at the level of gross anatomy is the human vermiform appendix, vestigial in the sense of retaining no significa Document 2::: Structures built by non-human animals, often called animal architecture, are common in many species. Examples of animal structures include termite mounds, ant hills, wasp and beehives, burrow complexes, beaver dams, elaborate nests of birds, and webs of spiders. Often, these structures incorporate sophisticated features such as temperature regulation, traps, bait, ventilation, special-purpose chambers and many other features. They may be created by individuals or complex societies of social animals with different forms carrying out specialized roles. These constructions may arise from complex building behaviour of animals such as in the case of night-time nests for chimpanzees, from inbuilt neural responses, which feature prominently in the construction of bird songs, or triggered by hormone release as in the case of domestic sows, or as emergent properties from simple instinctive responses and interactions, as exhibited by termites, or combinations of these. The process of building such structures may involve learning and communication, and in some cases, even aesthetics. Tool use may also be involved in building structures by animals. Building behaviour is common in many non-human mammals, birds, insects and arachnids. It is also seen in a few species of fish, reptiles, amphibians, molluscs, urochordates, crustaceans, annelids and some other arthropods. It is virtually absent from all the other animal phyla. Functions Animals create structures primarily for three reasons: to create protected habitats, i.e. homes. to catch prey and for foraging, i.e. traps. for communication between members of the species (intra-specific communication), i.e. display. Animals primarily build habitat for protection from extreme temperatures and from predation. 
Constructed structures raise physical problems which need to be resolved, such as humidity control or ventilation, which increases the complexity of the structure. Over time, through evolution, animals use shelters for ot Document 3::: Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals. Education Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered. Bachelor degree At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. 
Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs. Pre-veterinary emphasis Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th Document 4::: Exaptation and the related term co-option describe a shift in the function of a trait during evolution. For example, a trait can evolve because it served one particular function, but subsequently it may come to serve another. Exaptations are common in both anatomy and behaviour. Bird feathers are a classic example. Initially they may have evolved for temperature regulation, but later were adapted for flight. When feathers were first used to aid in flight, that was an exaptive use. They have since then been shaped by natural selection to improve flight, so in their current state they are best regarded as adaptations for flight. So it is with many structures that initially took on a function as an exaptation: once molded for a new function, they become further adapted for that function. Interest in exaptation relates to both the process and products of evolution: the process that creates complex traits and the products (functions, anatomical structures, biochemicals, etc.) that may be imperfectly developed. The term "exaptation" was proposed by Stephen Jay Gould and Elisabeth Vrba, as a replacement for 'pre-adaptation', which they considered to be a teleologically loaded term. History and definitions The idea that the function of a trait might shift during its evolutionary history originated with Charles Darwin (). For many years the phenomenon was labeled "preadaptation", but since this term suggests teleology in biology, appearing to conflict with natural selection, it has been replaced by the term exaptation. 
The idea had been explored by several scholars when in 1982 Stephen Jay Gould and Elisabeth Vrba introduced the term "exaptation". However, this definition had two categories with different implications for the role of adaptation. (1) A character, previously shaped by natural selection for a particular function (an adaptation), is coopted for a new use—cooptation. (2) A character whose origin cannot be ascribed to the direct action of natural selection ( The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Evolutionary adaptation is evidenced by different shapes of what structures in birds with different food preferences? A. Stomach B. beaks C. necks D. claws Answer: