Dataset column summary:

Column          Type     Summary
id              string   lengths 6–15
question_type   string   1 distinct value ("multiple_choice")
question        string   lengths 15–683
choices         list     lengths 4–4 (always four options)
answer          string   5 distinct values
explanation     string   481 distinct values
prompt          string   lengths 1.75k–10.9k
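To make the column summary concrete, here is a minimal sketch of one row modeled as a typed record, with a validation step for the constraints the summary implies (four-option choice lists, single-letter answer labels). The `Record` class, the `ANSWER_CLASSES` set, and the assumption that the five answer classes are the letters A–E are illustrative, not part of the dataset's own tooling; the sample values are copied from the first row below.

```python
from dataclasses import dataclass
from typing import List

# Assumption: the five answer classes are the letters A-E; only A-D appear in the rows shown.
ANSWER_CLASSES = {"A", "B", "C", "D", "E"}


@dataclass
class Record:
    """One row of the dump, mirroring the column summary above."""
    id: str             # e.g. "sciq-8601"
    question_type: str  # a single class in this dump: "multiple_choice"
    question: str
    choices: List[str]  # always exactly four options
    answer: str         # letter label pointing into `choices`
    explanation: str = ""  # empty in the rows shown here
    prompt: str = ""       # retrieved reference documents plus the restated question

    def validate(self) -> None:
        if len(self.choices) != 4:
            raise ValueError(f"{self.id}: expected 4 choices, got {len(self.choices)}")
        if self.answer not in ANSWER_CLASSES:
            raise ValueError(f"{self.id}: unexpected answer label {self.answer!r}")


if __name__ == "__main__":
    row = Record(
        id="sciq-8601",
        question_type="multiple_choice",
        question="What is the name of the project concerning genetics that is one "
                 "of the landmark scientific events of the last 50 years?",
        choices=["human genome project", "manhattan project",
                 "human organism project", "blue beam project"],
        answer="A",
    )
    row.validate()
    # Map the letter label back to the option text (A -> index 0, and so on).
    print(row.id, "->", row.choices["ABCDE".index(row.answer)])
```

Running the sketch prints `sciq-8601 -> human genome project`, i.e. the answer letter indexes into the choices list in order, which matches the rows shown below.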
sciq-8601
multiple_choice
What is the name of the project concerning genetics that is one of the landmark scientific events of the last 50 years?
[ "human genome project", "manhattan project", "human organism project", "blue beam project" ]
A
Relavent Documents: Document 0::: The Personal Genetics Education Project (pgEd) aims to engage and inform a worldwide audience about the benefits of knowing one's genome as well as the ethical, legal and social issues (ELSI) and dimensions of personal genetics. pgEd was founded in 2006, is housed in the Department of Genetics at Harvard Medical School and is directed by Ting Wu, a professor in that department. It employs a variety of strategies for reaching general audiences, including generating online curricular materials, leading discussions in classrooms, workshops, and conferences, developing a mobile educational game (Map-Ed), holding an annual conference geared toward accelerating awareness (GETed), and working with the world of entertainment to improve accuracy and outreach. Online curricular materials and professional development for teachers pgEd develops tools for teachers and general audiences that examine the potential benefits and risks of personalized genome analysis. These include freely accessible, interactive lesson plans that tackle issues such as genetic testing of minors, reproductive genetics, complex human traits and genetics, and the history of eugenics. pgEd also engages educators at conferences as well as organizes professional development workshops. All of pgEd's materials are freely available online. Map-Ed, a mobile quiz In 2013, pgEd created a mobile educational quiz called Map-Ed. Map-Ed invites players to work their way through five questions that address key concepts in genetics and then pin themselves on a world map. Within weeks of its launch, Map-Ed gained over 1,000 pins around the world, spanning across all 7 continents. Translations and new maps linked to questions on topics broadly related to genetics are in development. GETed conference pgEd hosts the annual GETed conference, a meeting that brings together experts from across the United States and beyond in education, research, health, entertainment, and policy to develop strategies for acceleratin Document 1::: The DNA Learning Center (DNALC) is a genetics learning center affiliated with the Cold Spring Harbor Laboratory, in Cold Spring Harbor, New York. It is the world's first science center devoted entirely to genetics education and offers online education, class field trips, student summer day camps, and teacher training. The DNALC's family of internet sites cover broad topics including basic heredity, genetic disorders, eugenics, the discovery of the structure of DNA, DNA sequencing, cancer, neuroscience, and plant genetics. The center developed a website called DNA Subway for the iPlant Collaborative. See also National Centre for Biotechnology Education, UK Document 2::: The Human Genome Project (HGP) was an international scientific research project with the goal of determining the base pairs that make up human DNA, and of identifying, mapping and sequencing all of the genes of the human genome from both a physical and a functional standpoint. It started in 1990 and was completed in 2003. It remains the world's largest collaborative biological project. Planning for the project started after it was adopted in 1984 by the US government, and it officially launched in 1990. It was declared complete on April 14, 2003, and included about 92% of the genome. Level "complete genome" was achieved in May 2021, with a remaining only 0.3% bases covered by potential issues. The final gapless assembly was finished in January 2022. 
Funding came from the United States government through the National Institutes of Health (NIH) as well as numerous other groups from around the world. A parallel project was conducted outside the government by the Celera Corporation, or Celera Genomics, which was formally launched in 1998. Most of the government-sponsored sequencing was performed in twenty universities and research centres in the United States, the United Kingdom, Japan, France, Germany, and China, working in the International Human Genome Sequencing Consortium (IHGSC). The Human Genome Project originally aimed to map the complete set of nucleotides contained in a human haploid reference genome, of which there are more than three billion. The "genome" of any given individual is unique; mapping the "human genome" involved sequencing samples collected from a small number of individuals and then assembling the sequenced fragments to get a complete sequence for each of 24 human chromosomes (22 autosomes and 2 sex chromosomes). Therefore, the finished human genome is a mosaic, not representing any one individual. Much of the project's utility comes from the fact that the vast majority of the human genome is the same in all humans. History The Human G Document 3::: Genetics (from Ancient Greek , “genite” and that from , “origin”), a discipline of biology, is the science of heredity and variation in living organisms. Articles (arranged alphabetically) related to genetics include: # A B C D E F G H I J K L M N O P Q R S T U V W X Y Z Document 4::: CHLC (or Cooperative Human Linkage Center) was a National Institutes of Health project to map a large number of human genome markers, prior to the completion of the Human Genome Project. The project was stopped in 1999. National Institutes of Health Genetic mapping The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the name of the project concerning genetics that is one of the landmark scientific events of the last 50 years? A. human genome project B. manhattan project C. human organism project D. blue beam project Answer:
sciq-7195
multiple_choice
During photosynthesis what organelle is used by plants to change sunlight into chemical energy?
[ "golgi apparatus", "ribosome", "chloroplasts", "mitochondria" ]
C
Relavent Documents: Document 0::: The light-harvesting complex (or antenna complex; LH or LHC) is an array of protein and chlorophyll molecules embedded in the thylakoid membrane of plants and cyanobacteria, which transfer light energy to one chlorophyll a molecule at the reaction center of a photosystem. The antenna pigments are predominantly chlorophyll b, xanthophylls, and carotenes. Chlorophyll a is known as the core pigment. Their absorption spectra are non-overlapping and broaden the range of light that can be absorbed in photosynthesis. The carotenoids have another role as an antioxidant to prevent photo-oxidative damage of chlorophyll molecules. Each antenna complex has between 250 and 400 pigment molecules and the energy they absorb is shuttled by resonance energy transfer to a specialized chlorophyll-protein complex known as the reaction center of each photosystem. The reaction center initiates a complex series of chemical reactions that capture energy in the form of chemical bonds. For photosystem II, when either of the two chlorophyll a molecules at the reaction center absorb energy, an electron is excited and transferred to an electron acceptor molecule, pheophytin, leaving the chlorophyll a in an oxidized state. The oxidised chlorophyll a replaces the electrons by photolysis that involves the oxidation of water molecules to oxygen, protons and electrons. The N-terminus of the chlorophyll a-b binding protein extends into the stroma where it is involved with adhesion of granal membranes and photo-regulated by reversible phosphorylation of its threonine residues. Both these processes are believed to mediate the distribution of excitation energy between photosystems I and II. This family also includes the photosystem II protein PsbS, which plays a role in energy-dependent quenching that increases thermal dissipation of excess absorbed light energy in the photosystem. LH 1 Light-harvesting complex I is permanently bound to photosystem I via the plant-specific subunit PsaG. It is made u Document 1::: Photosynthesis Oxygenic photosynthesis uses two multi-subunit photosystems (I and II) located in the cell membranes of cyanobacteria and in the thylakoid membranes of chloroplasts in plants and algae. Photosystem II (PSII) has a P680 reaction centre containing chlorophyll 'a' that uses light energy to carr Document 2::: Autumn leaf color is a phenomenon that affects the normally green leaves of many deciduous trees and shrubs by which they take on, during a few weeks in the autumn season, various shades of yellow, orange, red, purple, and brown. The phenomenon is commonly called autumn colours or autumn foliage in British English and fall colors, fall foliage, or simply foliage in American English. In some areas of Canada and the United States, "leaf peeping" tourism is a major contribution to economic activity. This tourist activity occurs between the beginning of color changes and the onset of leaf fall, usually around September and October in the Northern Hemisphere and April to May in the Southern Hemisphere. Chlorophyll and the green/yellow/orange colors A green leaf is green because of the presence of a pigment known as chlorophyll, which is inside an organelle called a chloroplast. When abundant in the leaf's cells, as during the growing season, the chlorophyll's green color dominates and masks out the colors of any other pigments that may be present in the leaf. Thus, the leaves of summer are characteristically green. 
Chlorophyll has a vital function: it captures solar rays and uses the resulting energy in the manufacture of the plant's food simple sugars which are produced from water and carbon dioxide. These sugars are the basis of the plant's nourishment the sole source of the carbohydrates needed for growth and development. In their food-manufacturing process, the chlorophylls break down, thus are continually "used up". During the growing season, however, the plant replenishes the chlorophyll so that the supply remains high and the leaves stay green. In late summer, with daylight hours shortening and temperatures cooling, the veins that carry fluids into and out of the leaf are gradually closed off as a layer of special cork cells forms at the base of each leaf. As this cork layer develops, water and mineral intake into the leaf is reduced, slowly at first, and the Document 3::: Tannosomes are organelles found in plant cells of vascular plants. Formation and functions Tannosomes are formed when the chloroplast membrane forms pockets filled with tannin. Slowly, the pockets break off as tiny vacuoles that carry tannins to the large vacuole filled with acidic fluid. Tannins are then released into the vacuole and stored inside as tannin accretions. They are responsible for synthesizing and producing condensed tannins and polyphenols. Tannosomes condense tannins in chlorophyllous organs, providing defenses against herbivores and pathogens, and protection against UV radiation. Discovery Tannosomes were discovered in September 2013 by French and Hungarian researchers. The discovery of tannosomes showed how to get enough tannins to change the flavour of wine, tea, chocolate, and other food or snacks. See also Chloroplast Leucoplast Plastid Document 4::: Photosynthesis systems are electronic scientific instruments designed for non-destructive measurement of photosynthetic rates in the field. Photosynthesis systems are commonly used in agronomic and environmental research, as well as studies of the global carbon cycle. How photosynthesis systems function Photosynthesis systems function by measuring gas exchange of leaves. Atmospheric carbon dioxide is taken up by leaves in the process of photosynthesis, where is used to generate sugars in a molecular pathway known as the Calvin cycle. This draw-down of induces more atmospheric to diffuse through stomata into the air spaces of the leaf. While stoma are open, water vapor can easily diffuse out of plant tissues, a process known as transpiration. It is this exchange of and water vapor that is measured as a proxy of photosynthetic rate. The basic components of a photosynthetic system are the leaf chamber, infrared gas analyzer (IRGA), batteries and a console with keyboard, display and memory. Modern 'open system' photosynthesis systems also incorporate miniature disposable compressed gas cylinder and gas supply pipes. This is because external air has natural fluctuations in and water vapor content, which can introduce measurement noise. Modern 'open system' photosynthesis systems remove the and water vapour by passage over soda lime and Drierite, then add at a controlled rate to give a stable concentration. Some systems are also equipped with temperature control and a removable light unit, so the effect of these environmental variables can also be measured. The leaf to be analysed is placed in the leaf chamber. The concentrations is measured by the infrared gas analyzer. The IRGA shines infrared light through a gas sample onto a detector. 
in the sample absorbs energy, so the reduction in the level of energy that reaches the detector indicates the concentration. Modern IRGAs take account of the fact that absorbs energy at similar wavelengths as . Modern IRG The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. During photosynthesis what organelle is used by plants to change sunlight into chemical energy? A. golgi apparatus B. ribosome C. chloroplasts D. mitochondria Answer:
sciq-142
multiple_choice
What muscles are used to pump water over the gills?
[ "pharynx and tonsils", "lungs and pharynx", "jaws and pharynx", "muscles and pharynx" ]
C
Relavent Documents: Document 0::: Fish gills are organs that allow fish to breathe underwater. Most fish exchange gases like oxygen and carbon dioxide using gills that are protected under gill covers (operculum) on both sides of the pharynx (throat). Gills are tissues that are like short threads, protein structures called filaments. These filaments have many functions including the transfer of ions and water, as well as the exchange of oxygen, carbon dioxide, acids and ammonia. Each filament contains a capillary network that provides a large surface area for exchanging oxygen and carbon dioxide. Fish exchange gases by pulling oxygen-rich water through their mouths and pumping it over their gills. Within the gill filaments, capillary blood flows in the opposite direction to the water, causing counter-current exchange. The gills push the oxygen-poor water out through openings in the sides of the pharynx. Some fish, like sharks and lampreys, possess multiple gill openings. However, bony fish have a single gill opening on each side. This opening is hidden beneath a protective bony cover called the operculum. Juvenile bichirs have external gills, a very primitive feature that they share with larval amphibians. Previously, the evolution of gills was thought to have occurred through two diverging lines: gills formed from the endoderm, as seen in jawless fish species, or those form by the ectoderm, as seen in jawed fish. However, recent studies on gill formation of the little skate (Leucoraja erinacea) has shown potential evidence supporting the claim that gills from all current fish species have in fact evolved from a common ancestor. Breathing with gills Air breathing fish can be divided into obligate air breathers and facultative air breathers. Obligate air breathers, such as the African lungfish, are obligated to breathe air periodically or they suffocate. Facultative air breathers, such as the catfish Hypostomus plecostomus, only breathe air if they need to and can otherwise rely on their gills f Document 1::: Fish anatomy is the study of the form or morphology of fish. It can be contrasted with fish physiology, which is the study of how the component parts of fish function together in the living fish. In practice, fish anatomy and fish physiology complement each other, the former dealing with the structure of a fish, its organs or component parts and how they are put together, such as might be observed on the dissecting table or under the microscope, and the latter dealing with how those components function together in living fish. The anatomy of fish is often shaped by the physical characteristics of water, the medium in which fish live. Water is much denser than air, holds a relatively small amount of dissolved oxygen, and absorbs more light than air does. The body of a fish is divided into a head, trunk and tail, although the divisions between the three are not always externally visible. The skeleton, which forms the support structure inside the fish, is either made of cartilage (cartilaginous fish) or bone (bony fish). The main skeletal element is the vertebral column, composed of articulating vertebrae which are lightweight yet strong. The ribs attach to the spine and there are no limbs or limb girdles. The main external features of the fish, the fins, are composed of either bony or soft spines called rays which, with the exception of the caudal fins, have no direct connection with the spine. They are supported by the muscles which compose the main part of the trunk. 
The heart has two chambers and pumps the blood through the respiratory surfaces of the gills and then around the body in a single circulatory loop. The eyes are adapted for seeing underwater and have only local vision. There is an inner ear but no external or middle ear. Low-frequency vibrations are detected by the lateral line system of sense organs that run along the length of the sides of fish, which responds to nearby movements and to changes in water pressure. Sharks and rays are basal fish with Document 2::: The swim bladder, gas bladder, fish maw, or air bladder is an internal gas-filled organ that contributes to the ability of many bony fish (but not cartilaginous fish) to control their buoyancy, and thus to stay at their current water depth without having to expend energy in swimming. Also, the dorsal position of the swim bladder means the center of mass is below the center of volume, allowing it to act as a stabilizing agent. Additionally, the swim bladder functions as a resonating chamber, to produce or receive sound. The swim bladder is evolutionarily homologous to the lungs of tetrapods and lungfish. Charles Darwin remarked upon this in On the Origin of Species. Darwin reasoned that the lung in air-breathing vertebrates had derived from a more primitive swim bladder as a specialized form of enteral respiration. In the embryonic stages, some species, such as redlip blenny, have lost the swim bladder again, mostly bottom dwellers like the weather fish. Other fish—like the opah and the pomfret—use their pectoral fins to swim and balance the weight of the head to keep a horizontal position. The normally bottom dwelling sea robin can use their pectoral fins to produce lift while swimming. The gas/tissue interface at the swim bladder produces a strong reflection of sound, which is used in sonar equipment to find fish. Cartilaginous fish, such as sharks and rays, do not have swim bladders. Some of them can control their depth only by swimming (using dynamic lift); others store fats or oils with density less than that of seawater to produce a neutral or near neutral buoyancy, which does not change with depth. Structure and function The swim bladder normally consists of two gas-filled sacs located in the dorsal portion of the fish, although in a few primitive species, there is only a single sac. It has flexible walls that contract or expand according to the ambient pressure. The walls of the bladder contain very few blood vessels and are lined with guanine crystals, Document 3::: Aquatic respiration is the process whereby an aquatic organism exchanges respiratory gases with water, obtaining oxygen from oxygen dissolved in water and excreting carbon dioxide and some other metabolic waste products into the water. Unicellular and simple small organisms In very small animals, plants and bacteria, simple diffusion of gaseous metabolites is sufficient for respiratory function and no special adaptations are found to aid respiration. Passive diffusion or active transport are also sufficient mechanisms for many larger aquatic animals such as many worms, jellyfish, sponges, bryozoans and similar organisms. In such cases, no specific respiratory organs or organelles are found. Higher plants Although higher plants typically use carbon dioxide and excrete oxygen during photosynthesis, they also respire and, particularly during darkness, many plants excrete carbon dioxide and require oxygen to maintain normal functions. 
In fully submerged aquatic higher plants specialised structures such as stoma on leaf surfaces to control gas interchange. In many species, these structures can be controlled to be open or closed depending on environmental conditions. In conditions of high light intensity and relatively high carbonate ion concentrations, oxygen may be produced in sufficient quantities to form gaseous bubbles on the surface of leaves and may produce oxygen super-saturation in the surrounding water body. Animals All animals that practice truly aquatic respiration are poikilothermic. All aquatic homeothermic animals and birds including cetaceans and penguins are air breathing despite a fully aquatic life-style. Echinoderms Echinoderms have a specialised water vascular system which provides a number of functions including providing the hydraulic power for tube feet but also serves to convey oxygenated sea water into the body and carry waste water out again. In many genera, the water enters through a madreporite, a sieve like structure on the upper surfac Document 4::: The mediastinal branches are numerous small vessels which supply the lymph glands and loose areolar tissue in the posterior mediastinum. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What muscles are used to pump water over the gills? A. pharynx and tonsils B. lungs and pharynx C. jaws and pharynx D. muscles and pharynx Answer:
sciq-10483
multiple_choice
Which layer is found below the lithosphere?
[ "asthenosphere", "stratosphere", "magnetosphere", "troposphere" ]
A
Relavent Documents: Document 0::: A lithosphere () is the rigid, outermost rocky shell of a terrestrial planet or natural satellite. On Earth, it is composed of the crust and the lithospheric mantle, the topmost portion of the upper mantle that behaves elastically on time scales of up to thousands of years or more. The crust and upper mantle are distinguished on the basis of chemistry and mineralogy. Earth's lithosphere Earth's lithosphere, which constitutes the hard and rigid outer vertical layer of the Earth, includes the crust and the lithospheric mantle (or mantle lithosphere), the uppermost part of the mantle that is not convecting. The lithosphere is underlain by the asthenosphere which is the weaker, hotter, and deeper part of the upper mantle that is able to convect. The lithosphere–asthenosphere boundary is defined by a difference in response to stress. The lithosphere remains rigid for very long periods of geologic time in which it deforms elastically and through brittle failure, while the asthenosphere deforms viscously and accommodates strain through plastic deformation. The thickness of the lithosphere is thus considered to be the depth to the isotherm associated with the transition between brittle and viscous behavior. The temperature at which olivine becomes ductile (~) is often used to set this isotherm because olivine is generally the weakest mineral in the upper mantle. The lithosphere is subdivided horizontally into tectonic plates, which often include terranes accreted from other plates. History of the concept The concept of the lithosphere as Earth's strong outer layer was described by the English mathematician A. E. H. Love in his 1911 monograph "Some problems of Geodynamics" and further developed by the American geologist Joseph Barrell, who wrote a series of papers about the concept and introduced the term "lithosphere". The concept was based on the presence of significant gravity anomalies over continental crust, from which he inferred that there must exist a strong, s Document 1::: BedMachine Antarctica is a project to map the sub-surface landmass below the ice of Antarctica using data from radar depth sounding and ice shelf bathymetry methods and computer analysis of that data based on the conservation of mass. The project is uses data from 19 research institutes. It is led by the University of California, Irvine. It has revealed that the Antarctic bedrock is the deepest natural location on land (or at least not under liquid water) worldwide, with the bedrock being below sea level. Document 2::: The core–mantle boundary (CMB) of Earth lies between the planet's silicate mantle and its liquid iron–nickel outer core, at a depth of below Earth's surface. The boundary is observed via the discontinuity in seismic wave velocities at that depth due to the differences between the acoustic impedances of the solid mantle and the molten outer core. P-wave velocities are much slower in the outer core than in the deep mantle while S-waves do not exist at all in the liquid portion of the core. Recent evidence suggests a distinct boundary layer directly above the CMB possibly made of a novel phase of the basic perovskite mineralogy of the deep mantle named post-perovskite. Seismic tomography studies have shown significant irregularities within the boundary zone and appear to be dominated by the African and Pacific Large Low-Shear-Velocity Provinces (LLSVP). 
The uppermost section of the outer core is thought to be about 500–1,800 K hotter than the overlying mantle, creating a thermal boundary layer. The boundary is thought to harbor topography, much like Earth's surface, that is supported by solid-state convection within the overlying mantle. Variations in the thermal properties of the core-mantle boundary may affect how the outer core's iron-rich fluids flow, which are ultimately responsible for Earth's magnetic field. The D″ region The approx. 200 km thick layer of the lower mantle directly above the boundary is referred to as the D″ region ("D double-prime" or "D prime prime") and is sometimes included in discussions regarding the core–mantle boundary zone. The D″ name originates from geophysicist Keith Bullen's designations for the Earth's layers. His system was to label each layer alphabetically, A through G, with the crust as 'A' and the inner core as 'G'. In his 1942 publication of his model, the entire lower mantle was the D layer. In 1949, Bullen found his 'D' layer to actually be two different layers. The upper part of the D layer, about 1800 km thick, was r Document 3::: Atmospheric temperature is a measure of temperature at different levels of the Earth's atmosphere. It is governed by many factors, including incoming solar radiation, humidity and altitude. When discussing surface air temperature, the annual atmospheric temperature range at any geographical location depends largely upon the type of biome, as measured by the Köppen climate classification Temperature versus altitude Temperature varies greatly at different heights relative to Earth's surface and this variation in temperature characterizes the four layers that exist in the atmosphere. These layers include the troposphere, stratosphere, mesosphere, and thermosphere. The troposphere is the lowest of the four layers, extending from the surface of the Earth to about into the atmosphere where the tropopause (the boundary between the troposphere stratosphere) is located. The width of the troposphere can vary depending on latitude, for example, the troposphere is thicker in the tropics (about ) because the tropics are generally warmer, and thinner at the poles (about ) because the poles are colder. Temperatures in the atmosphere decrease with height at an average rate of 6.5°C (11.7°F) per kilometer. Because the troposphere experiences its warmest temperatures closer to Earth's surface, there is great vertical movement of heat and water vapour, causing turbulence. This turbulence, in conjunction with the presence of water vapour, is the reason that weather occurs within the troposphere. Following the tropopause is the stratosphere. This layer extends from the tropopause to the stratopause which is located at an altitude of about . Temperatures remain constant with height from the tropopause to an altitude of , after which they start to increase with height. This happening is referred to as an inversion and It is because of this inversion that the stratosphere is not characterised as turbulent. The stratosphere receives its warmth from the sun and the ozone layer which ab Document 4::: Martian geysers (or jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Martian geysers are distinct from geysers on Earth, which are typically associated with hydrothermal activity. 
These are unlike any terrestrial geological phenomenon. The reflectance (albedo), shapes and unusual spider appearance of these features have stimulated a variety of hypotheses about their origin, ranging from differences in frosting reflectance, to explanations involving biological processes. However, all current geophysical models assume some sort of jet or geyser-like activity on Mars. Their characteristics, and the process of their formation, are still a matter of debate. These features are unique to the south polar region of Mars in an area informally called the 'cryptic region', at latitudes 60° to 80° south and longitudes 150°W to 310°W; this 1 meter deep carbon dioxide (CO2) ice transition area—between the scarps of the thick polar ice layer and the permafrost—is where clusters of the apparent geyser systems are located. The seasonal frosting and defrosting of carbon dioxide ice results in the appearance of a number of features, such dark dune spots with spider-like rilles or channels below the ice, where spider-like radial channels are carved between the ground and the carbon dioxide ice, giving it an appearance of spider webs, then, pressure accumulating in their interior ejects gas and dark basaltic sand or dust, which is deposited on the ice surface and thus, forming dark dune spots. This process is rapid, observed happening in the space of a few days, weeks or months, a growth rate rather unusual in geology – especially for Mars. However, it would seem that multiple years would be required to carve the larger spider-like channels. There is no direct data on these features othe The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which layer is found below the lithosphere? A. asthenosphere B. stratosphere C. magnetosphere D. troposphere Answer:
sciq-8553
multiple_choice
Genetic variation helps ensure that some organisms will survive if what happens?
[ "there's an earthquake", "they die", "they get eaten", "their environment changes" ]
D
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, the genetic variations affect the phenotypes (physical characteristics) of an organism. These changes in the phenotypes will be an advantage to some organisms, which will then be passed onto their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology. The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis. Subfields Evolution is the central unifying concept in biology. 
Biology can be divided into various ways. One way is by the level of biological organization, from molecular to cell, organism to population. Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what was once seen as the major divisions of life. A third way is by approaches, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolution Document 2::: Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices". This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions. Topic outline The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area: The course is based on and tests six skills, called scientific practices which include: In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions. Exam Students are allowed to use a four-function, scientific, or graphing calculator. The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score. Score distribution Commonly used textbooks Biology, AP Edition by Sylvia Mader (2012, hardcover ) Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, ) Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson ) See also Glossary of biology A.P Bio (TV Show) Document 3::: In biology, evolution is the process of change in all forms of life over generations, and evolutionary biology is the study of how evolution occurs. Biological populations evolve through genetic changes that correspond to changes in the organisms' observable traits. Genetic changes include mutations, which are caused by damage or replication errors in organisms' DNA. As the genetic variation of a population drifts randomly over generations, natural selection gradually leads traits to become more or less common based on the relative reproductive success of organisms with those traits. The age of the Earth is about 4.5 billion years. The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago. Evolution does not attempt to explain the origin of life (covered instead by abiogenesis), but it does explain how early lifeforms evolved into the complex ecosystem that we see today. Based on the similarities between all present-day organisms, all life on Earth is assumed to have originated through common descent from a last universal ancestor from which all known species have diverged through the process of evolution. 
All individuals have hereditary material in the form of genes received from their parents, which they pass on to any offspring. Among offspring there are variations of genes due to the introduction of new genes via random changes called mutations or via reshuffling of existing genes during sexual reproduction. The offspring differs from the parent in minor random ways. If those differences are helpful, the offspring is more likely to survive and reproduce. This means that more offspring in the next generation will have that helpful difference and individuals will not have equal chances of reproductive success. In this way, traits that result in organisms being better adapted to their living conditions become more common in descendant populations. These differences accumulate resulting in changes within the population. This proce Document 4::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Genetic variation helps ensure that some organisms will survive if what happens? A. there's an earthquake B. they die C. they get eaten D. their environment changes Answer:
sciq-848
multiple_choice
Types of radiation that cause cancer include ultraviolet (uv) radiation and what?
[ "thermal", "molecular", "radon", "vibrational" ]
C
Relavent Documents: Document 0::: Radiosensitivity is the relative susceptibility of cells, tissues, organs or organisms to the harmful effect of ionizing radiation. Cells types affected Cells are least sensitive when in the S phase, then the G1 phase, then the G2 phase, and most sensitive in the M phase of the cell cycle. This is described by the 'law of Bergonié and Tribondeau', formulated in 1906: X-rays are more effective on cells which have a greater reproductive activity. From their observations, they concluded that quickly dividing tumor cells are generally more sensitive than the majority of body cells. This is not always true. Tumor cells can be hypoxic and therefore less sensitive to X-rays because most of their effects are mediated by the free radicals produced by ionizing oxygen. It has meanwhile been shown that the most sensitive cells are those that are undifferentiated, well nourished, dividing quickly and highly active metabolically. Amongst the body cells, the most sensitive are spermatogonia and erythroblasts, epidermal stem cells, gastrointestinal stem cells. The least sensitive are nerve cells and muscle fibers. Very sensitive cells are also oocytes and lymphocytes, although they are resting cells and do not meet the criteria described above. The reasons for their sensitivity are not clear. There also appears to be a genetic basis for the varied vulnerability of cells to ionizing radiation. This has been demonstrated across several cancer types and in normal tissues. Cell damage classification The damage to the cell can be lethal (the cell dies) or sublethal (the cell can repair itself). Cell damage can ultimately lead to health effects which can be classified as either Tissue Reactions or Stochastic Effects according to the International Commission on Radiological Protection. Tissue reactions Tissue reactions have a threshold of irradiation under which they do not appear and above which they typically appear. Fractionation of dose, dose rate, the application of antioxidan Document 1::: Non-ionizing (or non-ionising) radiation refers to any type of electromagnetic radiation that does not carry enough energy per quantum (photon energy) to ionize atoms or molecules—that is, to completely remove an electron from an atom or molecule. Instead of producing charged ions when passing through matter, non-ionizing electromagnetic radiation has sufficient energy only for excitation (the movement of an electron to a higher energy state). Non-ionizing radiation is not a significant health risk. In contrast, ionizing radiation has a higher frequency and shorter wavelength than non-ionizing radiation, and can be a serious health hazard: exposure to it can cause burns, radiation sickness, many kinds of cancer, and genetic damage. Using ionizing radiation requires elaborate radiological protection measures, which in general are not required with non-ionizing radiation. The region at which radiation is considered "ionizing" is not well defined, since different molecules and atoms ionize at different energies. The usual definitions have suggested that radiation with particle or photon energies less than 10 electronvolts (eV) be considered non-ionizing. Another suggested threshold is 33 electronvolts, which is the energy needed to ionize water molecules. The light from the Sun that reaches the earth is largely composed of non-ionizing radiation, since the ionizing far-ultraviolet rays have been filtered out by the gases in the atmosphere, particularly oxygen. 
The remaining ultraviolet radiation from the Sun causes molecular damage (for example, sunburn) by photochemical and free-radical-producing means. Mechanisms of interaction with matter, including living tissue Near ultraviolet, visible light, infrared, microwave, radio waves, and low-frequency radio frequency (longwave) are all examples of non-ionizing radiation. By contrast, far ultraviolet light, X-rays, gamma-rays, and all particle radiation from radioactive decay are ionizing. Visible and near ultraviolet e Document 2::: Radiation sensitivity is the susceptibility of a material to physical or chemical changes induced by radiation. Examples of radiation sensitive materials are silver chloride, photoresists and biomaterials. Pine trees are more radiation susceptible than birch due to the complexity of the pine DNA in comparison to the birch. Examples of radiation insensitive materials are metals and ionic crystals such as quartz and sapphire. The radiation effect depends on the type of the irradiating particles, their energy, and the number of incident particles per unit volume. Radiation effects can be transient or permanent. The persistence of the radiation effect depends on the stability of the induced physical and chemical change. Physical radiation effects depending on diffusion properties can be thermally annealed whereby the original structure of the material is recovered. Chemical radiation effects usually cannot be recovered. Document 3::: absorbed dose Electromagnetic radiation equivalent dose hormesis Ionizing radiation Louis Harold Gray (British physicist) rad (unit) radar radar astronomy radar cross section radar detector radar gun radar jamming (radar reflector) corner reflector radar warning receiver (Radarange) microwave oven radiance (radiant: see) meteor shower radiation Radiation absorption Radiation acne Radiation angle radiant barrier (radiation belt: see) Van Allen radiation belt Radiation belt electron Radiation belt model Radiation Belt Storm Probes radiation budget Radiation burn Radiation cancer (radiation contamination) radioactive contamination Radiation contingency Radiation damage Radiation damping Radiation-dominated era Radiation dose reconstruction Radiation dosimeter Radiation effect radiant energy Radiation enteropathy (radiation exposure) radioactive contamination Radiation flux (radiation gauge: see) gauge fixing radiation hardening (radiant heat) thermal radiation radiant heating radiant intensity radiation hormesis radiation impedance radiation implosion Radiation-induced lung injury Radiation Laboratory radiation length radiation mode radiation oncologist radiation pattern radiation poisoning (radiation sickness) radiation pressure radiation protection (radiation shield) (radiation shielding) radiation resistance Radiation Safety Officer radiation scattering radiation therapist radiation therapy (radiotherapy) (radiation treatment) radiation therapy (radiation units: see) :Category:Units of radiation dose (radiation weight factor: see) equivalent dose radiation zone radiative cooling radiative forcing radiator radio (radio amateur: see) amateur radio (radio antenna) antenna (radio) radio astronomy radio beacon (radio broadcasting: see) broadcasting radio clock (radio communications) radio radio control radio controlled airplane radio controlled car radio-controlled helicopter radio control Document 4::: Radiation damage is the effect of ionizing radiation on physical objects including non-living structural materials. 
It can be either detrimental or beneficial for materials. Radiobiology is the study of the action of ionizing radiation on living things, including the health effects of radiation in humans. High doses of ionizing radiation can cause damage to living tissue such as radiation burning and harmful mutations such as causing cells to become cancerous, and can lead to health problems such as radiation poisoning. Causes This radiation may take several forms: Cosmic rays and subsequent energetic particles caused by their collision with the atmosphere and other materials. Radioactive daughter products (radioisotopes) caused by the collision of cosmic rays with the atmosphere and other materials, including living tissues. Energetic particle beams from a particle accelerator. Energetic particles or electro-magnetic radiation (X-rays) released from collisions of such particles with a target, as in an X ray machine or incidentally in the use of a particle accelerator. Particles or various types of rays released by radioactive decay of elements, which may be naturally occurring, created by accelerator collisions, or created in a nuclear reactor. They may be manufactured for therapeutic or industrial use or be released accidentally by nuclear accident, or released intentionally by a dirty bomb, or released into the atmosphere, ground, or ocean incidental to the explosion of a nuclear weapon for warfare or nuclear testing. Effects on materials and devices Radiation may affect materials and devices in deleterious and beneficial ways: By causing the materials to become radioactive (mainly by neutron activation, or in presence of high-energy gamma radiation by photodisintegration). By nuclear transmutation of the elements within the material including, for example, the production of Hydrogen and Helium which can in turn alter the mechanical properties of the materials The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Types of radiation that cause cancer include ultraviolet (uv) radiation and what? A. thermal B. molecular C. radon D. vibrational Answer:
sciq-2293
multiple_choice
What do communication satellites carry and use to provide energy during their missions?
[ "infrared panels", "batteries", "geothermal panels", "solar panels" ]
D
Relavent Documents: Document 0::: Space-based solar power (SBSP, SSP) is the concept of collecting solar power in outer space with solar power satellites (SPS) and distributing it to Earth. Its advantages include a higher collection of energy due to the lack of reflection and absorption by the atmosphere, the possibility of very little night, and a better ability to orient to face the Sun. Space-based solar power systems convert sunlight to some other form of energy (such as microwaves) which can be transmitted through the atmosphere to receivers on the Earth's surface. Various SBSP proposals have been researched since the early 1970s, but none is economically viable with present-day space launch costs. Some technologists speculate that this may change in the distant future with space manufacturing from asteroids or lunar material, or with radical new space launch technologies other than rocketry. Besides cost, SBSP also introduces several technological hurdles, including the problem of transmitting energy from orbit. Since wires extending from Earth's surface to an orbiting satellite are not feasible with current technology, SBSP designs generally include the wireless power transmission with its concomitant conversion inefficiencies, as well as land use concerns for antenna stations to receive the energy at Earth's surface. The collecting satellite would convert solar energy into electrical energy, power a microwave transmitter or laser emitter, and transmit this energy to a collector (or microwave rectenna) on Earth's surface. Contrary to appearances in fiction, most designs propose beam energy densities that are not harmful if human beings were to be inadvertently exposed, such as if a transmitting satellite's beam were to wander off-course. But the necessarily vast size of the receiving antennas would still require large blocks of land near the end users. The service life of space-based collectors in the face of long-term exposure to the space environment, including degradation from radiation Document 1::: The space segment of an artificial satellite system is one of its three operational components (the others being the user and ground segments). It comprises the satellite or satellite constellation and the uplink and downlink satellite links. The overall design of the payload, satellite, ground segment, and end-to-end system is a complex task. Satellite communications payload design must be properly coupled with the capabilities and interaction with the spacecraft bus that provides power, stability and environmental support to the payload. Telecommunications satellites Geostationary Earth orbit (GEO) supports businesses in satellite television and radio broadcasting, as well as data and mobile communications. The medium Earth orbit (MEO) and low Earth orbit (LEO) configurations can also be used for various applications. A communications satellite is composed of a communications payload (repeater and antenna) and supporting spacecraft bus (including solar arrays and batteries, attitude and orbit control systems, structure and thermal control system), and is placed in orbit by a launch vehicle. A successful satellite operator needs the right orbital slot or constellation, and satellites that deliver effective power and bandwidth to desirable regions and markets (i.e., those with growing demand for satellite services). , satellite radio serves nearly 32 million subscribers and satellite mobile telephone and data operators offer connectivity throughout the globe. 
Broadband mobile terminals now provide improved access to the Internet for a range of applications, including videoconferencing. See also Comparison of communication satellite operators Document 2::: Electrodynamic tethers (EDTs) are long conducting wires, such as one deployed from a tether satellite, which can operate on electromagnetic principles as generators, by converting their kinetic energy to electrical energy, or as motors, converting electrical energy to kinetic energy. Electric potential is generated across a conductive tether by its motion through a planet's magnetic field. A number of missions have demonstrated electrodynamic tethers in space, most notably the TSS-1, TSS-1R, and Plasma Motor Generator (PMG) experiments. Tether propulsion As part of a tether propulsion system, craft can use long, strong conductors (though not all tethers are conductive) to change the orbits of spacecraft. It has the potential to make space travel significantly cheaper. When direct current is applied to the tether, it exerts a Lorentz force against the magnetic field, and the tether exerts a force on the vehicle. It can be used either to accelerate or brake an orbiting spacecraft. In 2012 Star Technology and Research was awarded a $1.9 million contract to qualify a tether propulsion system for orbital debris removal. Uses for ED tethers Over the years, numerous applications for electrodynamic tethers have been identified for potential use in industry, government, and scientific exploration. The table below is a summary of some of the potential applications proposed thus far. Some of these applications are general concepts, while others are well-defined systems. Many of these concepts overlap into other areas; however, they are simply placed under the most appropriate heading for the purposes of this table. All of the applications mentioned in the table are elaborated upon in the Tethers Handbook. Three fundamental concepts that tethers possess, are gravity gradients, momentum exchange, and electrodynamics. Potential tether applications can be seen below: ISS reboost EDT has been proposed to maintain the ISS orbit and save the expense of chemical propellant re Document 3::: A satellite modem or satmodem is a modem used to establish data transfers using a communications satellite as a relay. A satellite modem's main function is to transform an input bitstream to a radio signal and vice versa. There are some devices that include only a demodulator (and no modulator, thus only allowing data to be downloaded by satellite) that are also referred to as "satellite modems." These devices are used in satellite Internet access (in this case uploaded data is transferred through a conventional PSTN modem or an ADSL modem). Satellite link A satellite modem is not the only device needed to establish a communication channel. Other equipment that is essential for creating a satellite link include satellite antennas and frequency converters. Data to be transmitted are transferred to a modem from data terminal equipment (e.g. a computer). The modem usually has intermediate frequency (IF) output (that is, 50-200 MHz), however, sometimes the signal is modulated directly to L band. In most cases, frequency has to be converted using an upconverter before amplification and transmission. A modulated signal is a sequence of symbols, pieces of data represented by a corresponding signal state, e.g. a bit or a few bits, depending upon the modulation scheme being used. 
Recovering a symbol clock (making a local symbol clock generator synchronous with the remote one) is one of the most important tasks of a demodulator. Similarly, a signal received from a satellite is firstly downconverted (this is done by a Low-noise block converter - LNB), then demodulated by a modem, and at last handled by data terminal equipment. The LNB is usually powered by the modem through the signal cable with 13 or 18 V DC. Features The main functions of a satellite modem are modulation and demodulation. Satellite communication standards also define error correction codes and framing formats. Popular modulation types being used for satellite communications: Binary phase-shift k Document 4::: In astronomy, a solar transit is a movement of any object passing between the Sun and the Earth. This includes the planets Mercury and Venus (see Transit of Mercury and Transit of Venus). A solar eclipse is also a solar transit of the Moon, but technically only if it does not cover the entire disc of the Sun (an annular eclipse), as "transit" counts only objects that are smaller than what they are passing in front of. Solar transit is only one of several types of astronomical transit A solar transit (also called a solar outage, sometimes solar fade, sun outage, or sun fade) also occurs to communications satellites, which pass in front of the Sun for several minutes each day for several days straight for a period in the months around the equinoxes, the exact dates depending on where the satellite is in the sky relative to its earth station. Because the Sun also produces a great deal of microwave radiation in addition to sunlight, it overwhelms the microwave radio signals coming from the satellite's transponders. This enormous electromagnetic interference causes interruptions in fixed satellite services that use satellite dishes, including TV networks and radio networks, as well as VSAT and DBS. Only downlinks from the satellite are affected, uplinks from the Earth are normally not, as the planet "shades" the Earth station when viewed from the satellite. Satellites in geosynchronous orbit are irregularly affected based on their inclination. Reception from satellites in other orbits are frequently but only momentarily affected by this, and by their nature the same signal is usually repeated or relayed on another satellite, if a tracking dish is used at all. Satellite radio and other services like GPS are not affected, as they use no receiving dish, and therefore do not concentrate the interference. (GPS and certain satellite radio systems use non-geosynchronous satellites.) Solar transit begins with only a brief degradation in signal quality for a few moments. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do communication satellites carry and use to provide energy during their missions? A. infrared panels B. batteries C. geothermal panels D. solar panels Answer:
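The excerpts above all come back to photovoltaic power, which is why "solar panels" is the expected answer. As a rough illustration of the power a communications satellite's array can deliver in orbit, the sketch below multiplies the solar constant by an array area and cell efficiency; the area (40 m^2) and efficiency (30%) are illustrative assumptions, not figures for any particular spacecraft.

    # Illustrative estimate of solar-array output in Earth orbit (assumed sizing).
    SOLAR_CONSTANT_W_PER_M2 = 1361.0   # mean solar irradiance above the atmosphere
    array_area_m2 = 40.0               # assumed total array area
    cell_efficiency = 0.30             # assumed efficiency of multi-junction cells

    array_power_w = SOLAR_CONSTANT_W_PER_M2 * array_area_m2 * cell_efficiency
    print(f"Approximate array output: {array_power_w / 1000:.1f} kW")  # ~16.3 kW

Batteries on board store this energy for eclipse passes, but the panels are what generate it.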
sciq-9916
multiple_choice
Where are about 75% of the tar sands in the world located?
[ "venezuela and onatrio,canada", "china and onatrio, canada", "venezuela and alberta, canada", "alberta, canada and peru" ]
C
Relavent Documents: Document 0::: The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas. The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014. The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools. See also Marine Science Ministry of Fisheries and Aquatic Resources Development Document 1::: The University of Edinburgh School of GeoSciences, is a school within the College of Science and Engineering, which was formed in 2002 by the merger of four departments. It is split between the King's Buildings and the Central Area of the university. The institutes of Ecological Sciences and Earth Science are located at the King's Buildings, whilst the Institute of Geography is located on Drummond Street in the Central Area. In 2013 the department was ranked 8th best place to study geography in the country by The Guardian University Rankings, down from 2nd in 2006. The school is ranked as one of the best in the UK for Earth Sciences. A 2008 Research Assessment Exercise assessment ranked the "Earth Systems and Environmental Science" department as the best in the UK by number of world leading research and staff. Its Geography department was ranked 15th in the world according to the 2015 QS rankings. There are over 1100 undergraduate students and 250 postgraduate students in the School of GeoSciences. There are also around 100 research and teaching staff within the school. The School collaborates with the University of Edinburgh Business School and the School of Economics, to offer a Carbon Management MSc degree, the first in the world, which has students from over 20 countries. The school also has exchange programmes though the Erasmus programme, in addition to universities in Canada, the United States, Australia and New Zealand. The head of the School of GeoSciences is currently Professor Bryne Ngwenya. Famous recent alumni of the School include former BP chief executive Tony Hayward. Former Rector of the university Peter McColl matriculated at one of the predecessors, the Department of Geography. Competition for entry is highly selective, in 2010, the School received 2221 applications, but only 275 offers were made, representing a 16.9% of an applicant receiving an offer. The school currently offers 11 undergraduate courses and a range of postgraduate degrees. Document 2::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. 
Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 3::: In 2017 the department was merged with the Department of Geology and Mineral Resources Engineering, forming the new Department of Geoscience and Petroleum. The Norwegian University of Science and Technology (NTNU) is the key university of science and technology in Norway. The Department of Petroleum Engineering and Applied Geophysics (IPT) was established in 1973, shortly after the start of production (Ekofisk field) from the Norwegian continental shelf. The department came to include Petroleum Engineering as well as Geophysics, which is seen as a major strength of the petroleum education at NTNU. The department has elected chairman and vice chairman, and 4 informal groups of professors; geophysics, drilling, production and reservoir engineering. The stated primary purpose of maintaining the informal groups is to take care of the teaching in their respective disciplines. Each group is responsible for offering a sufficient number of courses, semester projects and thesis projects at MSc and PhD levels in their discipline, and to make annual revisions of these in accordance with the needs of society and industry. The total number of professors, associate professors, assistant professors and adjunct professors is 32. The administrative staff is led by a department administrator, and consists of a total of 6 secretaries. The technical support staff reports to the department head, and consists of 8 engineers and technicians. Until 2000, the department was part of the Applied Earth Sciences faculty, together with the Geology-department. After that, the department is part of the Faculty of Engineering Science and Technology (one of a total of 10 departments). Brief historical statistics of the department: Established in 1973 More than 2000 graduated M.Sc.´s More than 150 graduated Ph.D.´s Around 120 M.Sc.´s graduate every year Around 10 Ph.D.´s graduate every year Currently around 120 full-time teachers, researchers and staff Around 450 students enrolled at B.Sc. Document 4::: The Inverness Campus is an area in Inverness, Scotland. 
5.5 hectares of the site have been designated as an enterprise area for life sciences by the Scottish Government. This designation is intended to encourage research and development in the field of life sciences, by providing incentives to locate at the site. The enterprise area is part of a larger site, over 200 acres, which will house Inverness College, Scotland's Rural College (SRUC), the University of the Highlands and Islands, a health science centre and sports and other community facilities. The purpose built research hub will provide space for up to 30 staff and researchers, allowing better collaboration. The Highland Science Academy will be located on the site, a collaboration formed by Highland Council, employers and public bodies. The academy will be aimed towards assisting young people to gain the necessary skills to work in the energy, engineering and life sciences sectors. History The site was identified in 2006. Work started to develop the infrastructure on the site in early 2012. A virtual tour was made available in October 2013 to help mark Doors Open Day. The construction had reached halfway stage in May 2014, meaning that it is on track to open doors to receive its first students in August 2015. In May 2014, work was due to commence on a building designed to provide office space and laboratories as part of the campus's "life science" sector. Morrison Construction have been appointed to undertake the building work. Scotland's Rural College (SRUC) will be able to relocate their Inverness-based activities to the Campus. SRUC's research centre for Comparative Epidemiology and Medicine, and Agricultural Business Consultancy services could co-locate with UHI where their activities have complementary themes. By the start of 2017, there were more than 600 people working at the site. In June 2021, a new bridge opened connecting Inverness Campus to Inverness Shopping Park. It crosses the Aberdeen The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Where are about 75% of the tar sands in the world located? A. venezuela and onatrio,canada B. china and onatrio, canada C. venezuela and alberta, canada D. alberta, canada and peru Answer:
sciq-4689
multiple_choice
What is the primary cause of air movement in the troposphere?
[ "the ozone layer", "differences in heating", "asteroids", "solar winds" ]
B
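A short physical justification: at a given pressure, air density follows the ideal-gas relation

    \rho = \frac{pM}{RT}

(p pressure, M molar mass of air, R the gas constant, T temperature), so air warmed from below has a higher T and a lower density than its surroundings. The warm air rises, cooler and denser air sinks and flows in beneath it, and these convection currents set up by differences in heating are what drive air movement in the troposphere.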
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. 
Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 2::: The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields. Description The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions. The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.” Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers. Current efforts The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo Document 3::: Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers. There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by organizations such as the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. 
In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (Humanities, Arts, and Social Sciences), rebranded in 2020 as SHAPE (Social Sciences, Humanities and the Arts for People and the Economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM. Terminology History Previously referred to as SMET by the NSF, in the early 1990s the acronym STEM was used by a variety of educators, including Charles E. Vela, the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE). Moreover, the CAHSEE started a summer program for talented under-represented students in the Washington, D.C., area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering edu Document 4::: The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work. History It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council. Function Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres. STEM ambassadors To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. these come from a wide selection of the STEM industries and include TV personalities like Rob Bell. Funding STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments. See also The WISE Campaign Engineering and Physical Sciences Research Council National Centre for Excellence in Teaching Mathematics Association for Science Education Glossary of areas of mathematics Glossary of astronomy Glossary of biology Glossary of chemistry Glossary of engineering Glossary of physics The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary cause of air movement in the troposphere? A. the ozone layer B. differences in heating C. asteroids D. solar winds Answer:
sciq-4827
multiple_choice
Velocity affects what type of energy more than mass does?
[ "mechanical energy", "kinetic energy", "magnetic energy", "harmonic energy" ]
B
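A quick check with the kinetic-energy formula makes the comparison explicit:

    KE = \tfrac{1}{2} m v^{2}

Mass enters linearly while velocity enters squared, so doubling the mass doubles the kinetic energy but doubling the velocity quadruples it; velocity therefore has the larger effect.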
Relavent Documents: Document 0::: This is a list of topics that are included in high school physics curricula or textbooks. Mathematical Background SI Units Scalar (physics) Euclidean vector Motion graphs and derivatives Pythagorean theorem Trigonometry Motion and forces Motion Force Linear motion Linear motion Displacement Speed Velocity Acceleration Center of mass Mass Momentum Newton's laws of motion Work (physics) Free body diagram Rotational motion Angular momentum (Introduction) Angular velocity Centrifugal force Centripetal force Circular motion Tangential velocity Torque Conservation of energy and momentum Energy Conservation of energy Elastic collision Inelastic collision Inertia Moment of inertia Momentum Kinetic energy Potential energy Rotational energy Electricity and magnetism Ampère's circuital law Capacitor Coulomb's law Diode Direct current Electric charge Electric current Alternating current Electric field Electric potential energy Electron Faraday's law of induction Ion Inductor Joule heating Lenz's law Magnetic field Ohm's law Resistor Transistor Transformer Voltage Heat Entropy First law of thermodynamics Heat Heat transfer Second law of thermodynamics Temperature Thermal energy Thermodynamic cycle Volume (thermodynamics) Work (thermodynamics) Waves Wave Longitudinal wave Transverse waves Transverse wave Standing Waves Wavelength Frequency Light Light ray Speed of light Sound Speed of sound Radio waves Harmonic oscillator Hooke's law Reflection Refraction Snell's law Refractive index Total internal reflection Diffraction Interference (wave propagation) Polarization (waves) Vibrating string Doppler effect Gravity Gravitational potential Newton's law of universal gravitation Newtonian constant of gravitation See also Outline of physics Physics education Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. 
In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: Advanced Placement (AP) Physics C: Mechanics (also known as AP Mechanics) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a one-semester calculus-based university course in mechanics. The content of Physics C: Mechanics overlaps with that of AP Physics 1, but Physics 1 is algebra-based, while Physics C is calculus-based. Physics C: Mechanics may be combined with its electricity and magnetism counterpart to form a year-long course that prepares for both exams. Course content Intended to be equivalent to an introductory college course in mechanics for physics or engineering majors, the course modules are: Kinematics Newton's laws of motion Work, energy and power Systems of particles and linear momentum Circular motion and rotation Oscillations and gravitation. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a Calculus I class. This course is often compared to AP Physics 1: Algebra Based for its similar course material involving kinematics, work, motion, forces, rotation, and oscillations. However, AP Physics 1: Algebra Based lacks concepts found in Calculus I, like derivatives or integrals. This course may be combined with AP Physics C: Electricity and Magnetism to make a unified Physics C course that prepares for both exams. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Mechanics is separate from the AP examination for AP Physics C: Electricity and Magnetism. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday aftern Document 3::: Applied physics is the application of physics to solve scientific or engineering problems. It is usually considered a bridge or a connection between physics and engineering. "Applied" is distinguished from "pure" by a subtle combination of factors, such as the motivation and attitude of researchers and the nature of the relationship to the technology or science that may be affected by the work. Applied physics is rooted in the fundamental truths and basic concepts of the physical sciences but is concerned with the utilization of scientific principles in practical devices and systems and with the application of physics in other areas of science and high technology. 
Examples of research and development areas Accelerator physics Acoustics Atmospheric physics Biophysics Brain–computer interfacing Chemistry Chemical physics Differentiable programming Artificial intelligence Scientific computing Engineering physics Chemical engineering Electrical engineering Electronics Sensors Transistors Materials science and engineering Metamaterials Nanotechnology Semiconductors Thin films Mechanical engineering Aerospace engineering Astrodynamics Electromagnetic propulsion Fluid mechanics Military engineering Lidar Radar Sonar Stealth technology Nuclear engineering Fission reactors Fusion reactors Optical engineering Photonics Cavity optomechanics Lasers Photonic crystals Geophysics Materials physics Medical physics Health physics Radiation dosimetry Medical imaging Magnetic resonance imaging Radiation therapy Microscopy Scanning probe microscopy Atomic force microscopy Scanning tunneling microscopy Scanning electron microscopy Transmission electron microscopy Nuclear physics Fission Fusion Optical physics Nonlinear optics Quantum optics Plasma physics Quantum technology Quantum computing Quantum cryptography Renewable energy Space physics Spectroscopy See also Applied science Applied mathematics Engineering Engineering Physics High Technology Document 4::: In physics, a number of noted theories of the motion of objects have developed. Among the best known are: Classical mechanics Newton's laws of motion Euler's laws of motion Cauchy's equations of motion Kepler's laws of planetary motion General relativity Special relativity Quantum mechanics Motion (physics) The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Velocity affects what type of energy more than mass does? A. mechanical energy B. kinetic energy C. magnetic energy D. harmonic energy Answer:
sciq-11305
multiple_choice
What is the type of volcano with a tall cone shape that you picture when picturing a volcano?
[ "advanced", "inactive", "active", "composite" ]
D
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. 
Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 2::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 3::: Advanced Placement (AP) Calculus (also known as AP Calc, Calc AB / Calc BC or simply AB / BC) is a set of two distinct Advanced Placement calculus courses and exams offered by the American nonprofit organization College Board. AP Calculus AB covers basic introductions to limits, derivatives, and integrals. AP Calculus BC covers all AP Calculus AB topics plus additional topics (including integration by parts, Taylor series, parametric equations, vector calculus, and polar coordinate functions). AP Calculus AB AP Calculus AB is an Advanced Placement calculus course. It is traditionally taken after precalculus and is the first calculus course offered at most schools except for possibly a regular calculus class. The Pre-Advanced Placement pathway for math helps prepare students for further Advanced Placement classes and exams. 
Purpose According to the College Board: Topic outline The material includes the study and application of differentiation and integration, and graphical analysis including limits, asymptotes, and continuity. An AP Calculus AB course is typically equivalent to one semester of college calculus. Analysis of graphs (predicting and explaining behavior) Limits of functions (one and two sided) Asymptotic and unbounded behavior Continuity Derivatives Concept At a point As a function Applications Higher order derivatives Techniques Integrals Interpretations Properties Applications Techniques Numerical approximations Fundamental theorem of calculus Antidifferentiation L'Hôpital's rule Separable differential equations AP Calculus BC AP Calculus BC is equivalent to a full year regular college course, covering both Calculus I and II. After passing the exam, students may move on to Calculus III (Multivariable Calculus). Purpose According to the College Board, Topic outline AP Calculus BC includes all of the topics covered in AP Calculus AB, as well as the following: Convergence tests for series Taylor series Parametric equations Polar functions (inclu Document 4::: The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields. Description The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions. The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.” Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers. Current efforts The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the type of volcano with a tall cone shape that you picture when picturing a volcano? A. advanced B. inactive C. active D. composite Answer:
sciq-8782
multiple_choice
In the past, biologists grouped living organisms into five kingdoms: animals, plants, fungi, protists, and what?
[ "pathogens", "bacteria", "trees", "lizards" ]
B
Relavent Documents: Document 0::: Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology. Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad Document 1::: A supergroup, in evolutionary biology, is a large group of organisms that share one common ancestor and have important defining characteristics. It is an informal, mostly arbitrary rank in biological taxonomy that is often greater than phylum or kingdom, although some supergroups are also treated as phyla. Eukaryotic supergroups Since the decade of 2000's, the eukaryotic tree of life (abbreviated as eToL) has been divided into 5–8 major groupings called 'supergroups'. These groupings were established after the idea that only monophyletic groups should be accepted as ranks, as an alternative to the use of paraphyletic kingdom Protista. In the early days of the eToL six traditional supergroups were considered: Amoebozoa, Opisthokonta, "Excavata", Archaeplastida, "Chromalveolata" and Rhizaria. Since then, the eToL has been rearranged profoundly, and most of these groups were found as paraphyletic or lacked defining morphological characteristics that unite their members, which makes the 'supergroup' label more arbitrary. Document 2::: Scientists trying to reconstruct evolutionary history have been challenged by the fact that genes can sometimes transfer between distant branches on the tree of life. This movement of genes can occur through horizontal gene transfer (HGT), scrambling the information on which biologists relied to reconstruct the phylogeny of organisms. Conversely, HGT can also help scientists to reconstruct and date the tree of life. 
Indeed, a gene transfer can be used as a phylogenetic marker, or as the proof of contemporaneity of the donor and recipient organisms, and as a trace of extinct biodiversity. HGT happens very infrequently – at the individual organism level, it is highly improbable for any such event to take place. However, on the grander scale of evolutionary history, these events occur with some regularity. On one hand, this forces biologists to abandon the use of individual genes as good markers for the history of life. On the other hand, this provides an almost unexploited large source of information about the past. Three domains of life The three main early branches of the tree of life have been intensively studied by microbiologists because the first organisms were microorganisms. Microbiologists (led by Carl Woese) have introduced the term domain for the three main branches of this tree, where domain is a phylogenetic term similar in meaning to biological kingdom. To reconstruct this tree of life, the gene sequence encoding the small subunit of ribosomal RNA (SSU rRNA, 16s rRNA) has proven useful, and the tree (as shown in the picture) relies heavily on information from this single gene. These three domains of life represent the main evolutionary lineages of early cellular life and currently include Bacteria, Archaea (single-celled organisms superficially similar to bacteria), and Eukarya. Eukarya includes only organisms having a well-defined nucleus, such as fungi, protists, and all organisms in the plant and animals kingdoms (see figure). The gene most com Document 3::: Vertebrate zoology is the biological discipline that consists of the study of Vertebrate animals, i.e., animals with a backbone, such as fish, amphibians, reptiles, birds and mammals. Many natural history museums have departments named Vertebrate Zoology. In some cases whole museums bear this name, e.g. the Museum of Vertebrate Zoology at the University of California, Berkeley. Subdivisions This subdivision of zoology has many further subdivisions, including: Ichthyology - the study of fishes. Mammalogy - the study of mammals. Chiropterology - the study of bats. Primatology - the study of primates. Ornithology - the study of birds. Herpetology - the study of reptiles. Batrachology - the study of amphibians. These divisions are sometimes further divided into more specific specialties. Document 4::: Animals are multicellular eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, are able to move, reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million in total. Animals range in size from 8.5 millionths of a metre to long and have complex interactions with each other and their environments, forming intricate food webs. The study of animals is called zoology. Animals may be listed or indexed by many criteria, including taxonomy, status as endangered species, their geographical location, and their portrayal and/or naming in human culture. 
By common name List of animal names (male, female, young, and group) By aspect List of common household pests List of animal sounds List of animals by number of neurons By domestication List of domesticated animals By eating behaviour List of herbivorous animals List of omnivores List of carnivores By endangered status IUCN Red List endangered species (Animalia) United States Fish and Wildlife Service list of endangered species By extinction List of extinct animals List of extinct birds List of extinct mammals List of extinct cetaceans List of extinct butterflies By region Lists of amphibians by region Lists of birds by region Lists of mammals by region Lists of reptiles by region By individual (real or fictional) Real Lists of snakes List of individual cats List of oldest cats List of giant squids List of individual elephants List of historical horses List of leading Thoroughbred racehorses List of individual apes List of individual bears List of giant pandas List of individual birds List of individual bovines List of individual cetaceans List of individual dogs List of oldest dogs List of individual monkeys List of individual pigs List of w The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. In the past, biologists grouped living organisms into five kingdoms: animals, plants, fungi, protists, and what? A. pathogens B. bacteria C. trees D. lizards Answer:
ai2_arc-330
multiple_choice
A student pushes against a tree with a force of 10 newtons (N). The tree does not move. What is the amount of force exerted by the tree on the student?
[ "0 N", "5 N", "10 N", "20 N" ]
C
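By Newton's third law the tree exerts a force on the student that is equal in magnitude and opposite in direction to the 10 N the student exerts on the tree:

    F_{tree \to student} = -F_{student \to tree}, \qquad |F_{tree \to student}| = 10\ \text{N}

The tree's lack of motion reflects the other forces acting on it (ground, roots), not a larger reaction force, so the correct choice is C (10 N).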
Relavent Documents: Document 0::: The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered. The 1995 version has 30 five-way multiple choice questions. Example question (question 4): Gender differences The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher. Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: Advanced Placement (AP) Physics C: Mechanics (also known as AP Mechanics) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a one-semester calculus-based university course in mechanics. 
The content of Physics C: Mechanics overlaps with that of AP Physics 1, but Physics 1 is algebra-based, while Physics C is calculus-based. Physics C: Mechanics may be combined with its electricity and magnetism counterpart to form a year-long course that prepares for both exams. Course content Intended to be equivalent to an introductory college course in mechanics for physics or engineering majors, the course modules are: Kinematics Newton's laws of motion Work, energy and power Systems of particles and linear momentum Circular motion and rotation Oscillations and gravitation. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a Calculus I class. This course is often compared to AP Physics 1: Algebra Based for its similar course material involving kinematics, work, motion, forces, rotation, and oscillations. However, AP Physics 1: Algebra Based lacks concepts found in Calculus I, like derivatives or integrals. This course may be combined with AP Physics C: Electricity and Magnetism to make a unified Physics C course that prepares for both exams. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Mechanics is separate from the AP examination for AP Physics C: Electricity and Magnetism. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday aftern Document 3::: Sir Isaac Newton Sixth Form is a specialist maths and science sixth form with free school status located in Norwich, owned by the Inspiration Trust. It has the capacity for 480 students aged 16–19. It specialises in mathematics and science. History Prior to becoming a Sixth Form College the building functioned as a fire station serving the central Norwich area until August 2011 when it closed down. Two years later the Sixth Form was created within the empty building with various additions being made to the existing structure. The sixth form was ranked the 7th best state sixth form in England by the Times in 2022. Curriculum At Sir Isaac Newton Sixth Form, students can study a choice of either Maths, Further Maths, Core Maths, Biology, Chemistry, Physics, Computer Science, Environmental Science or Psychology. Additionally, students can also study any of the subjects on offer at the partner free school Jane Austen College, also located in Norwich and specialising in humanities, Arts and English. Document 4::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. 
On January 19, 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions relating, respectively, to ecological concepts (such as population studies and general ecology) on the E test and to molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. A student pushes against a tree with a force of 10 newtons (N). The tree does not move. What is the amount of force exerted by the tree on the student? A. 0 N B. 5 N C. 10 N D. 20 N Answer:
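As an aside on the scoring rule quoted in the SAT Subject Test excerpt above (one point per correct answer, minus a quarter point per incorrect answer, zero for blanks), here is a minimal sketch in Python of how a raw score could be computed; the function name and the example counts are hypothetical and not taken from the source:

def sat_raw_score(correct: int, incorrect: int, blank: int) -> float:
    # +1 per correct answer, -0.25 per incorrect answer; blanks contribute nothing.
    return correct - 0.25 * incorrect + 0.0 * blank

# Hypothetical example: 60 correct, 12 incorrect, 8 blank out of 80 questions.
print(sat_raw_score(60, 12, 8))  # 57.0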
sciq-3116
multiple_choice
What natural resource can be damaged by the accumulation of too much salt?
[ "soil", "mineral", "forests", "sediment" ]
A
Relavent Documents: Document 0::: Nutrient cycling in the Columbia River Basin involves the transport of nutrients through the system, as well as transformations from among dissolved, solid, and gaseous phases, depending on the element. The elements that constitute important nutrient cycles include macronutrients such as nitrogen (as ammonium, nitrite, and nitrate), silicate, phosphorus, and micronutrients, which are found in trace amounts, such as iron. Their cycling within a system is controlled by many biological, chemical, and physical processes. The Columbia River Basin is the largest freshwater system of the Pacific Northwest, and due to its complexity, size, and modification by humans, nutrient cycling within the system is affected by many different components. Both natural and anthropogenic processes are involved in the cycling of nutrients. Natural processes in the system include estuarine mixing of fresh and ocean waters, and climate variability patterns such as the Pacific Decadal Oscillation and the El Nino Southern Oscillation (both climatic cycles that affect the amount of regional snowpack and river discharge). Natural sources of nutrients in the Columbia River include weathering, leaf litter, salmon carcasses, runoff from its tributaries, and ocean estuary exchange. Major anthropogenic impacts to nutrients in the basin are due to fertilizers from agriculture, sewage systems, logging, and the construction of dams. Nutrients dynamics vary in the river basin from the headwaters to the main river and dams, to finally reaching the Columbia River estuary and ocean. Upstream in the headwaters, salmon runs are the main source of nutrients. Dams along the river impact nutrient cycling by increasing residence time of nutrients, and reducing the transport of silicate to the estuary, which directly impacts diatoms, a type of phytoplankton. The dams are also a barrier to salmon migration, and can increase the amount of methane locally produced. The Columbia River estuary exports high rates of n Document 1::: In ecology, base-richness is the level of chemical bases in water or soil, such as calcium or magnesium ions. Many organisms prefer base-rich environments. Chemical bases are alkalis, hence base-rich environments are either neutral or alkaline. Because acid-rich environments have few bases, they are dominated by environmental acids (usually organic acids). However, the relationship between base-richness and acidity is not a rigid one – changes in the levels of acids (such as dissolved carbon dioxide) may significantly change acidity without affecting base-richness. Base-rich terrestrial environments are characteristic of areas where underlying rocks (below soil) are limestone. Seawater is also base-rich, so maritime and marine environments are themselves base-rich. Base-poor environments are characteristic of areas where underlying rocks (below soil) are sandstone or granite, or where the water is derived directly from rainfall (ombrotrophic). Examples of base-rich environments Calcareous grassland Fen Limestone pavement Maquis shrubland Yew woodland Examples of base-poor environments Bog Heath (habitat) Poor fen Moorland Pine woodland Tundra See also Soil Calcicole Calcifuge Ecology Soil chemistry Document 2::: The sodium adsorption ratio (SAR) is an irrigation water quality parameter used in the management of sodium-affected soils. 
It is an indicator of the suitability of water for use in agricultural irrigation, as determined from the concentrations of the main alkaline and earth alkaline cations present in the water. It is also a standard diagnostic parameter for the sodicity hazard of a soil, as determined from analysis of pore water extracted from the soil. The formula for calculating the sodium adsorption ratio (SAR) is SAR = Na / sqrt((Ca + Mg) / 2), where the sodium, calcium, and magnesium concentrations are expressed in milliequivalents/liter. SAR allows assessment of the state of flocculation or of dispersion of clay aggregates in a soil. Sodium and potassium ions facilitate the dispersion of clay particles while calcium and magnesium promote their flocculation. The behaviour of clay aggregates influences the soil structure and affects the permeability of the soil, on which the water infiltration rate directly depends. It is important to accurately know the nature and the concentrations of cations at which the flocculation occurs: critical flocculation concentration (CFC). The SAR parameter is also used to determine the stability of colloids in suspension in water. Although SAR is only one factor in determining the suitability of water for irrigation, in general, the higher the sodium adsorption ratio, the less suitable the water is for irrigation. Irrigation using water with a high sodium adsorption ratio may require soil amendments to prevent long-term damage to the soil. If irrigation water with a high SAR is applied to a soil for years, the sodium in the water can displace the calcium and magnesium in the soil. This will cause a decrease in the ability of the soil to form stable aggregates and a loss of soil structure and tilth. This will also lead to a decrease in infiltration and permeability of the soil to water, leading to problems with crop production. Sandy soils will have less problem Document 3::: The critical relative humidity (CRH) of a salt is defined as the relative humidity of the surrounding atmosphere (at a certain temperature) at which the material begins to absorb moisture from the atmosphere and below which it will not absorb atmospheric moisture. When the humidity of the atmosphere is equal to (or is greater than) the critical relative humidity of a sample of salt, the sample will take up water until all of the salt is dissolved to yield a saturated solution. All water-soluble salts and mixtures have characteristic critical humidities; it is a unique material property. The critical relative humidity of most salts decreases with increasing temperature. For instance, the critical relative humidity of ammonium nitrate decreases 22% with a temperature increase from 0 °C to 40 °C (32 °F to 104 °F). The critical relative humidity of several fertilizer salts is given in Table 1: Table 1: Critical relative humidities of pure salts at 30°C. Mixtures of salts usually have lower critical humidities than either of the constituents. Fertilizers that contain urea as an ingredient usually exhibit a much lower critical relative humidity than fertilizers without urea. Table 2 shows CRH data for two-component mixtures: Table 2: Critical relative humidities of mixtures of salts at 30°C (values are percent relative humidity). As shown, the effect of salt mixing is most dramatic in the case of ammonium nitrate with urea. This mixture has an extremely low critical relative humidity and can therefore only be used in liquid fertilisers (so called UAN-solutions).
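A minimal sketch in Python of how the SAR formula given above can be evaluated for a water sample, with all concentrations in milliequivalents per liter; the sample values are hypothetical and not taken from the source:

import math

def sodium_adsorption_ratio(na: float, ca: float, mg: float) -> float:
    # SAR = Na / sqrt((Ca + Mg) / 2), with Na, Ca and Mg in meq/L.
    return na / math.sqrt((ca + mg) / 2.0)

# Hypothetical irrigation-water sample: Na = 12, Ca = 4, Mg = 2 meq/L.
print(round(sodium_adsorption_ratio(12.0, 4.0, 2.0), 2))  # about 6.93

Consistent with the passage above, a higher value indicates a greater sodicity hazard and less suitable irrigation water.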
See also Deliquescent Hygroscopy Humidity Document 4::: Geomicrobiology is the scientific field at the intersection of geology and microbiology and is a major subfield of geobiology. It concerns the role of microbes on geological and geochemical processes and effects of minerals and metals to microbial growth, activity and survival. Such interactions occur in the geosphere (rocks, minerals, soils, and sediments), the atmosphere and the hydrosphere. Geomicrobiology studies microorganisms that are driving the Earth's biogeochemical cycles, mediating mineral precipitation and dissolution, and sorbing and concentrating metals. The applications include for example bioremediation, mining, climate change mitigation and public drinking water supplies. Rocks and minerals Microbe-aquifer interactions Microorganisms are known to impact aquifers by modifying their rates of dissolution. In the karstic Edwards Aquifer, microbes colonizing the aquifer surfaces enhance the dissolution rates of the host rock. In the oceanic crustal aquifer, the largest aquifer on Earth, microbial communities can impact ocean productivity, sea water chemistry as well as geochemical cycling throughout the geosphere. The mineral make-up of the rocks affects the composition and abundance of these subseafloor microbial communities present. Through bioremediation some microbes can aid in decontaminating freshwater resources in aquifers contaminated by waste products. Microbially precipitated minerals Some bacteria use metal ions as their energy source. They convert (or chemically reduce) the dissolved metal ions from one electrical state to another. This reduction releases energy for the bacteria's use, and, as a side product, serves to concentrate the metals into what ultimately become ore deposits. Biohydrometallurgy or in situ mining is where low-grade ores may be attacked by well-studied microbial processes under controlled conditions to extract metals. Certain iron, copper, uranium and even gold ores are thought to have formed as the result of micr The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What natural resource can be damaged by the accumulation of too much salt? A. soil B. mineral C. forests D. sediment Answer:
sciq-2558
multiple_choice
What developmental stage do alligators lack that most other amphibians have?
[ "larval stage", "Tadpole stage", "Egg Stage", "metamorphosis" ]
A
Relavent Documents: Document 0::: Abronia bogerti, known by the common name Bogert's arboreal alligator lizard, is a species of lizard in the family Anguidae. The species is endemic to Mexico. Etymology The specific name, bogerti, is in honor of American herpetologist Charles Mitchill Bogert. Geographic range A. bogerti is indigenous to eastern Oaxaca, Mexico. A single specimen, the holotype, of A. bogerti was collected in 1954, and it was not seen again until 2000, at which time a second specimen was photographed. The type locality is "north of Niltepec, between Cerro Atravesado and Sierra Madre, Oaxaca". Behavior A. bogerti is largely arboreal. Reproduction A. bogerti is viviparous. Conservation status Because the species A. bogerti was collected in the canopy of the forest, it is believed that deforestation and ongoing crop and livestock farming pose the largest threats to its survival. Mexican law protects the lizard. Document 1::: A juvenile is an individual organism (especially an animal) that has not yet reached its adult form, sexual maturity or size. Juveniles can look very different from the adult form, particularly in colour, and may not fill the same niche as the adult form. In many organisms the juvenile has a different name from the adult (see List of animal names). Some organisms reach sexual maturity in a short metamorphosis, such as ecdysis in many insects and some other arthropods. For others, the transition from juvenile to fully mature is a more prolonged process—puberty in humans and other species (like higher primates and whales), for example. In such cases, juveniles during this transformation are sometimes called subadults. Many invertebrates cease development upon reaching adulthood. The stages of such invertebrates are larvae or nymphs. In vertebrates and some invertebrates (e.g. spiders), larval forms (e.g. tadpoles) are usually considered a development stage of their own, and "juvenile" refers to a post-larval stage that is not fully grown and not sexually mature. In amniotes, the embryo represents the larval stage. Here, a "juvenile" is an individual in the time between hatching/birth/germination and reaching maturity. Examples For animal larval juveniles, see larva Juvenile birds or bats can be called fledglings For cat juveniles, see kitten For dog juveniles, see puppy For human juvenile life stages, see childhood and adolescence, an intermediary period between the onset of puberty and full physical, psychological, and social adulthood Document 2::: The "Standard Event System" (SES) to Study Vertebrate Embryos was developed in 2009 to establish a common language in comparative embryology. Homologous developmental characters are defined therein and should be recognisable in all vertebrate embryos. The SES includes a protocol on how to describe and depict vertebrate embryonic characters. The SES was initially developed for external developmental characters of organogenesis, particularly for turtle embryos. However, it is expandable both taxonomically and in regard to anatomical or molecular characters. This article should act as an overview on the species staged with SES and document the expansions of this system. New entries need to be validated based on the citation of scientific publications. The guideline on how to establish new SES-characters and to describe species can be found in the original paper of Werneburg (2009). 
SES-characters are used to reconstruct ancestral developmental sequences in evolution such as that of the last common ancestor of placental mammals. Also the plasticity of developmental characters can be documented and analysed. SES-staged species Overview on the vertebrate species staged with SES. SES-characters New SES-characters are continuously described in new publications. Currently, characters of organogenesis are described for Vertebrata (V), Gnathostomata (G), Tetrapoda (T), Amniota (A), Sauropsida (S), Squamata (SQ), Mammalia (M), and Monotremata (MO). In total, 166 SES-characters are currently defined. Document 3::: The axolotl (; from ) (Ambystoma mexicanum) is a paedomorphic salamander closely related to the tiger salamander. It is unusual among amphibians in that it reaches adulthood without undergoing metamorphosis. Instead of taking to the land, adults remain aquatic and gilled. The species was originally found in several lakes underlying what is now Mexico City, such as Lake Xochimilco and Lake Chalco. These lakes were drained by Spanish settlers after the conquest of the Aztec Empire, leading to the destruction of much of the axolotl's natural habitat. , the axolotl was near extinction due to urbanization in Mexico City and consequent water pollution, as well as the introduction of invasive species such as tilapia and perch. It is listed as critically endangered in the wild, with a decreasing population of around 50 to 1,000 adult individuals, by the International Union for Conservation of Nature and Natural Resources (IUCN) and is listed under Appendix II of the Convention on International Trade in Endangered Species (CITES). Axolotls are used extensively in scientific research due to their ability to regenerate limbs, gills and parts of their eyes and brains. Notably, their ability to regenerate declines with age, but it does not disappear. Axolotls keep modestly growing throughout their life and some consider this trait to be a direct contributor to their regenerative abilities. Further research has been conducted to examine their heart as a model of human single ventricle and excessive trabeculation. Axolotls were also sold as food in Mexican markets and were a staple in the Aztec diet. Axolotls should not be confused with the larval stage of the closely related tiger salamander (A. tigrinum), which are widespread in much of North America and occasionally become paedomorphic. Neither should they be confused with mudpuppies (Necturus spp.), fully aquatic salamanders from a different family that are not closely related to the axolotl but bear a superficial resemblan Document 4::: An associated reproductive pattern is a seasonal change in reproduction which is highly correlated with a change in gonad and associated hormone. Notable Model Organisms Parthenogenic Whiptail Lizards The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What developmental stage do alligators lack that most other amphibians have? A. larval stage B. Tadpole stage C. Egg Stage D. metamorphosis Answer:
sciq-8196
multiple_choice
What is the most common type of joint in the human body?
[ "proximal joints", "movable joints", "dorsal joints", "transverse joints" ]
B
Relavent Documents: Document 0::: Instruments used in Anatomy dissections are as follows: Instrument list Image gallery Document 1::: The axillary joints are two joints in the axillary region of the body, and include the shoulder joint and the acromioclavicular joint. Shoulder joint The shoulder joint also known as the glenohumeral joint is a synovial ball and socket joint. The shoulder joint involves articulation between the glenoid cavity of the scapula (shoulder blade) and the head of the upper arm bone (humerus) and functions as a diarthrosis and multiaxial joint. Due to the very loose joint capsule that gives a limited interface of the humerus and scapula, it is the most mobile joint of the human body. Acromioclavicular joint The acromioclavicular joint, is the joint at the top of the shoulder. It is the junction between the acromion (part of the scapula that forms the highest point of the shoulder) and the clavicle. It is a plane synovial joint. The acromioclavicular joint allows the arm to be raised above the head. This joint functions as a pivot point (although technically it is a gliding synovial joint), acting like a strut to help with movement of the scapula resulting in a greater degree of arm rotation. Document 2::: The list below describes such skeletal movements as normally are possible in particular joints of the human body. Other animals have different degrees of movement at their respective joints; this is because of differences in positions of muscles and because structures peculiar to the bodies of humans and other species block motions unsuited to their anatomies. Arm and shoulder Shoulder elbow The major muscles involved in retraction include the rhomboid major muscle, rhomboid minor muscle and trapezius muscle, whereas the major muscles involved in protraction include the serratus anterior and pectoralis minor muscles. Sternoclavicular and acromioclavicular joints Elbow Wrist and fingers Movements of the fingers Movements of the thumb Neck Spine Lower limb Knees Feet The muscles tibialis anterior and tibialis posterior invert the foot. Some sources also state that the triceps surae and extensor hallucis longus invert. Inversion occurs at the subtalar joint and transverse tarsal joint. Eversion of the foot occurs at the subtalar joint. The muscles involved in this include Fibularis longus and fibularis brevis, which are innervated by the superficial fibular nerve. Some sources also state that the fibularis tertius everts. Dorsiflexion of the foot: The muscles involved include those of the Anterior compartment of leg, specifically tibialis anterior muscle, extensor hallucis longus muscle, extensor digitorum longus muscle, and peroneus tertius. The range of motion for dorsiflexion indicated in the literature varies from 12.2 to 18 degrees. Foot drop is a condition, that occurs when dorsiflexion is difficult for an individual who is walking. Plantarflexion of the foot: Primary muscles for plantar flexion are situated in the Posterior compartment of leg, namely the superficial Gastrocnemius, Soleus and Plantaris (only weak participation), and the deep muscles Flexor hallucis longus, Flexor digitorum longus and Tibialis posterior. Muscles in the Lateral co Document 3::: The collateral ligaments of metatarsophalangeal joints are strong, rounded cords, placed one on either side of each joint, and attached, by one end, to the posterior tubercle on the side of the head of the metatarsal bone, and, by the other, to the contiguous extremity of the phalanx. 
The place of dorsal ligaments is supplied by the extensor tendons on the dorsal surfaces of the joints. Document 4::: A condyloid joint (also called condylar, ellipsoidal, or bicondylar) is an ovoid articular surface, or condyle that is received into an elliptical cavity. This permits movement in two planes, allowing flexion, extension, adduction, abduction, and circumduction. Examples Examples include: the wrist-joint metacarpophalangeal joints metatarsophalangeal joints atlanto-occipital joints These are also called ellipsoid joints. The oval-shaped condyle of one bone fits into the elliptical cavity of the other bone. These joints allow biaxial movements — i.e., forward and backward, or from side to side, but not rotation. Radiocarpal joint and Metacarpo-phalangeal joint are examples of condyloid joints. An example of an Ellipsoid joint is the wrist; it functions similarly to the ball and socket joint except is unable to rotate 360 degrees; it prohibits axial rotation. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the most common type of joint in the human body? A. proximal joints B. movable joints C. dorsal joints D. transverse joints Answer:
sciq-3762
multiple_choice
What can be combined with an amine to form an amide?
[ "ketones", "carbon dioxide", "acetic acid", "carboxylic acid" ]
D
Relavent Documents: Document 0::: Reduction Reduction of ethyl acetoacetate gives ethyl 3-hydroxybutyrate. Transesterification Ethyl acetoacetate transesterifies to give benzyl acetoacetate via a mechanism involving acetylketene. Ethyl (and other) acetoacetates nitrosate readily with equimolar Document 1::: Thioesters can be conveniently prepared from alcohols by the Mitsunobu reaction, using thioacetic acid. They also arise via carbonylation of alkynes and alkenes in the presence of thiols. Reactions Thioesters hydrolyze to thiols and the carboxylic acid: RC(O)SR' + H2O → RCO2H + RSH The carbonyl center in thioesters is more reactive toward amine nucleophiles to give amides: In a related reaction, but using a soft-metal to capture the thiolate, thioesters are converted into esters. Document 2::: This is a list of articles that describe particular biomolecules or types of biomolecules. A For substances with an A- or α- prefix such as α-amylase, please see the parent page (in this case Amylase). A23187 (Calcimycin, Calcium Ionophore) Abamectine Abietic acid Acetic acid Acetylcholine Actin Actinomycin D Adenine Adenosmeme Adenosine diphosphate (ADP) Adenosine monophosphate (AMP) Adenosine triphosphate (ATP) Adenylate cyclase Adiponectin Adonitol Adrenaline, epinephrine Adrenocorticotropic hormone (ACTH) Aequorin Aflatoxin Agar Alamethicin Alanine Albumins Aldosterone Aleurone Alpha-amanitin Alpha-MSH (Melaninocyte stimulating hormone) Allantoin Allethrin α-Amanatin, see Alpha-amanitin Amino acid Amylase (also see α-amylase) Anabolic steroid Anandamide (ANA) Androgen Anethole Angiotensinogen Anisomycin Antidiuretic hormone (ADH) Anti-Müllerian hormone (AMH) Arabinose Arginine Argonaute Ascomycin Ascorbic acid (vitamin C) Asparagine Aspartic acid Asymmetric dimethylarginine ATP synthase Atrial-natriuretic peptide (ANP) Auxin Avidin Azadirachtin A – C35H44O16 B Bacteriocin Beauvericin beta-Hydroxy beta-methylbutyric acid beta-Hydroxybutyric acid Bicuculline Bilirubin Biopolymer Biotin (Vitamin H) Brefeldin A Brassinolide Brucine Butyric acid C Document 3::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. 
The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions relating, respectively, to ecological concepts (such as population studies and general ecology) on the E test and to molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a Document 4::: The School of Textile and Clothing industries (ESITH) is a Moroccan engineering school, established in 1996, that focuses on textiles and clothing. It was created in collaboration with ENSAIT and ENSISA, as a result of a public-private partnership designed to grow a key sector in the Moroccan economy. The partnership was successful and has been used as a model for other schools. ESITH is the only engineering school in Morocco that provides a comprehensive program in textile engineering with internships for students at the Canadian Group CTT. ESITH offers three programs in industrial engineering: product management, supply chain, and logistics, and textile and clothing The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What can be combined with an amine to form an amide? A. ketones B. carbon dioxide C. acetic acid D. carboxylic acid Answer:
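As a brief illustration of the amide-forming reaction this record asks about (standard organic chemistry, not taken from the excerpts), a carboxylic acid condenses with an amine, losing water:

R-COOH + R'-NH2 → R-C(O)-NH-R' + H2O

In practice this direct condensation needs heat or an activating (coupling) agent; the analogous reaction of a thioester with an amine, mentioned in the thioester excerpt above, gives the same amide but releases a thiol instead of water.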
sciq-471
multiple_choice
Bones are the main organs of what system, which also includes cartilage and ligaments?
[ "lymphatic system", "digestive system", "skeletal system", "endocrine system" ]
C
Relavent Documents: Document 0::: In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from same type cells to act together in a function. Tissues of different types combine to form an organ which has a specific function. The intestinal wall for example is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system. An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs. The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body. Animals Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. The sam Document 1::: A biological system is a complex network which connects several biologically relevant entities. Biological organization spans several scales and are determined based different structures depending on what the system is. Examples of biological systems at the macro scale are populations of organisms. On the organ and tissue scale in mammals and other animals, examples include the circulatory system, the respiratory system, and the nervous system. On the micro to the nanoscopic scale, examples of biological systems are cells, organelles, macromolecular complexes and regulatory pathways. A biological system is not to be confused with a living system, such as a living organism. Organ and tissue systems These specific systems are widely studied in human anatomy and are also present in many other animals. Respiratory system: the organs used for breathing, the pharynx, larynx, bronchi, lungs and diaphragm. Digestive system: digestion and processing food with salivary glands, oesophagus, stomach, liver, gallbladder, pancreas, intestines, rectum and anus. Cardiovascular system (heart and circulatory system): pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels. Urinary system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine. Integumentary system: skin, hair, fat, and nails. Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons. 
Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroid and adrenals, i.e., adrenal glands. Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream; includes the lymph and the nodes and vessels. The lymphatic system includes functions including immune responses and development of antibodies. Immune system: protects the organism from Document 2::: Outline h1.00: Cytology h2.00: General histology H2.00.01.0.00001: Stem cells H2.00.02.0.00001: Epithelial tissue H2.00.02.0.01001: Epithelial cell H2.00.02.0.02001: Surface epithelium H2.00.02.0.03001: Glandular epithelium H2.00.03.0.00001: Connective and supportive tissues H2.00.03.0.01001: Connective tissue cells H2.00.03.0.02001: Extracellular matrix H2.00.03.0.03001: Fibres of connective tissues H2.00.03.1.00001: Connective tissue proper H2.00.03.1.01001: Ligaments H2.00.03.2.00001: Mucoid connective tissue; Gelatinous connective tissue H2.00.03.3.00001: Reticular tissue H2.00.03.4.00001: Adipose tissue H2.00.03.5.00001: Cartilage tissue H2.00.03.6.00001: Chondroid tissue H2.00.03.7.00001: Bone tissue; Osseous tissue H2.00.04.0.00001: Haemotolymphoid complex H2.00.04.1.00001: Blood cells H2.00.04.1.01001: Erythrocyte; Red blood cell H2.00.04.1.02001: Leucocyte; White blood cell H2.00.04.1.03001: Platelet; Thrombocyte H2.00.04.2.00001: Plasma H2.00.04.3.00001: Blood cell production H2.00.04.4.00001: Postnatal sites of haematopoiesis H2.00.04.4.01001: Lymphoid tissue H2.00.05.0.00001: Muscle tissue H2.00.05.1.00001: Smooth muscle tissue Document 3::: H2.00.04.4.01001: Lymphoid tissue H2.00.05.0.00001: Muscle tissue H2.00.05.1.00001: Smooth muscle tissue H2.00.05.2.00001: Striated muscle tissue H2.00.06.0.00001: Nerve tissue H2.00.06.1.00001: Neuron H2.00.06.2.00001: Synapse H2.00.06.2.00001: Neuroglia h3.01: Bones h3.02: Joints h3.03: Muscles h3.04: Alimentary system h3.05: Respiratory system h3.06: Urinary system h3.07: Genital system h3.08: Document 4::: This article contains a list of organs of the human body. A general consensus is widely believed to be 79 organs (this number goes up if you count each bone and muscle as an organ on their own, which is becoming more common practice to do); however, there is no universal standard definition of what constitutes an organ, and some tissue groups' status as one is debated. Since there is no single standard definition of what an organ is, the number of organs varies depending on how one defines an organ. For example, this list contains more than 79 organs (about ~103). It is still not clear which definition of an organ is used for all the organs in this list, it seemed that it may have been compiled based on what wikipedia articles were available on organs. 
Musculoskeletal system Skeleton Joints Ligaments Muscular system Tendons Digestive system Mouth Teeth Tongue Lips Salivary glands Parotid glands Submandibular glands Sublingual glands Pharynx Esophagus Stomach Small intestine Duodenum Jejunum Ileum Large intestine Cecum Ascending colon Transverse colon Descending colon Sigmoid colon Rectum Liver Gallbladder Mesentery Pancreas Anal canal Appendix Respiratory system Nasal cavity Pharynx Larynx Trachea Bronchi Bronchioles and smaller air passages Lungs Muscles of breathing Urinary system Kidneys Ureter Bladder Urethra Reproductive systems Female reproductive system Internal reproductive organs Ovaries Fallopian tubes Uterus Cervix Vagina External reproductive organs Vulva Clitoris Male reproductive system Internal reproductive organs Testicles Epididymis Vas deferens Prostate External reproductive organs Penis Scrotum Endocrine system Pituitary gland Pineal gland Thyroid gland Parathyroid glands Adrenal glands Pancreas Circulatory system Circulatory system Heart Arteries Veins Capillaries Lymphatic system Lymphatic vessel Lymph node Bone marrow Thymus Spleen Gut-associated lymphoid tissue Tonsils Interstitium Nervous system Central nervous system The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Bones are the main organs of what system, which also includes cartilage and ligaments? A. lymphatic system B. digestive system C. skeletal system D. endocrine system Answer:
sciq-5430
multiple_choice
What type of gas are stars made up of?
[ "freon", "water vapor", "calcium", "hydrogen" ]
D
Relavent Documents: Document 0::: An asteroid spectral type is assigned to asteroids based on their reflectance spectrum, color, and sometimes albedo. These types are thought to correspond to an asteroid's surface composition. For small bodies that are not internally differentiated, the surface and internal compositions are presumably similar, while large bodies such as Ceres and Vesta are known to have internal structure. Over the years, there has been a number of surveys that resulted in a set of different taxonomic systems such as the Tholen, SMASS and Bus–DeMeo classifications. Taxonomic systems In 1975, astronomers Clark R. Chapman, David Morrison, and Ben Zellner developed a simple taxonomic system for asteroids based on color, albedo, and spectral shape. The three categories were labelled "C" for dark carbonaceous objects, "S" for stony (silicaceous) objects, and "U" for those that did not fit into either C or S. This basic division of asteroid spectra has since been expanded and clarified. A number of classification schemes are currently in existence, and while they strive to retain some mutual consistency, quite a few asteroids are sorted into different classes depending on the particular scheme. This is due to the use of different criteria for each approach. The two most widely used classifications are described below: Overview of Tholen and SMASS S3OS2 classification The Small Solar System Objects Spectroscopic Survey (S3OS2 or S3OS2, also known as the Lazzaro classification) observed 820 asteroids, using the former ESO 1.52-metre telescope at La Silla Observatory during 1996–2001. This survey applied both the Tholen and Bus–Binzel (SMASS) taxonomy to the observed objects, many of which had previously not been classified. For the Tholen-like classification, the survey introduced a new "Caa-type", which shows a broad absorption band associated indicating an aqueous alteration of the body's surface. The Caa class corresponds to Tholen's C-type and to the SMASS hydrated Ch-type (inclu Document 1::: Stellar molecules are molecules that exist or form in stars. Such formations can take place when the temperature is low enough for molecules to form – typically around 6000 K or cooler. Otherwise the stellar matter is restricted to atoms (chemical elements) in the forms of gas or – at very high temperatures – plasma. Background Matter is made up by atoms (formed by protons and other subatomic particles). When the environment is right, atoms can join together and form molecules, which give rise to most materials studied in materials science. But certain environments, such as high temperatures, don't allow atoms to form molecules. Stars have very high temperatures, primarily in their interior, and therefore there are few molecules formed in stars. For this reason, a typical chemist (who studies atoms and molecules) would not have much to study in a star, so stars are better explained by astrophysicists or astrochemists. However, low abundance of molecules in stars is not equated with no molecules at all. By the mid-18th century, scientists surmised that the source of the Sun's light was incandescence, rather than combustion. Evidence and research Although the Sun is a star, its photosphere has a low enough temperature of , and therefore molecules can form. Water has been found on the Sun, and there is evidence of H2 in white dwarf stellar atmospheres. Cooler stars include absorption band spectra that are characteristic of molecules. 
Similar absorption bands are found in sun spots which are cooler areas on the Sun. Molecules found in the Sun include MgH, CaH, FeH, CrH, NaH, OH, SiH, VO, and TiO. Others include CN CH, MgF, NH, C2, SrF, zirconium monoxide, YO, ScO, BH. Stars of most types can contain molecules, even the Ap category of A class stars. Only the hottest O, B and A class stars have no detectable molecules. Also carbon rich white dwarfs, even though very hot, have spectral lines of C2 and CH. Laboratory measurements Measurements of simple molecules t Document 2::: Star formation is the process by which dense regions within molecular clouds in interstellar space, sometimes referred to as "stellar nurseries" or "star-forming regions", collapse and form stars. As a branch of astronomy, star formation includes the study of the interstellar medium (ISM) and giant molecular clouds (GMC) as precursors to the star formation process, and the study of protostars and young stellar objects as its immediate products. It is closely related to planet formation, another branch of astronomy. Star formation theory, as well as accounting for the formation of a single star, must also account for the statistics of binary stars and the initial mass function. Most stars do not form in isolation but as part of a group of stars referred as star clusters or stellar associations. Stellar nurseries Interstellar clouds Spiral galaxies like the Milky Way contain stars, stellar remnants, and a diffuse interstellar medium (ISM) of gas and dust. The interstellar medium consists of 104 to 106 particles per cm3, and is typically composed of roughly 70% hydrogen, 28% helium, and 1.5% heavier elements by mass. The trace amounts of heavier elements were and are produced within stars via stellar nucleosynthesis and ejected as the stars pass beyond the end of their main sequence lifetime. Higher density regions of the interstellar medium form clouds, or diffuse nebulae, where star formation takes place. In contrast to spiral galaxies, elliptical galaxies lose the cold component of its interstellar medium within roughly a billion years, which hinders the galaxy from forming diffuse nebulae except through mergers with other galaxies. In the dense nebulae where stars are produced, much of the hydrogen is in the molecular (H2) form, so these nebulae are called molecular clouds. The Herschel Space Observatory has revealed that filaments, or elongated dense gas structures, are truly ubiquitous in molecular clouds and central to the star formation process. They fr Document 3::: A red dwarf is the smallest and coolest kind of star on the main sequence. Red dwarfs are by far the most common type of star in the Milky Way, at least in the neighborhood of the Sun. However, as a result of their low luminosity, individual red dwarfs cannot be easily observed. From Earth, not one star that fits the stricter definitions of a red dwarf is visible to the naked eye. Proxima Centauri, the nearest star to the Sun, is a red dwarf, as are fifty of the sixty nearest stars. According to some estimates, red dwarfs make up three-quarters of the stars in the Milky Way. The coolest red dwarfs near the Sun have a surface temperature of about and the smallest have radii about 9% that of the Sun, with masses about 7.5% that of the Sun. These red dwarfs have spectral types of L0 to L2. There is some overlap with the properties of brown dwarfs, since the most massive brown dwarfs at lower metallicity can be as hot as and have late M spectral types. 
Definitions and usage of the term "red dwarf" vary on how inclusive they are on the hotter and more massive end. One definition is synonymous with stellar M dwarfs (M-type main sequence stars), yielding a maximum temperature of and . One includes all stellar M-type main-sequence and all K-type main-sequence stars (K dwarf), yielding a maximum temperature of and . Some definitions include any stellar M dwarf and part of the K dwarf classification. Other definitions are also in use. Many of the coolest, lowest mass M dwarfs are expected to be brown dwarfs, not true stars, and so those would be excluded from any definition of red dwarf. Stellar models indicate that red dwarfs less than are fully convective. Hence, the helium produced by the thermonuclear fusion of hydrogen is constantly remixed throughout the star, avoiding helium buildup at the core, thereby prolonging the period of fusion. Low-mass red dwarfs therefore develop very slowly, maintaining a constant luminosity and spectral type for trillions of years, Document 4::: Stellar chemistry is the study of the chemical composition of astronomical objects; stars in particular, hence the name stellar chemistry. The significance of stellar chemical composition is an open ended question at this point. Some research asserts that a greater abundance of certain elements (such as carbon, sodium, silicon, and magnesium) in the stellar mass are necessary for a star's inner solar system to be habitable over long periods of time. The hypothesis being that the "abundance of these elements make the star cooler and cause it to evolve more slowly, thereby giving planets in its habitable zone more time to develop life as we know it." Stellar abundance of oxygen also appears to be critical to the length of time newly developed planets exist in a habitable zone around their host star. Researchers postulate that if our own sun had a lower abundance of oxygen, the Earth would have ceased to "live" in a habitable zone a billion years ago, long before complex organisms had the opportunity to evolve. Other research Other research is being or has been done in numerous areas relating to the chemical nature of stars. The formation of stars is of particular interest. Research published in 2009 presents spectroscopic observations of so-called "young stellar objects" viewed in the Large Magellanic Cloud with the Spitzer Space Telescope. This research suggests that water, or, more specifically, ice, plays a large role in the formation of these eventual stars Others are researching much more tangible ideas relating to stars and chemistry. Research published in 2010 studied the effects of a strong stellar flare on the atmospheric chemistry of an Earth-like planet orbiting an M dwarf star, specifically, the M dwarf AD Leonis. This research simulated the effects an observed flare produced by AD Leonis on April 12, 1985 would have on a hypothetical Earth-like planet. After simulating the effects of both UV radiation and protons on the hypothetical planet's a The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of gas are stars made up of? A. freon B. water vapor C. calcium D. hydrogen Answer:
scienceQA-3693
multiple_choice
What do these two changes have in common? a rock heating up in a campfire cooking an egg
[ "Both are chemical changes.", "Both are caused by cooling.", "Both are caused by heating.", "Both are only physical changes." ]
C
Step 1: Think about each change. A rock heating up in a campfire is a physical change. The temperature of the rock goes up, but the rock is still made of the same type of matter. Cooking an egg is a chemical change. The heat causes the matter in the egg to change. Cooked eggs and raw eggs are made of different types of matter. Step 2: Look at each answer choice. Both are only physical changes. A rock heating up in a campfire is a physical change. But cooking an egg is not. Both are chemical changes. Cooking an egg is a chemical change. But a rock heating up in a campfire is not. Both are caused by heating. Both changes are caused by heating. Both are caused by cooling. Neither change is caused by cooling.
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds. Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate. A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density. An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge. Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. 
Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change. Examples Heating and cooling Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation. Magnetism Ferro-magnetic materials can become magnetic. The process is reve Document 2::: Thermofluids is a branch of science and engineering encompassing four intersecting fields: Heat transfer Thermodynamics Fluid mechanics Combustion The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids". Heat transfer Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer. Sections include : Energy transfer by heat, work and mass Laws of thermodynamics Entropy Refrigeration Techniques Properties and nature of pure substances Applications Engineering : Predicting and analysing the performance of machines Thermodynamics Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems. Fluid mechanics Fluid Mechanics the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance. Sections include: Flu Document 3::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. 
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 4::: Energy transformation, also known as energy conversion, is the process of changing energy from one form to another. In physics, energy is a quantity that provides the capacity to perform work or moving (e.g. lifting an object) or provides heat. In addition to being converted, according to the law of conservation of energy, energy is transferable to a different location or object, but it cannot be created or destroyed. The energy in many of its forms may be used in natural processes, or to provide some service to society such as heating, refrigeration, lighting or performing mechanical work to operate machines. For example, to heat a home, the furnace burns fuel, whose chemical potential energy is converted into thermal energy, which is then transferred to the home's air to raise its temperature. Limitations in the conversion of thermal energy Conversions to thermal energy from other forms of energy may occur with 100% efficiency. Conversion among non-thermal forms of energy may occur with fairly high efficiency, though there is always some energy dissipated thermally due to friction and similar processes. Sometimes the efficiency is close to 100%, such as when potential energy is converted to kinetic energy as an object falls in a vacuum. This also applies to the opposite case; for example, an object in an elliptical orbit around another body converts its kinetic energy (speed) into gravitational potential energy (distance from the other object) as it moves away from its parent body. When it reaches the furthest point, it will reverse the process, accelerating and converting potential energy into kinetic. Since space is a near-vacuum, this process has close to 100% efficiency. Thermal energy is unique because it in most cases (willow) cannot be converted to other forms of energy. Only a difference in the density of thermal/heat energy (temperature) can be used to perform work, and the efficiency of this conversion will be (much) less than 100%. This is because t The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do these two changes have in common? a rock heating up in a campfire cooking an egg A. Both are chemical changes. B. Both are caused by cooling. C. Both are caused by heating. D. Both are only physical changes. Answer:
sciq-3352
multiple_choice
Why do sharks sense low levels of electricity?
[ "to locate prey", "to locate mates", "to reproduce", "to sleep" ]
A
Relavent Documents: Document 0::: Electropositive metals (EPMs) are a new class of shark repellent materials that produce a measurable voltage when immersed in an electrolyte such as seawater. The voltages produced are as high as 1.75 VDC in seawater. It is hypothesized that this voltage overwhelms the ampullary organ in sharks, producing a repellent action. Since bony fish lack the ampullary organ, the repellent is selective to sharks and rays. The process is electrochemical, so no external power input is required. As chemical work is done, the metal is lost in the form of corrosion. Depending on the alloy or metal utilized and its thickness, the electropositive repellent effect lasts up to 48 hours. The reaction of the electropositive metal in seawater produces hydrogen gas bubbles and an insoluble nontoxic hydroxide as a precipitate which settles downward in the water column. History SharkDefense made the discovery of electrochemical shark repellent effects on May 1, 2006 at South Bimini, Bahamas at the Bimini Biological Field Station. An electropositive metal, which was a component of a permanent magnet, was chosen as an experimental control for a tonic immobility experiment by Eric Stroud using a juvenile lemon shark (Negaprion brevirostris). It was anticipated that this metal would produce no effect, since it was not ferromagnetic. However, a violent rousing response was observed when the metal was brought within 50 cm of the shark's nose. The experiment was repeated with three other juvenile lemon sharks and two other juvenile nurse sharks (Ginglymostoma cirratum), and care was taken to eliminate all stray metal objects in the testing site. Patrick Rice, Michael Herrmann, and Eric Stroud were present at this first trial. Mike Rowe, from Discovery Channel’s Dirty Jobs series, subsequently witnessed and participated in a test using an electropositive metal within 24 hours after the discovery. In the next three months, a variety of transition metals, lanthanides, post-transition metals, Document 1::: A shark repellent is any method of driving sharks away from an area. Shark repellents are a category of animal repellents. Shark repellent technologies include magnetic shark repellent, electropositive shark repellents, electrical repellents, and semiochemicals. Shark repellents can be used to protect people from sharks by driving the sharks away from areas where they are likely to kill human beings. In other applications, they can be used to keep sharks away from areas they may be a danger to themselves due to human activity. In this case, the shark repellent serves as a shark conservation method. There are some naturally occurring shark repellents; modern artificial shark repellents date to at least the 1940s, with the United States Navy using them in the Pacific Ocean theater of World War II. Natural repellents It has traditionally been believed that sharks are repelled by the smell of a dead shark; however, modern research has had mixed results. The Pardachirus marmoratus fish (finless sole, Red Sea Moses sole) repels sharks through its secretions. The best-understood factor is pardaxin, acting as an irritant to the sharks' gills, but other chemicals have been identified as contributing to the repellent effect. In 2017, the US Navy announced that it was developing a synthetic analog of hagfish slime with potential application as a shark repellent. 
History Some of the earliest research on shark repellents took place during the Second World War when military services sought to minimize the risk to stranded aviators and sailors in the water. Research has continued to the present, with notable researchers including Americans Eugenie Clark, and later Samuel H. Gruber, who has conducted tests at the Bimini Sharklab in Bimini, and the Japanese scientist Kazuo Tachibana. Future celebrity chef Julia Child developed shark repellent while working for the Office of Strategic Services Initial work, which was based on historical research and studies at the time, focused Document 2::: The jamming avoidance response is a behavior of some species of weakly electric fish. It occurs when two electric fish with wave discharges meet – if their discharge frequencies are very similar, each fish shifts its discharge frequency to increase the difference between the two. By doing this, both fish prevent jamming of their sense of electroreception. The behavior has been most intensively studied in the South American species Eigenmannia virescens. It is also present in other Gymnotiformes such as Apteronotus, as well as in the African species Gymnarchus niloticus. The jamming avoidance response was one of the first complex behavioral responses in a vertebrate to have its neural circuitry completely specified. As such, it holds special significance in the field of neuroethology. Discovery The jamming avoidance response (JAR) was discovered by Akira Watanabe and Kimihisa Takeda in 1963. The fish they used was an unspecified species of Eigenmannia, which has a quasi-sinusoidal wave discharge of about 300 Hz. They found that when a sinusoidal electrical stimulus is emitted from an electrode near the fish, if the stimulus frequency is within 5 Hz of the fish's electric organ discharge (EOD) frequency, the fish alters its EOD frequency to increase the difference between its own frequency and the stimulus frequency. Stimuli above the fish's EOD frequency push the EOD frequency downwards, while frequencies below that of the fish push the EOD frequency upwards, with a maximum change of about ±6.5 Hz. This behavior was given the name "jamming avoidance response" several years later in 1972, in a paper by Theodore Bullock, Robert Hamstra Jr., and Henning Scheich. In 1975, Walter Heiligenberg discovered a JAR in the distantly-related Gymnarchus niloticus, the African knifefish, showing that the behavior had convergently evolved in two separate lineages. Behavior Eigenmannia and other weakly electric fish use active electrolocation – they can locate objects by gene Document 3::: Electroreception and electrogenesis are the closely related biological abilities to perceive electrical stimuli and to generate electric fields. Both are used to locate prey; stronger electric discharges are used in a few groups of fishes (most famously the electric eel, which is not actually an eel but a knifefish) to stun prey. The capabilities are found almost exclusively in aquatic or amphibious animals, since water is a much better conductor of electricity than air. In passive electrolocation, objects such as prey are detected by sensing the electric fields they create. In active electrolocation, fish generate a weak electric field and sense the different distortions of that field created by objects that conduct or resist electricity. 
Active electrolocation is practised by two groups of weakly electric fish, the Gymnotiformes (knifefishes) and the Mormyridae (elephantfishes), and by Gymnarchus niloticus, the African knifefish. An electric fish generates an electric field using an electric organ, modified from muscles in its tail. The field is called weak if it is only enough to detect prey, and strong if it is powerful enough to stun or kill. The field may be in brief pulses, as in the elephantfishes, or a continuous wave, as in the knifefishes. Some strongly electric fish, such as the electric eel, locate prey by generating a weak electric field, and then discharge their electric organs strongly to stun the prey; other strongly electric fish, such as the electric ray, electrolocate passively. The stargazers are unique in being strongly electric but not using electrolocation. The electroreceptive ampullae of Lorenzini evolved early in the history of the vertebrates; they are found in both cartilaginous fishes such as sharks, and in bony fishes such as coelacanths and sturgeons, and must therefore be ancient. Most bony fishes have secondarily lost their ampullae of Lorenzini, but other non-homologous electroreceptors have repeatedly evolved, including in two gr Document 4::: The electric rays are a group of rays, flattened cartilaginous fish with enlarged pectoral fins, composing the order Torpediniformes . They are known for being capable of producing an electric discharge, ranging from 8 to 220 volts, depending on species, used to stun prey and for defense. There are 69 species in four families. Perhaps the best known members are those of the genus Torpedo. The torpedo undersea weapon is named after it. The name comes from the Latin , 'to be stiffened or paralyzed', from the effect on someone who touches the fish. Description Electric rays have a rounded pectoral disc with two moderately large rounded-angular (not pointed or hooked) dorsal fins (reduced in some Narcinidae), and a stout muscular tail with a well-developed caudal fin. The body is thick and flabby, with soft loose skin with no dermal denticles or thorns. A pair of kidney-shaped electric organs are at the base of the pectoral fins. The snout is broad, large in the Narcinidae, but reduced in all other families. The mouth, nostrils, and five pairs of gill slits are underneath the disc. Electric rays are found from shallow coastal waters down to at least deep. They are sluggish and slow-moving, propelling themselves with their tails, not by using their pectoral fins as other rays do. They feed on invertebrates and small fish. They lie in wait for prey below the sand or other substrate, using their electricity to stun and capture it. Relationship to humans History of research The electrogenic properties of electric rays have been known since antiquity, although their nature was not understood. The ancient Greeks used electric rays to numb the pain of childbirth and operations. In his dialogue Meno, Plato has the character Meno accuse Socrates of "stunning" people with his puzzling questions, in a manner similar to the way the torpedo fish stuns with electricity. Scribonius Largus, a Roman physician, recorded the use of torpedo fish for treatment of headaches and gout The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Why do sharks sense low levels of electricity? A. to locate prey B. to locate mates C. to reproduce D. to sleep Answer:
sciq-3491
multiple_choice
A plant that forms special tissues for storing water in an arid climate is an example of the plant evolving what?
[ "additions", "adaptations", "consciousness", "divergence" ]
B
Relavent Documents: Document 0::: Phytomorphology is the study of the physical form and external structure of plants. This is usually considered distinct from plant anatomy, which is the study of the internal structure of plants, especially at the microscopic level. Plant morphology is useful in the visual identification of plants. Recent studies in molecular biology started to investigate the molecular processes involved in determining the conservation and diversification of plant morphologies. In these studies transcriptome conservation patterns were found to mark crucial ontogenetic transitions during the plant life cycle which may result in evolutionary constraints limiting diversification. Scope Plant morphology "represents a study of the development, form, and structure of plants, and, by implication, an attempt to interpret these on the basis of similarity of plan and origin". There are four major areas of investigation in plant morphology, and each overlaps with another field of the biological sciences. First of all, morphology is comparative, meaning that the morphologist examines structures in many different plants of the same or different species, then draws comparisons and formulates ideas about similarities. When structures in different species are believed to exist and develop as a result of common, inherited genetic pathways, those structures are termed homologous. For example, the leaves of pine, oak, and cabbage all look very different, but share certain basic structures and arrangement of parts. The homology of leaves is an easy conclusion to make. The plant morphologist goes further, and discovers that the spines of cactus also share the same basic structure and development as leaves in other plants, and therefore cactus spines are homologous to leaves as well. This aspect of plant morphology overlaps with the study of plant evolution and paleobotany. Secondly, plant morphology observes both the vegetative (somatic) structures of plants, as well as the reproductive str Document 1::: Ecophenotypic variation ("ecophenotype") refers to phenotypical variation as a function of life station. In wide-ranging species, the contributions of heredity and environment are not always certain, but their interplay can sometimes be determined by experiment. Plants Plants display the most obvious examples of ecophenotypic variation. One example are trees growing in the woods developing long straight trunks, with branching crowns high in the canopy, while the same species growing alone in the open develops a spreading form, branching much lower to the ground. Genotypes often have much flexibility in the modification and expression of phenotypes; in many organisms these phenotypes are very different under varying environmental conditions. The plant Hieracium umbellatum is found growing in two different habitats in Sweden. One habitat is rocky sea-side cliffs, where the plants are bushy with broad leaves and expanded inflorescences; the other is among sand dunes where the plants grow prostrate with narrow leaves and compact inflorescences. These habitats alternate along the coast of Sweden and the habitat that the seeds of H. umbellatum land in determines the phenotype that grows. Invasive plants such as the honeysuckle can thrive by altering their morphology in response to changes in the environment, which gives them a competitive advantage. 
Another example of a plants phenotypic reaction and adaptation with its environment is how Thlaspi caerulescens can absorb the metals in the soil to use to its advantage in defending against harmful microbes and bacteria in its leaves. The more immediate responses shown by vascular plants to their environment, for instance a vine's ability to conform to the wall or tree upon which it grows, are not usually considered ecophenotypic, even though the mechanisms may be related. Animals Since animals are far less plastic than plants, ecophenotypic variation is noteworthy. When encountered, it can cause confusion in identification Document 2::: In plant biology, plant memory describes the ability of a plant to retain information from experienced stimuli and respond at a later time. For example, some plants have been observed to raise their leaves synchronously with the rising of the sun. Other plants produce new leaves in the spring after overwintering. Many experiments have been conducted into a plant's capacity for memory, including sensory, short-term, and long-term. The most basic learning and memory functions in animals have been observed in some plant species, and it has been proposed that the development of these basic memory mechanisms may have developed in an early organismal ancestor. Some plant species appear to have developed conserved ways to use functioning memory, and some species may have developed unique ways to use memory function depending on their environment and life history. The use of the term plant memory still sparks controversy. Some researchers believe the function of memory only applies to organisms with a brain and others believe that comparing plant functions resembling memory to humans and other higher division organisms may be too direct of a comparison. Others argue that the function of the two are essentially the same and this comparison can serve as the basis for further understanding into how memory in plants works. History Experiments involving the curling of pea tendrils were some of the first to explore the concept of plant memory. Mark Jaffe recognized that pea plants coil around objects that act as support to help them grow. Jaffe’s experiments included testing different stimuli to induce coiling behavior. One such stimulus was the effect of light on the coiling mechanism. When Jaffe rubbed the tendrils in light, he witnessed the expected coiling response. When subjected to perturbation in darkness, the pea plants did not exhibit coiling behavior. Tendrils from the dark experiment were brought back into light hours later, exhibiting a coiling response without a Document 3::: Mesophytes are terrestrial plants which are neither adapted to particularly dry nor particularly wet environments. An example of a mesophytic habitat would be a rural temperate meadow, which might contain goldenrod, clover, oxeye daisy, and Rosa multiflora. Mesophytes prefer soil and air of moderate humidity and avoid soil with standing water or containing a great abundance of salts. They make up the largest ecological group of terrestrial plants, and usually grow under moderate to hot and humid climatic regions. Morphological adaptations Mesophytes do not have any specific morphological adaptations. They usually have broad, flat and green leaves; an extensive fibrous root system to absorb water; and the ability to develop perennating organs such as corms, rhizomes and bulbs to store food and water for use during drought. 
Anatomical adaptations Mesophytes do not have any special internal structure. Epidermis is single layered usually with obvious stomata. Opening or closing of stomata is related to water availability. In sufficient supply of water stromata remain open while in limited supply of water stomata are closed to prevent excessive transpiration leading to wilting. Properties Mesophytes generally require a more or less continuous water supply. They usually have larger, thinner leaves compared to xerophytes, sometimes with a greater number of stomata on the undersides of leaves. Because of their lack of particular xeromorphic adaptations, when they are exposed to extreme conditions they lose water rapidly, and are not tolerant of drought. Mesophytes are intermediate in water use and needs. These plants are found in average conditions of temperature and moisture and grow in soil that has no water logging. The roots of mesophytes are well developed, branched and provided with a root cap. The shoot system is well organised. The stem is generally aerial, branched, straight, thick and hard. Leaves are thin, broad in middle, dark green and of variable shape and Document 4::: Vascular plants (), also called tracheophytes () or collectively Tracheophyta (), form a large group of land plants ( accepted known species) that have lignified tissues (the xylem) for conducting water and minerals throughout the plant. They also have a specialized non-lignified tissue (the phloem) to conduct products of photosynthesis. Vascular plants include the clubmosses, horsetails, ferns, gymnosperms (including conifers), and angiosperms (flowering plants). Scientific names for the group include Tracheophyta, Tracheobionta and Equisetopsida sensu lato. Some early land plants (the rhyniophytes) had less developed vascular tissue; the term eutracheophyte has been used for all other vascular plants, including all living ones. Historically, vascular plants were known as "higher plants", as it was believed that they were further evolved than other plants due to being more complex organisms. However, this is an antiquated remnant of the obsolete scala naturae, and the term is generally considered to be unscientific. Characteristics Botanists define vascular plants by three primary characteristics: Vascular plants have vascular tissues which distribute resources through the plant. Two kinds of vascular tissue occur in plants: xylem and phloem. Phloem and xylem are closely associated with one another and are typically located immediately adjacent to each other in the plant. The combination of one xylem and one phloem strand adjacent to each other is known as a vascular bundle. The evolution of vascular tissue in plants allowed them to evolve to larger sizes than non-vascular plants, which lack these specialized conducting tissues and are thereby restricted to relatively small sizes. In vascular plants, the principal generation or phase is the sporophyte, which produces spores and is diploid (having two sets of chromosomes per cell). (By contrast, the principal generation phase in non-vascular plants is the gametophyte, which produces gametes and is haploid - with The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. A plant that forms special tissues for storing water in an arid climate is an example of the plant evolving what? A. additions B. adaptations C. consciousness D. divergence Answer:
sciq-2095
multiple_choice
Humans have about 20,000 to 22,000 genes scattered among 23 of which structures?
[ "chromosomes", "atoms", "neutrons", "ribosomes" ]
A
Relavent Documents: Document 0::: Genetics (from Ancient Greek , “genite” and that from , “origin”), a discipline of biology, is the science of heredity and variation in living organisms. Articles (arranged alphabetically) related to genetics include: # A B C D E F G H I J K L M N O P Q R S T U V W X Y Z Document 1::: The Oxford Centre for Gene Function is a multidisciplinary research institute in the University of Oxford, England. It is directed by Frances Ashcroft, Kay Davies and Peter Donnelly. It involves the departments of Human anatomy and genetics, Physiology, and Statistics. External links Oxford Centre for Gene Function website Wellcome Trust Centre for Human Genetics Departments of the University of Oxford Genetics in the United Kingdom Human genetics Research institutes in Oxford Document 2::: The Gene Wiki is a project within Wikipedia that aims to describe the relationships and functions of all human genes. It was established to transfer information from scientific resources to Wikipedia stub articles. The Gene Wiki project also initiated publication of gene-specific review articles in the journal Gene, together with the editing of the gene-specific pages in Wikipedia. The Gene Wiki project in collaboration with the journal Gene was terminated in May 2022, ten years after the project's initiation. A report by the project's leaders summarizes the project's achievements. Project goals and scope Number of gene articles The human genome contains an estimated 20,000–25,000 protein-coding genes. The goal of the Gene Wiki project is to create seed articles for every notable human gene, that is, every gene whose function has been assigned in the peer-reviewed scientific literature. Approximately half of human genes have assigned function, therefore the total number of articles seeded by the Gene Wiki project would be expected to be in the range of 10,000–15,000. To date, approximately 11,000 articles have been created or augmented to include Gene Wiki project content. Expansion Once seed articles have been established, the hope and expectation is that these will be annotated and expanded by editors ranging in experience from the lay audience to students to professionals and academics. Proteins encoded by genes Only a small portion of the genome actually encodes protein in the human genome. Understanding the function of a gene that codes for a protein generally requires understanding of the function of the corresponding protein. In addition to including basic information about the gene, the project therefore also includes information about the protein encoded by the gene. The function of other portions of the genome, non-coding DNA, also called "junk" DNA in the past because they had no apparent function, actually are thought to have regulatory functio Document 3::: A geneticist is a biologist or physician who studies genetics, the science of genes, heredity, and variation of organisms. A geneticist can be employed as a scientist or a lecturer. Geneticists may perform general research on genetic processes or develop genetic technologies to aid in the pharmaceutical or and agriculture industries. Some geneticists perform experiments in model organisms such as Drosophila, C. elegans, zebrafish, rodents or humans and analyze data to interpret the inheritance of biological traits. A basic science geneticist is a scientist who usually has earned a PhD in genetics and undertakes research and/or lectures in the field. 
A medical geneticist is a physician who has been trained in medical genetics as a specialization and evaluates, diagnoses, and manages patients with hereditary conditions or congenital malformations; and provides genetic risk calculations and mutation analysis. Education Geneticists participate in courses from many areas, such as biology, chemistry, physics, microbiology, cell biology, bioinformatics, and mathematics. They also participate in more specific genetics courses such as molecular genetics, transmission genetics, population genetics, quantitative genetics, ecological genetics, epigenetics, and genomics. Careers Geneticists can work in many different fields, doing a variety of jobs. There are many careers for geneticists in medicine, agriculture, wildlife, general sciences, or many other fields. Listed below are a few examples of careers a geneticist may pursue. Research and Development Genetic counseling Clinical Research Medical genetics Gene therapy Pharmacogenomics Molecular ecology Animal breeding Genomics Biotechnology Proteomics Microbial genetics Teaching Molecular diagnostics Sales and Marketing of scientific products Science Journalism Patent Law Paternity testing Forensic DNA Agriculture Document 4::: Human Heredity is a peer-reviewed scientific journal covering all aspects of human genetics. It was established in 1948 as Acta Genetica et Statistica Medica, obtaining its current name in 1969. It is published eight times per year by Karger Publishers and the editor-in-chief is Pak Sham (University of Hong Kong). According to the Journal Citation Reports, the journal has a 2017 impact factor of 0.542. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Humans have about 20,000 to 22,000 genes scattered among 23 of these? A. chromosomes B. atoms C. neutrons D. ribosomes Answer:
sciq-34
multiple_choice
All living things need air and what else to survive?
[ "habitat", "ecosystem", "stimuli", "water" ]
D
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. 
The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a Document 2::: Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices". This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions. Topic outline The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area: The course is based on and tests six skills, called scientific practices which include: In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions. Exam Students are allowed to use a four-function, scientific, or graphing calculator. The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score. Score distribution Commonly used textbooks Biology, AP Edition by Sylvia Mader (2012, hardcover ) Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, ) Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson ) See also Glossary of biology A.P Bio (TV Show) Document 3::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. 
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 4::: Controlled (or closed) ecological life-support systems (acronym CELSS) are a self-supporting life support system for space stations and colonies typically through controlled closed ecological systems, such as the BioHome, BIOS-3, Biosphere 2, Mars Desert Research Station, and Yuegong-1. Original concept CELSS was first pioneered by the Soviet Union during the famed "Space Race" in the 1950s–60s. Originated by Konstantin Tsiolkovsky and furthered by V.I. Vernadsky, the first forays into this science were the use of closed, unmanned ecosystems, expanding into the research facility known as the BIOS-3. Then in 1965, manned experiments began in the BIOS-3. Rationale Human presence in space, thus far, has been limited to our own Earth–Moon system. Also, everything that astronauts would need in the way of life support (air, water, and food) has been brought with them. This may be economical for short missions of spacecraft, but it is not the most viable solution when dealing with the life support systems of a long-term craft (such as a generation ship) or a settlement. The aim of CELSS is to create a regenerative environment that can support and maintain human life via agricultural means. Components of CELSS Air revitalization In non-CELSS environments, air replenishment and processing typically consists of stored air tanks and scrubbers. The drawback to this method lies in the fact that upon depletion the tanks would have to be refilled; the scrubbers would also require replacement after they become ineffective. There is also the issue of processing toxic fumes, which come from the synthetic materials used in the construction of habitats. Therefore, the issue of how air quality is maintained requires attention; in experiments, it was found that the plants also removed volatile organic compounds offgassed by synthetic materials used thus far to build and maintain all man-made habitats. In CELSS, air is initially supplied by external supply, but is maintained by The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. All living things need air and this to survive? A. habitat B. ecosystem C. stimuli D. water Answer:
scienceQA-11485
multiple_choice
Select the fish.
[ "African bullfrog", "blue-footed booby", "African elephant", "green moray eel" ]
D
An African elephant is a mammal. It has hair and feeds its young milk. Elephants live in groups called herds. The oldest female in the herd is usually the leader. A blue-footed booby is a bird. It has feathers, two wings, and a beak. Blue-footed boobies live on tropical islands in the Pacific Ocean. A green moray eel is a fish. It lives underwater. It has fins, not limbs. Eels are long and thin. They may have small fins. They look like snakes, but they are fish! An African bullfrog is an amphibian. It has moist skin and begins its life in water. Frogs live near water or in damp places. Most frogs lay their eggs in water.
Relavent Documents: Document 0::: Fish intelligence is the resultant of the process of acquiring, storing in memory, retrieving, combining, comparing, and using in new contexts information and conceptual skills" as it applies to fish. According to Culum Brown from Macquarie University, "Fish are more intelligent than they appear. In many areas, such as memory, their cognitive powers match or exceed those of ‘higher’ vertebrates including non-human primates." Fish hold records for the relative brain weights of vertebrates. Most vertebrate species have similar brain-to-body mass ratios. The deep sea bathypelagic bony-eared assfish has the smallest ratio of all known vertebrates. At the other extreme, the electrogenic elephantnose fish, an African freshwater fish, has one of the largest brain-to-body weight ratios of all known vertebrates (slightly higher than humans) and the highest brain-to-body oxygen consumption ratio of all known vertebrates (three times that for humans). Brain Fish typically have quite small brains relative to body size compared with other vertebrates, typically one-fifteenth the brain mass of a similarly sized bird or mammal. However, some fish have relatively large brains, most notably mormyrids and sharks, which have brains about as massive relative to body weight as birds and marsupials. The cerebellum of cartilaginous and bony fishes is large and complex. In at least one important respect, it differs in internal structure from the mammalian cerebellum: The fish cerebellum does not contain discrete deep cerebellar nuclei. Instead, the primary targets of Purkinje cells are a distinct type of cell distributed across the cerebellar cortex, a type not seen in mammals. The circuits in the cerebellum are similar across all classes of vertebrates, including fish, reptiles, birds, and mammals. There is also an analogous brain structure in cephalopods with well-developed brains, such as octopuses. This has been taken as evidence that the cerebellum performs functions important to Document 1::: Peters's elephant-nose fish (Gnathonemus petersii) is an African freshwater elephantfish in the genus Gnathonemus. Other names in English include elephantnose fish, long-nosed elephant fish, and Ubangi mormyrid, after the Ubangi River. The Latin name is probably for the German naturalist Wilhelm Peters. The fish uses electrolocation to find prey, and has the largest brain-to-body oxygen use ratio of all known vertebrates (around 0.6). Description Peters's elephantnose fish is native to the rivers of West and Central Africa, in particular the lower Niger River basin, the Ogun River basin and the upper Chari River. It prefers muddy, slowly moving rivers and pools with cover such as submerged branches. The fish is a dark brown to black in colour, laterally compressed (averaging ), with a rear dorsal fin and anal fin of the same length. Its caudal or tail fin is forked. It has two stripes on its lower pendicular. Its most striking feature, as its names suggest, is a trunk-like protrusion on the head. This is not actually a nose, but a sensitive extension of the mouth, that it uses for self-defense, communication, navigation, and finding worms and insects to eat. This organ, called the Schnauzenorgan, is covered in electroreceptors, as is much of the rest of its body. The elephantnose uses a weak electric field, which it generates with specialized cells called electrocytes, which evolved from muscle cells, to find food, to navigate in dark or turbid waters, and to find a mate. 
Peters's elephantnose fish live to about 6–10 years. Electrolocation The elephant nose fish is weakly electric, meaning that it can detect moving prey and worms in the substrate by generating brief electric pulses with the electric organ in its tail. The electroreceptors around its body are sensitive enough to detect the different distortions of the electric field made by objects that conduct or resist electricity. The weak electric fields generated by this fish can be made audible by pl Document 2::: The phylogenetic classification of bony fishes is a phylogenetic classification of bony fishes and is based on phylogenies inferred using molecular and genomic data for nearly 2000 fishes. The first version was published in 2013 and resolved 66 orders. The latest version (version 4) was published in 2017 and recognised 72 orders and 79 suborders. Phylogeny The following cladograms show the phylogeny of the Osteichthyes down to order level, with the number of families in parentheses. The 43 orders of spiny-rayed fishes are related as follows: Document 3::: Fisheries science is the academic discipline of managing and understanding fisheries. It is a multidisciplinary science, which draws on the disciplines of limnology, oceanography, freshwater biology, marine biology, meteorology, conservation, ecology, population dynamics, economics, statistics, decision analysis, management, and many others in an attempt to provide an integrated picture of fisheries. In some cases new disciplines have emerged, as in the case of bioeconomics and fisheries law. Because fisheries science is such an all-encompassing field, fisheries scientists often use methods from a broad array of academic disciplines. Over the most recent several decades, there have been declines in fish stocks (populations) in many regions along with increasing concern about the impact of intensive fishing on marine and freshwater biodiversity. Fisheries science is typically taught in a university setting, and can be the focus of an undergraduate, master's or Ph.D. program. Some universities offer fully integrated programs in fisheries science. Graduates of university fisheries programs typically find employment as scientists, fisheries managers of both recreational and commercial fisheries, researchers, aquaculturists, educators, environmental consultants and planners, conservation officers, and many others. Fisheries research Because fisheries take place in a diverse set of aquatic environments (i.e., high seas, coastal areas, large and small rivers, and lakes of all sizes), research requires different sampling equipment, tools, and techniques. For example, studying trout populations inhabiting mountain lakes requires a very different set of sampling tools than, say, studying salmon in the high seas. Ocean fisheries research vessels (FRVs) often require platforms which are capable of towing different types of fishing nets, collecting plankton or water samples from a range of depths, and carrying acoustic fish-finding equipment. Fisheries research vessels a Document 4::: Brochiraja aenigma, also known as the Enigma skate, is a skate known from a single specimen recently identified in 2006. Based on the single specimen, its range includes at least the Wanganella Bank on the Norfolk Ridge. It is rare with further searches finding no specimens, and while it is not commonly fished or reported in commercial distribution, it can be used for fish meal. 
Due to the limited knowledge of its biology and extent of capture in fisheries, this species is assessed as Data Deficient by the IUCN. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Select the fish. A. African bullfrog B. blue-footed booby C. African elephant D. green moray eel Answer:
sciq-1009
multiple_choice
What happens to charges whenever they are accelerated?
[ "they die", "they radiate", "they darken", "they fuse" ]
B
Relavent Documents: Document 0::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with Document 1::: In physics, charge conservation is the principle that the total electric charge in an isolated system never changes. The net quantity of electric charge, the amount of positive charge minus the amount of negative charge in the universe, is always conserved. Charge conservation, considered as a physical conservation law, implies that the change in the amount of electric charge in any volume of space is exactly equal to the amount of charge flowing into the volume minus the amount of charge flowing out of the volume. In essence, charge conservation is an accounting relationship between the amount of charge in a region and the flow of charge into and out of that region, given by a continuity equation between charge density and current density . This does not mean that individual positive and negative charges cannot be created or destroyed. Electric charge is carried by subatomic particles such as electrons and protons. Charged particles can be created and destroyed in elementary particle reactions. In particle physics, charge conservation means that in reactions that create charged particles, equal numbers of positive and negative particles are always created, keeping the net amount of charge unchanged. Similarly, when particles are destroyed, equal numbers of positive and negative charges are destroyed. This property is supported without exception by all empirical observations so far. 
Although conservation of charge requires that the total quantity of charge in the universe is constant, it leaves open the question of what that quantity is. Most evidence indicates that the net charge in the universe is zero; that is, there are equal quantities of positive and negative charge. History Charge conservation was first proposed by British scientist William Watson in 1746 and American statesman and scientist Benjamin Franklin in 1747, although the first convincing proof was given by Michael Faraday in 1843. Formal statement of the law Mathematically, we can state Document 2::: An ESD simulator, also known as an ESD gun, is a handheld unit used to test the immunity of devices to electrostatic discharge (ESD). These simulators are used in special electromagnetic compatibility (EMC) laboratories. ESD pulses are fast, high-voltage pulses created when two objects with different electrical charges come into close proximity or contact. Recreating them in a test environment helps to verify that the device under test is immune to static electricity discharges. ESD testing is necessary to receive a CE mark, and for most suppliers of components for motor vehicles as part of required electromagnetic compatibility testing. It is often useful to automate these tests to eliminate the human factor. There are three distinct test models for electrostatic discharge: human-body, machine, and charged-devices models. The human-body model emulates the action of a human body discharging static electricity, the machine model simulates static discharge from a machine, and the charged-device model simulates the charging and discharging events that occur in production processes and equipment. Many ESD guns have interchangeable modules containing different discharge Networks or RC Modules (Specific resistance and capacitance values) to simulate different discharges. These modules typically slide into the handle of the pistol portion of the ESD simulator, much like loading some handguns. They change the characteristics of the waveshape discharged from the pistol and are called out in general standards like IEC 61000-4-2, SAE J113 and industry specific standards like ISO 10605. Resistance is referred to in ohms (Ω), capacitance is referred to in picofarad (pF or "puff"). The most commonly used discharge network is for IEC 61000-4-2 and ISO 10605, expressed as 150pF/330Ω. There are over 50 combinations of resistance and capacitance depending on the standards and the applicable electronics. Test standards Standards that require ESD testing include: ISO 10605 Ford Document 3::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. 
Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 4::: There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework. AP Physics 1 and 2 AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge. AP Physics 1 AP Physics 1 covers Newtonian mechanics, including: Unit 1: Kinematics Unit 2: Dynamics Unit 3: Circular Motion and Gravitation Unit 4: Energy Unit 5: Momentum Unit 6: Simple Harmonic Motion Unit 7: Torque and Rotational Motion Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2. AP Physics 2 AP Physics 2 covers the following topics: Unit 1: Fluids Unit 2: Thermodynamics Unit 3: Electric Force, Field, and Potential Unit 4: Electric Circuits Unit 5: Magnetism and Electromagnetic Induction Unit 6: Geometric and Physical Optics Unit 7: Quantum, Atomic, and Nuclear Physics AP Physics C From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What happens to charges whenever they are accelerated? A. they die B. they radiate C. they darken D. they fuse Answer:
sciq-3083
multiple_choice
Terrestrial animals lose water by evaporation from their skin and from which other surfaces?
[ "respiratory", "pulmonary", "anaerobic", "digestive" ]
A
Relavent Documents: Document 0::: Tissue hydration is the process of absorbing and retaining water in biological tissues. Plants Land plants maintain adequate tissue hydration by means of an outer waterproof layer. In soft or green tissues, this is usually a waxy cuticle over the outer epidermis. In older, woody tissues, waterproofing chemicals are present in the secondary cell wall that limit or inhibit the flow of water. Vascular plants also possess an internal vascular system that distributes fluids throughout the plant. Some xerophytes, such as cacti and other desert plants, have mucilage in their tissues. This is a sticky substance that holds water within the plant, reducing the rate of dehydration. Some seeds and spores remain dormant until adequate moisture is present, at which time the seed or spore begins to germinate. Animals Animals maintain adequate tissue hydration by means of (1) an outer skin, shell, or cuticle; (2) a fluid-filled coelom cavity; and (3) a circulatory system. Hydration of fat free tissues, ratio of total body water to fat free body mass, is stable at 0.73 in mammals. In humans, a significant drop in tissue hydration can lead to the medical condition of dehydration. This may result from loss of water itself, loss of electrolytes, or a loss of blood plasma. Administration of hydrational fluids as part of sound dehydration management is necessary to avoid severe complications, and in some cases, death. Some invertebrates are able to survive extreme desiccation of their tissues by entering a state of cryptobiosis. See also Osmoregulation Document 1::: Terrestrial animals are animals that live predominantly or entirely on land (e.g. cats, chickens, ants, spiders), as compared with aquatic animals, which live predominantly or entirely in the water (e.g. fish, lobsters, octopuses), and amphibians, which rely on aquatic and terrestrial habitats (e.g. frogs and newts). Some groups of insects are terrestrial, such as ants, butterflies, earwigs, cockroaches, grasshoppers and many others, while other groups are partially aquatic, such as mosquitoes and dragonflies, which pass their larval stages in water. Terrestrial classes The term "terrestrial" is typically applied to species that live primarily on the ground, in contrast to arboreal species, which live primarily in trees. There are other less common terms that apply to specific groups of terrestrial animals: Saxicolous creatures are rock dwelling. "Saxicolous" is derived from the Latin word saxum, meaning a rock. Arenicolous creatures live in the sand. Troglofauna predominantly live in caves. Taxonomy Terrestrial invasion is one of the most important events in the history of life. Terrestrial lineages evolved in several animal phyla, among which arthropods, vertebrates and mollusks are representatives of more successful groups of terrestrial animals. Terrestrial animals do not form a unified clade; rather, they share only the fact that they live on land. The transition from an aquatic to terrestrial life by various groups of animals has occurred independently and successfully many times. Most terrestrial lineages originated under a mild or tropical climate during the Paleozoic and Mesozoic, whereas few animals became fully terrestrial during the Cenozoic. 
If internal parasites are excluded, free living species in terrestrial environments are represented by the following eleven phyla: Gastrotrichs (hairy-backs) live in transient terrestrial water and go dormant during desiccation Rotifers (wheel animals) live in transient terrestrial water and go dormant durin Document 2::: Animals are multicellular eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, are able to move, reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million in total. Animals range in size from 8.5 millionths of a metre to long and have complex interactions with each other and their environments, forming intricate food webs. The study of animals is called zoology. Animals may be listed or indexed by many criteria, including taxonomy, status as endangered species, their geographical location, and their portrayal and/or naming in human culture. By common name List of animal names (male, female, young, and group) By aspect List of common household pests List of animal sounds List of animals by number of neurons By domestication List of domesticated animals By eating behaviour List of herbivorous animals List of omnivores List of carnivores By endangered status IUCN Red List endangered species (Animalia) United States Fish and Wildlife Service list of endangered species By extinction List of extinct animals List of extinct birds List of extinct mammals List of extinct cetaceans List of extinct butterflies By region Lists of amphibians by region Lists of birds by region Lists of mammals by region Lists of reptiles by region By individual (real or fictional) Real Lists of snakes List of individual cats List of oldest cats List of giant squids List of individual elephants List of historical horses List of leading Thoroughbred racehorses List of individual apes List of individual bears List of giant pandas List of individual birds List of individual bovines List of individual cetaceans List of individual dogs List of oldest dogs List of individual monkeys List of individual pigs List of w Document 3::: Evapotranspiration (ET) is the combined processes which move water from the Earth's surface into the atmosphere. It covers both water evaporation (movement of water to the air directly from soil, canopies, and water bodies) and transpiration (evaporation that occurs through the stomata, or openings, in plant leaves). Evapotranspiration is an important part of the local water cycle and climate, and measurement of it plays a key role in agricultural irrigation and water resource management. Definition of evapotranspiration Evapotranspiration is a combination of evaporation and transpiration, measured in order to better understand crop water requirements, irrigation scheduling, and watershed management. The two key components of evapotranspiration are: Evaporation: the movement of water directly to the air from sources such as the soil and water bodies. It can be affected by factors including heat, humidity, solar radiation and wind speed. Transpiration: the movement of water from root systems, through a plant, and exit into the air as water vapor. This exit occurs through stomata in the plant. 
Rate of transpiration can be influenced by factors including plant type, soil type, weather conditions and water content, and also cultivation practices. Evapotranspiration is typically measured in millimeters of water (i.e. volume of water moved per unit area of the Earth's surface) in a set unit of time. Globally, it is estimated that on average between three-fifths and three-quarters of land precipitation is returned to the atmosphere via evapotranspiration. Evapotranspiration does not, in general, account for other mechanisms which are involved in returning water to the atmosphere, though some of these, such as snow and ice sublimation in regions of high elevation or high latitude, can make a large contribution to atmospheric moisture even under standard conditions. Factors that impact evapotranspiration levels Primary factors Because evaporation and transpiration Document 4::: Roshd Biological Education is a quarterly science educational magazine covering recent developments in biology and biology education for a biology teacher Persian -speaking audience. Founded in 1985, it is published by The Teaching Aids Publication Bureau, Organization for Educational Planning and Research, Ministry of Education, Iran. Roshd Biological Education has an editorial board composed of Iranian biologists, experts in biology education, science journalists and biology teachers. It is read by both biology teachers and students, as a way of launching innovations and new trends in biology education, and helping biology teachers to teach biology in better and more effective ways. Magazine layout As of Autumn 2012, the magazine is laid out as follows: Editorial—often offering a view of point from editor in chief on an educational and/or biological topics. Explore— New research methods and results on biology and/or education. World— Reports and explores on biological education worldwide. In Brief—Summaries of research news and discoveries. Trends—showing how new technology is altering the way we live our lives. Point of View—Offering personal commentaries on contemporary topics. Essay or Interview—often with a pioneer of a biological and/or educational researcher or an influential scientific educational leader. Muslim Biologists—Short histories of Muslim Biologists. Environment—An article on Iranian environment and its problems. News and Reports—Offering short news and reports events on biology education. In Brief—Short articles explaining interesting facts. Questions and Answers—Questions about biology concepts and their answers. Book and periodical Reviews—About new publication on biology and/or education. Reactions—Letter to the editors. Editorial staff Mohammad Karamudini, editor in chief History Roshd Biological Education started in 1985 together with many other magazines in other science and art. The first editor was Dr. Nouri-Dalooi, th The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Terrestrial animals lose water by evaporation from their skin and which surfaces? A. respiratory B. pulmonary C. anaerobic D. digestive Answer:
sciq-3376
multiple_choice
Which temperatures cause particles of reactants to have more energy?
[ "reducing", "higher", "non-existant", "lower" ]
B
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: The heating value (or energy value or calorific value) of a substance, usually a fuel or food (see food energy), is the amount of heat released during the combustion of a specified amount of it. The calorific value is the total energy released as heat when a substance undergoes complete combustion with oxygen under standard conditions. The chemical reaction is typically a hydrocarbon or other organic molecule reacting with oxygen to form carbon dioxide and water and release heat. It may be expressed with the quantities: energy/mole of fuel energy/mass of fuel energy/volume of the fuel There are two kinds of enthalpy of combustion, called high(er) and low(er) heat(ing) value, depending on how much the products are allowed to cool and whether compounds like are allowed to condense. The high heat values are conventionally measured with a bomb calorimeter. Low heat values are calculated from high heat value test data. They may also be calculated as the difference between the heat of formation ΔH of the products and reactants (though this approach is somewhat artificial since most heats of formation are typically calculated from measured heats of combustion). By convention, the (higher) heat of combustion is defined to be the heat released for the complete combustion of a compound in its standard state to form stable products in their standard states: hydrogen is converted to water (in its liquid state), carbon is converted to carbon dioxide gas, and nitrogen is converted to nitrogen gas. 
That is, the heat of combustion, ΔH°comb, is the heat of reaction of the following process: (std.) + (c + - ) (g) → c (g) + (l) + (g) Chlorine and sulfur are not quite standardized; they are usually assumed to convert to hydrogen chloride gas and or gas, respectively, or to dilute aqueous hydrochloric and sulfuric acids, respectively, when the combustion is conducted in a bomb calorimeter containing some quantity of water. Ways of determination Gross and net Z Document 2::: Common thermodynamic equations and quantities in thermodynamics, using mathematical notation, are as follows: Definitions Many of the definitions below are also used in the thermodynamics of chemical reactions. General basic quantities General derived quantities Thermal properties of matter Thermal transfer Equations The equations in this article are classified by subject. Thermodynamic processes Kinetic theory Ideal gas Entropy , where kB is the Boltzmann constant, and Ω denotes the volume of macrostate in the phase space or otherwise called thermodynamic probability. , for reversible processes only Statistical physics Below are useful results from the Maxwell–Boltzmann distribution for an ideal gas, and the implications of the Entropy quantity. The distribution is valid for atoms or molecules constituting ideal gases. Corollaries of the non-relativistic Maxwell–Boltzmann distribution are below. Quasi-static and reversible processes For quasi-static and reversible processes, the first law of thermodynamics is: where δQ is the heat supplied to the system and δW is the work done by the system. Thermodynamic potentials The following energies are called the thermodynamic potentials, and the corresponding fundamental thermodynamic relations or "master equations" are: Maxwell's relations The four most common Maxwell's relations are: More relations include the following. Other differential equations are: Quantum properties Indistinguishable Particles where N is number of particles, h is Planck's constant, I is moment of inertia, and Z is the partition function, in various forms: Thermal properties of matter Thermal transfer Thermal efficiencies See also List of thermodynamic properties Antoine equation Bejan number Bowen ratio Bridgman's equations Clausius–Clapeyron relation Departure functions Duhem–Margules equation Ehrenfest equations Gibbs–Helmholtz equation Phase rule Kopp's law Noro–Frenkel law of corresponding states Onsager reci Document 3::: Activation energy asymptotics (AEA), also known as large activation energy asymptotics, is an asymptotic analysis used in the combustion field utilizing the fact that the reaction rate is extremely sensitive to temperature changes due to the large activation energy of the chemical reaction. History The techniques were pioneered by the Russian scientists Yakov Borisovich Zel'dovich, David A. Frank-Kamenetskii and co-workers in the 30s, in their study on premixed flames and thermal explosions (Frank-Kamenetskii theory), but not popular to western scientists until the 70s. In the early 70s, due to the pioneering work of Williams B. Bush, Francis E. Fendell, Forman A. Williams, Amable Liñán and John F. Clarke, it became popular in western community and since then it was widely used to explain more complicated problems in combustion. Method overview In combustion processes, the reaction rate is dependent on temperature in the following form (Arrhenius law), where is the activation energy, and is the universal gas constant. 
In general, the condition is satisfied, where is the burnt gas temperature. This condition forms the basis for activation energy asymptotics. Denoting for unburnt gas temperature, one can define the Zel'dovich number and heat release parameter as follows In addition, if we define a non-dimensional temperature such that approaching zero in the unburnt region and approaching unity in the burnt gas region (in other words, ), then the ratio of reaction rate at any temperature to reaction rate at burnt gas temperature is given by Now in the limit of (large activation energy) with , the reaction rate is exponentially small i.e., and negligible everywhere, but non-negligible when . In other words, the reaction rate is negligible everywhere, except in a small region very close to burnt gas temperature, where . Thus, in solving the conservation equations, one identifies two different regimes, at leading order, Outer convective-diffusive zone I Document 4::: Thermal decomposition (or thermolysis) is a chemical decomposition caused by heat. The decomposition temperature of a substance is the temperature at which the substance chemically decomposes. The reaction is usually endothermic as heat is required to break chemical bonds in the compound undergoing decomposition. If decomposition is sufficiently exothermic, a positive feedback loop is created producing thermal runaway and possibly an explosion or other chemical reaction. Decomposition temperature definition A simple substance (like water) may exist in equilibrium with its thermal decomposition products, effectively halting the decomposition. The equilibrium fraction of decomposed molecules increases with the temperature. Since thermal decomposition is a kinetic process, the observed temperature of its beginning in most instances will be a function of the experimental conditions and sensitivity of the experimental setup. For rigorous depiction of the process, the use of thermokinetic modeling is recommended. Examples Calcium carbonate (limestone or chalk) decomposes into calcium oxide and carbon dioxide when heated. The chemical reaction is as follows: CaCO3 → CaO + CO2 The reaction is used to make quick lime, which is an industrially important product. Another example of thermal decomposition is 2Pb(NO3)2 → 2PbO + O2 + 4NO2. Some oxides, especially of weakly electropositive metals decompose when heated to high enough temperature. A classical example is the decomposition of mercuric oxide to give oxygen and mercury metal. The reaction was used by Joseph Priestley to prepare samples of gaseous oxygen for the first time. When water is heated to well over 2000 °C, a small percentage of it will decompose into OH, monatomic oxygen, monatomic hydrogen, O2, and H2. The compound with the highest known decomposition temperature is carbon monoxide at ≈3870 °C (≈7000 °F). Decomposition of nitrates, nitrites and ammonium compounds Ammonium dichromate on heating yields nitro The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which temperatures cause particles of reactants to have more energy? A. reducing B. higher C. non-existant D. lower Answer:
sciq-775
multiple_choice
Weathering is fundamental to the creation of what, which exists as a very thin layer over solid rock?
[ "soil", "moss", "fungus", "aquifers" ]
A
Relavent Documents: Document 0::: The Géotechnique lecture is an biennial lecture on the topic of soil mechanics, organised by the British Geotechnical Association named after its major scientific journal Géotechnique. This should not be confused with the annual BGA Rankine Lecture. List of Géotechnique Lecturers See also Named lectures Rankine Lecture Terzaghi Lecture External links ICE Géotechnique journal British Geotechnical Association Document 1::: USDA soil taxonomy (ST) developed by the United States Department of Agriculture and the National Cooperative Soil Survey provides an elaborate classification of soil types according to several parameters (most commonly their properties) and in several levels: Order, Suborder, Great Group, Subgroup, Family, and Series. The classification was originally developed by Guy Donald Smith, former director of the U.S. Department of Agriculture's soil survey investigations. Discussion A taxonomy is an arrangement in a systematic manner; the USDA soil taxonomy has six levels of classification. They are, from most general to specific: order, suborder, great group, subgroup, family and series. Soil properties that can be measured quantitatively are used in this classification system – they include: depth, moisture, temperature, texture, structure, cation exchange capacity, base saturation, clay mineralogy, organic matter content and salt content. There are 12 soil orders (the top hierarchical level) in soil taxonomy. The names of the orders end with the suffix -sol. The criteria for the different soil orders include properties that reflect major differences in the genesis of soils. The orders are: Alfisol – soils with aluminium and iron. They have horizons of clay accumulation, and form where there is enough moisture and warmth for at least three months of plant growth. They constitute 10% of soils worldwide. Andisol – volcanic ash soils. They are young soils. They cover 1% of the world's ice-free surface. Aridisol – dry soils forming under desert conditions which have fewer than 90 consecutive days of moisture during the growing season and are nonleached. They include nearly 12% of soils on Earth. Soil formation is slow, and accumulated organic matter is scarce. They may have subsurface zones of caliche or duripan. Many aridisols have well-developed Bt horizons showing clay movement from past periods of greater moisture. Entisol – recently formed soils that lack well-d Document 2::: Soil crusts are soil surface layers that are distinct from the rest of the bulk soil, often hardened with a platy surface. Depending on the manner of formation, soil crusts can be biological or physical. Biological soil crusts are formed by communities of microorganisms that live on the soil surface whereas physical crusts are formed by physical impact such as that of raindrops. Biological soil crusts Biological soil crusts are communities of living organisms on the soil surface in arid- and semi-arid ecosystems. They are found throughout the world with varying species composition and cover depending on topography, soil characteristics, climate, plant community, microhabitats, and disturbance regimes. Biological soil crusts perform important ecological roles including carbon fixation, nitrogen fixation, soil stabilization, alter soil albedo and water relations, and affect germination and nutrient levels in vascular plants. 
They can be damaged by fire, recreational activity, grazing, and other disturbance and can require long time periods to recover composition and function. Biological soil crusts are also known as cryptogamic, microbiotic, microphytic, or cryptobiotic soils. Physical soil crusts Physical (as opposed to biological) soil crusts results from raindrop or trampling impacts. They are often hardened relative to uncrusted soil due to the accumulation of salts and silica. These can coexist with biological soil crusts, but have different ecological impact due to their difference in formation and composition. Physical soil crusts often reduce water infiltration, can inhibit plant establishment, and when disrupted can be eroded rapidly. Document 3::: The World Reference Base for Soil Resources (WRB) is an international soil classification system for naming soils and creating legends for soil maps. The currently valid version is the fourth edition 2022. It is edited by a working group of the International Union of Soil Sciences (IUSS). Background History Since the 19th century, several countries developed national soil classification systems. During the 20th century, the need for an international soil classification system became more and more obvious. From 1971 to 1981, the Food and Agriculture Organization (FAO) and UNESCO published the Soil Map of the World, 10 volumes, scale 1 : 5 M). The Legend for this map, published in 1974 under the leadership of Rudi Dudal, became the FAO soil classification. Many ideas from national soil classification systems were brought together in this worldwide-applicable system, among them the idea of diagnostic horizons as established in the '7th approximation to the USDA soil taxonomy' from 1960. The next step was the Revised Legend of the Soil Map of the World, published in 1988. In 1982, the International Soil Science Society (ISSS; now: International Union of Soil Sciences, IUSS) established a working group named International Reference Base for Soil Classification (IRB). Chair of this working group was Ernst Schlichting. Its mandate was to develop an international soil classification system that should better consider soil-forming processes than the FAO soil classification. Drafts were presented in 1982 and 1990. In 1992, the IRB working group decided to develop a new system named World Reference Base for Soil Resources (WRB) that should further develop the Revised Legend of the FAO soil classification and include some ideas of the more systematic IRB approach. Otto Spaargaren (International Soil Reference and Information Centre) and Freddy Nachtergaele (FAO) were nominated to prepare a draft. This draft was presented at the 15th World Congress of Soil Science in Acapu Document 4::: The Polish Soil Classification () is a soil classification system used to describe, classify and organize the knowledge about soils in Poland. Overview Presented below the 5th edition of Polish Soil Classification was published by Soil Science Society of Poland in 2011 and was in use to 2019 when 6th edition of Polish Soil Classification was published. Previous ones were published in 1956, 1959, 1974 and 1989, and they, following Dokuchaiev's ideas, were relied mostly on the natural's criteria (quality) like soil forming processes and soil morphological features (4th edition was transient because diagnostic soil horizons appeared there). 
5th edition of classification, where it was possible, was built on quantitative criteria, like quantitative described diagnostic horizons, diagnostic materials and diagnostic properties. Soil forming processes are not a part of classification but the relationship between the processes and their morphological effects was taken into account during creating differentiating criteria of diagnostic horizons, materials and properties. The classification derives much of international systems: USDA soil taxonomy (1999) and World Reference Base for Soil Resources - WRB (2006). Polish soil science intellectual tradition has always maintained a balance between genetical-geographic approach (typical for the Russian scientific school) and substantional-geological-petrographic approach (characteristic for Western Europe). Multilateral look at the soil manifested, in all editions of classification, that each soil was described by three types of characteristics: Genetical genesis described by type of soil – based on diagnostic horizons, materials and properties, Geological origin of bedrock described by what might be literally translated as "kind" or "sort" of soil, Soil texture described of what might be literally translated as "class" or "species" of soil. The Polish Soil Classification has a hierarchical construction. Type of soil is The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Weathering is fundamental to the creation of what, which exists as a very thin layer over solid rock? A. soil B. moss C. fungus D. aquifers Answer:
sciq-4974
multiple_choice
What is a segment of DNA that carries a code for making a specific polypeptide chain called?
[ "a protein", "nucleotide", "a gene", "amino acid" ]
C
Relavent Documents: Document 0::: A nucleic acid sequence is a succession of bases within the nucleotides forming alleles within a DNA (using GACT) or RNA (GACU) molecule. This succession is denoted by a series of a set of five different letters that indicate the order of the nucleotides. By convention, sequences are usually presented from the 5' end to the 3' end. For DNA, with its double helix, there are two possible directions for the notated sequence; of these two, the sense strand is used. Because nucleic acids are normally linear (unbranched) polymers, specifying the sequence is equivalent to defining the covalent structure of the entire molecule. For this reason, the nucleic acid sequence is also termed the primary structure. The sequence represents biological information. Biological deoxyribonucleic acid represents the information which directs the functions of an organism. Nucleic acids also have a secondary structure and tertiary structure. Primary structure is sometimes mistakenly referred to as "primary sequence". However there is no parallel concept of secondary or tertiary sequence. Nucleotides Nucleic acids consist of a chain of linked units called nucleotides. Each nucleotide consists of three subunits: a phosphate group and a sugar (ribose in the case of RNA, deoxyribose in DNA) make up the backbone of the nucleic acid strand, and attached to the sugar is one of a set of nucleobases. The nucleobases are important in base pairing of strands to form higher-level secondary and tertiary structures such as the famed double helix. The possible letters are A, C, G, and T, representing the four nucleotide bases of a DNA strand – adenine, cytosine, guanine, thymine – covalently linked to a phosphodiester backbone. In the typical case, the sequences are printed abutting one another without gaps, as in the sequence AAAGTCTGAC, read left to right in the 5' to 3' direction. With regards to transcription, a sequence is on the coding strand if it has the same order as the transcribed RNA. Document 1::: Biomolecular structure is the intricate folded, three-dimensional shape that is formed by a molecule of protein, DNA, or RNA, and that is important to its function. The structure of these molecules may be considered at any of several length scales ranging from the level of individual atoms to the relationships among entire protein subunits. This useful distinction among scales is often expressed as a decomposition of molecular structure into four levels: primary, secondary, tertiary, and quaternary. The scaffold for this multiscale organization of the molecule arises at the secondary level, where the fundamental structural elements are the molecule's various hydrogen bonds. This leads to several recognizable domains of protein structure and nucleic acid structure, including such secondary-structure features as alpha helixes and beta sheets for proteins, and hairpin loops, bulges, and internal loops for nucleic acids. The terms primary, secondary, tertiary, and quaternary structure were introduced by Kaj Ulrik Linderstrøm-Lang in his 1951 Lane Medical Lectures at Stanford University. Primary structure The primary structure of a biopolymer is the exact specification of its atomic composition and the chemical bonds connecting those atoms (including stereochemistry). 
For a typical unbranched, un-crosslinked biopolymer (such as a molecule of a typical intracellular protein, or of DNA or RNA), the primary structure is equivalent to specifying the sequence of its monomeric subunits, such as amino acids or nucleotides. The primary structure of a protein is reported starting from the amino N-terminus to the carboxyl C-terminus, while the primary structure of DNA or RNA molecule is known as the nucleic acid sequence reported from the 5' end to the 3' end. The nucleic acid sequence refers to the exact sequence of nucleotides that comprise the whole molecule. Often, the primary structure encodes sequence motifs that are of functional importance. Some examples of such motif Document 2::: In molecular biology, a library is a collection of DNA fragments that is stored and propagated in a population of micro-organisms through the process of molecular cloning. There are different types of DNA libraries, including cDNA libraries (formed from reverse-transcribed RNA), genomic libraries (formed from genomic DNA) and randomized mutant libraries (formed by de novo gene synthesis where alternative nucleotides or codons are incorporated). DNA library technology is a mainstay of current molecular biology, genetic engineering, and protein engineering, and the applications of these libraries depend on the source of the original DNA fragments. There are differences in the cloning vectors and techniques used in library preparation, but in general each DNA fragment is uniquely inserted into a cloning vector and the pool of recombinant DNA molecules is then transferred into a population of bacteria (a Bacterial Artificial Chromosome or BAC library) or yeast such that each organism contains on average one construct (vector + insert). As the population of organisms is grown in culture, the DNA molecules contained within them are copied and propagated (thus, "cloned"). Terminology The term "library" can refer to a population of organisms, each of which carries a DNA molecule inserted into a cloning vector, or alternatively to the collection of all of the cloned vector molecules. cDNA libraries A cDNA library represents a sample of the mRNA purified from a particular source (either a collection of cells, a particular tissue, or an entire organism), which has been converted back to a DNA template by the use of the enzyme reverse transcriptase. It thus represents the genes that were being actively transcribed in that particular source under the physiological, developmental, or environmental conditions that existed when the mRNA was purified. cDNA libraries can be generated using techniques that promote "full-length" clones or under conditions that generate shorter f Document 3::: A sequence in biology is the one-dimensional ordering of monomers, covalently linked within a biopolymer; it is also referred to as the primary structure of a biological macromolecule. While it can refer to many different molecules, the term sequence is most often used to refer to a DNA sequence. See also Protein sequence DNA sequence Genotype Self-incompatibility in plants List of geneticists Human Genome Project Dot plot (bioinformatics) Multiplex Ligation-dependent Probe Amplification Sequence analysis Molecular biology Document 4::: Semantides (or semantophoretic molecules) are biological macromolecules that carry genetic information or a transcript thereof. Three different categories or semantides are distinguished: primary, secondary and tertiary. Primary Semantides are genes, which consist of DNA. 
Secondary semantides are chains of messenger RNA, which are transcribed from DNA. Tertiary semantides are polypeptides, which are translated from messenger RNA. In eukaryotic organisms, primary semantides may consist of nuclear, mitochondrial or plastid DNA. Not all primary semantides ultimately form tertiary semantides. Some primary semantides are not transcribed into mRNA (non-coding DNA) and some secondary semantides are not translated into polypeptides (non-coding RNA). The complexity of semantides varies greatly. For tertiary semantides, large globular polypeptide chains are most complex while structural proteins, consisting of repeating simple sequences, are least complex. The term semantide and related terms were coined by Linus Pauling and Emile Zuckerkandl. Although semantides are the major type of data used in modern phylogenetics, the term itself is not commonly used. Related terms Isosemantic DNA or RNA that differs in base sequence, but translate into identical polypeptide chains are referred to as being isosemantic. Episemantic Molecules that are synthesized by enzymes (tertiary semantides) are referred to as episemantic molecules. Episemantic molecules have a larger variety in types than semantides, which only consist of three types (DNA, RNA or polypeptides). Not all polypeptides are tertiary semantides. Some, mainly small polypeptides, can also be episemantic molecules. Asemantic Molecules that are not produced by an organism are referred to as asemantic molecules, because they do not contain any genetic information. Asementic molecules may be changed into episemantic molecules by anabolic processes. Asemantic molecules may also become semantic molecules when they integrate The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is a segment of dna that carries a code for making a specific polypeptide chain called? A. a protein B. nucleotide C. a gene D. amino acid Answer:
sciq-1705
multiple_choice
In which order does the reactivity of the halogen group decline?
[ "right to left", "left to right", "bottom to top", "top to bottom" ]
D
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. 
A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 2::: The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered. The 1995 version has 30 five-way multiple choice questions. Example question (question 4): Gender differences The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher. Document 3::: The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields. Description The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions. The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.” Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers. 
Current efforts The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo Document 4::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. In which order does the reactivity of halogen group decline? A. right to left B. left to right C. bottom to top D. top to bottom Answer:
sciq-3027
multiple_choice
In plants and algae, photosynthesis takes place in which organelles?
[ "chloroplasts", "fibroblasts", "cells", "stems" ]
A
Relavent Documents: Document 0::: In contrast to the Cladophorales where nuclei are organized in regularly spaced cytoplasmic domains, the cytoplasm of Bryopsidales exhibits streaming, enabling transportation of organelles, transcripts and nutrients across the plant. The Sphaeropleales also contain several common freshwat Document 1::: Proteinoplasts (sometimes called proteoplasts, aleuroplasts, and aleuronaplasts) are specialized organelles found only in plant cells. Proteinoplasts belong to a broad category of organelles known as plastids. Plastids are specialized double-membrane organelles found in plant cells. Plastids perform a variety of functions such as metabolism of energy, and biological reactions. There are multiple types of plastids recognized including Leucoplasts, Chromoplasts, and Chloroplasts. Plastids are broken up into different categories based on characteristics such as size, function and physical traits. Chromoplasts help to synthesize and store large amounts of carotenoids. Chloroplasts are photosynthesizing structures that help to make light energy for the plant.  Leucoplasts are a colorless type of plastid which means that no photosynthesis occurs here. The colorless pigmentation of the leucoplast is due to not containing the structural components of thylakoids unlike what is found in chloroplasts and chromoplasts that gives them their pigmentation. From leucoplasts stems the subtype, proteinoplasts, which contain proteins for storage. They contain crystalline bodies of protein and can be the sites of enzyme activity involving those proteins. Proteinoplasts are found in many seeds, such as brazil nuts, peanuts and pulses. Although all plastids contain high concentrations of protein, proteinoplasts were identified in the 1960s and 1970s as having large protein inclusions that are visible with both light microscopes and electron microscopes. Other subtypes of Leucoplasts include amyloplast, and elaioplasts. Amyloplasts help to store and synthesize starch molecules found in plants, while elaioplasts synthesize and store lipids in plant cells. See also Chloroplast and etioplast Chromoplast Leucoplast Amyloplast Elaioplast Document 2::: Tannosomes are organelles found in plant cells of vascular plants. Formation and functions Tannosomes are formed when the chloroplast membrane forms pockets filled with tannin. Slowly, the pockets break off as tiny vacuoles that carry tannins to the large vacuole filled with acidic fluid. Tannins are then released into the vacuole and stored inside as tannin accretions. They are responsible for synthesizing and producing condensed tannins and polyphenols. Tannosomes condense tannins in chlorophyllous organs, providing defenses against herbivores and pathogens, and protection against UV radiation. Discovery Tannosomes were discovered in September 2013 by French and Hungarian researchers. The discovery of tannosomes showed how to get enough tannins to change the flavour of wine, tea, chocolate, and other food or snacks. See also Chloroplast Leucoplast Plastid Document 3::: Chloroplast DNA (cpDNA) is the DNA located in chloroplasts, which are photosynthetic organelles located within the cells of some eukaryotic organisms. Chloroplasts, like other types of plastid, contain a genome separate from that in the cell nucleus. The existence of chloroplast DNA was identified biochemically in 1959, and confirmed by electron microscopy in 1962. 
The discoveries that the chloroplast contains ribosomes and performs protein synthesis revealed that the chloroplast is genetically semi-autonomous. The first complete chloroplast genome sequences were published in 1986, Nicotiana tabacum (tobacco) by Sugiura and colleagues and Marchantia polymorpha (liverwort) by Ozeki et al. Since then, a great number of chloroplast DNAs from various species have been sequenced. Molecular structure Chloroplast DNAs are circular, and are typically 120,000–170,000 base pairs long. They can have a contour length of around 30–60 micrometers, and have a mass of about 80–130 million daltons. Most chloroplasts have their entire chloroplast genome combined into a single large ring, though those of dinophyte algae are a notable exception—their genome is broken up into about forty small plasmids, each 2,000–10,000 base pairs long. Each minicircle contains one to three genes, but blank plasmids, with no coding DNA, have also been found. Chloroplast DNA has long been thought to have a circular structure, but some evidence suggests that chloroplast DNA more commonly takes a linear shape. Over 95% of the chloroplast DNA in corn chloroplasts has been observed to be in branched linear form rather than individual circles. Inverted repeats Many chloroplast DNAs contain two inverted repeats, which separate a long single copy section (LSC) from a short single copy section (SSC). The inverted repeats vary wildly in length, ranging from 4,000 to 25,000 base pairs long each. Inverted repeats in plants tend to be at the upper end of this range, each being 20,000–25,000 base pairs long. T Document 4::: A stromule is a microscopic structure found in plant cells. Stromules (stroma-filled tubules) are highly dynamic structures extending from the surface of all plastid types, including proplastids, chloroplasts, etioplasts, leucoplasts, amyloplasts, and chromoplasts. Protrusions from and interconnections between plastids were observed in 1888 (Gottlieb Haberlandt) and 1908 (Gustav Senn) and have been described sporadically in the literature since then. Stromules were recently rediscovered in 1997 and have since been reported to exist in a number of angiosperm species including Arabidopsis thaliana, wheat, rice and tomato, but their role is not yet fully understood. This highly dynamic nature is caused by the close relationship between plastid stromules and actin microfilaments, which are anchored to the stromule extensions, either in a longitudinal fashion to pull from the stromule and guide the plastid in a given direction or in a hinge fashion allowing the plastid to rest anchored in a given place. The actin microfilaments also define the stromule shape through their interactions. This dynamic random walk-like movement is probably caused by Myosin XI proteins as a recent work found. Other organelles are also associated to stromules, as mitochondria, which have been observed associated and sliding over stromule tubes. Plastids and mitochondria need to be spatially close as some metabolic pathways like photorespiration require the association of both organelles to recycle glycolate and detoxify the ammonium produced during photorespiration. Stromules are usually 0.35–0.85 µm in diameter and of variable length, from short beak-like projections to linear or branched structures up to 220 µm long. They are enclosed by the inner and outer plastid envelope membranes and enable the transfer of molecules as large as RuBisCO (~560 kDa) between interconnected plastids. 
Stromules occur in all cell types, but stromule morphology and the proportion of plastids with stromules The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. In plants and algae, photosynthesis takes place in which organelles? A. chloroplasts B. fibroblasts C. cells D. stems Answer:
ai2_arc-802
multiple_choice
Jennifer and Mark prepared a layer cake using oil and water. After the cake baked in the oven, they added frosting. Which property could be measured with a balance?
[ "the temperature of the oven", "the mass of the frosting", "the height of the layers", "the volume of the oil" ]
B
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. 
Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 2::: Curriculum-based measurement, or CBM, is also referred to as a general outcomes measures (GOMs) of a student's performance in either basic skills or content knowledge. Early history CBM began in the mid-1970s with research headed by Stan Deno at the University of Minnesota. Over the course of 10 years, this work led to the establishment of measurement systems in reading, writing, and spelling that were: (a) easy to construct, (b) brief in administration and scoring, (c) had technical adequacy (reliability and various types of validity evidence for use in making educational decisions), and (d) provided alternate forms to allow time series data to be collected on student progress. This focus in the three language arts areas eventually was expanded to include mathematics, though the technical research in this area continues to lag that published in the language arts areas. An even later development was the application of CBM to middle-secondary areas: Espin and colleagues at the University of Minnesota developed a line of research addressing vocabulary and comprehension (with the maze) and by Tindal and colleagues at the University of Oregon developed a line of research on concept-based teaching and learning. Increasing importance Early research on the CBM quickly moved from monitoring student progress to its use in screening, normative decision-making, and finally benchmarking. Indeed, with the implementation of the No Child Left Behind Act in 2001, and its focus on large-scale testing and accountability, CBM has become increasingly important as a form of standardized measurement that is highly related to and relevant for understanding student's progress toward and achievement of state standards. Key feature Probably the key feature of CBM is its accessibility for classroom application and implementation. It was designed to provide an experimental analysis of the effects from interventions, which includes both instruction and curriculum. This is one of the most imp Document 3::: The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered. 
The 1995 version has 30 five-way multiple choice questions. Example question (question 4): Gender differences The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher. Document 4::: The Texas Math and Science Coaches Association or TMSCA is an organization for coaches of academic University Interscholastic League teams in Texas middle schools and high schools, specifically those that compete in mathematics and science-related tests. Events There are four events in the TMSCA at both the middle and high school level: Number Sense, General Mathematics, Calculator Applications, and General Science. Number Sense is an 80-question exam that students are given only 10 minutes to solve. Additionally, no scratch work or paper calculations are allowed. These questions range from simple calculations such as 99+98 to more complicated operations such as 1001×1938. Each calculation is able to be done with a certain trick or shortcut that makes the calculations easier. The high school exam includes calculus and other difficult topics in the questions also with the same rules applied as to the middle school version. It is well known that the grading for this event is particularly stringent as errors such as writing over a line or crossing out potential answers are considered as incorrect answers. General Mathematics is a 50-question exam that students are given only 40 minutes to solve. These problems are usually more challenging than questions on the Number Sense test, and the General Mathematics word problems take more thinking to figure out. Every problem correct is worth 5 points, and for every problem incorrect, 2 points are deducted. Tiebreakers are determined by the person that misses the first problem and by percent accuracy. Calculator Applications is an 80-question exam that students are given only 30 minutes to solve. This test requires practice on the calculator, knowledge of a few crucial formulas, and much speed and intensity. Memorizing formulas, tips, and tricks will not be enough. In this event, plenty of practice is necessary in order to master the locations of the keys and develop the speed necessary. All correct questions are worth 5 The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Jennifer and Mark prepared a layer cake using oil and water. After the cake baked in the oven, they added frosting. Which property could be measured with a balance? A. the temperature of the oven B. the mass of the frosting C. the height of the layers D. the volume of the oil Answer:
scienceQA-6783
multiple_choice
Which of these pictures shows a natural resource?
[ "footballs", "cookies", "trees", "jump ropes" ]
C
The picture of trees shows a natural resource. The trees come directly from nature, and people can use them in many ways. The other answers are not correct. Footballs, cookies, and jump ropes do not come directly from nature; they are made by people.
Relavent Documents: Document 0::: Braiding Sweetgrass: Indigenous Wisdom, Scientific Knowledge, and the Teachings of Plants is a 2013 nonfiction book by Potawatomi professor Robin Wall Kimmerer, about the role of Indigenous knowledge as an alternative or complementary approach to Western mainstream scientific methodologies. Braiding Sweetgrass explores reciprocal relationships between humans and the land, with a focus on the role of plants and botany in both Native American and Western traditions. The book received largely positive reviews, and has appeared on several bestseller lists. Kimmerer is known for her scholarship on traditional ecological knowledge, ethnobotany, and moss ecology. Contents Braiding Sweetgrass: Indigenous Wisdom, Scientific Knowledge, and the Teachings of Plants is about botany and the relationship to land in Native American traditions. Kimmerer, who is an enrolled member of the Citizen Potawatomi Nation, writes about her personal experiences working with plants and reuniting with her people's cultural traditions. She also presents the history of the plants and botany from a scientific perspective. The series of essays in five sections begins with "Planting Sweetgrass", and progresses through "Tending," "Picking," "Braiding," and "Burning Sweetgrass." Environmental Philosophy says that this progression of headings "signals how Kimmerer's book functions not only as natural history but also as ceremony, the latter of which plays a decisive role in how Kimmerer comes to know the living world." Kimmerer describes Braiding Sweetgrass as "[A] braid of stories ... woven from three strands: indigenous ways of knowing, scientific knowledge, and the story of an Anishinabeckwe scientist trying to bring them together in service to what matters most." She also calls the work "an intertwining of science, spirit, and story." American Indian Quarterly writes that Braiding Sweetgrass is a book about traditional ecological knowledge and environmental humanities. Kimmerer combines her Document 1::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. 
Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 2::: The Crop Science Centre (informally known as 3CS during its planning) is an alliance between the University of Cambridge and National Institute of Agricultural Botany (NIAB). History The Crop Science Centre development plans began in 2015, between the University of Cambridge's Department of Plant Sciences, NIAB (National Institute of Agricultural Botany) and the Sainsbury Laboratory. The research institute received £16.9m funding in 2017 from the UK Research Partnership Investment Fund (UKRPIF) from Research England (United Kingdom Research and Innovation or UKRI) to build a new state-of-the-art building, designed exclusively for crop research, which opened on the 1st October 2020. Structure The Crop Science Centre is based at NIAB's Lawrence Weaver Road HQ site in Cambridge. Document 3::: The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work. History It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council. Function Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres. STEM ambassadors To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. these come from a wide selection of the STEM industries and include TV personalities like Rob Bell. Funding STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments. See also The WISE Campaign Engineering and Physical Sciences Research Council National Centre for Excellence in Teaching Mathematics Association for Science Education Glossary of areas of mathematics Glossary of astronomy Glossary of biology Glossary of chemistry Glossary of engineering Glossary of physics Document 4::: The northern riverine forest is a type of forest ecology most dominant along waterways in the northeastern and north-central United States and bordering areas of Canada. Key species include willow, elm, American sycamore, painted trillium, goldthread, common wood-sorrel, pink lady's-slipper, wild sarsaparilla, and cottonwood. One of the distinct ecosystems is the Riverine Forest. 
These are found on the lower flood plains along the river's edge. The main species found here is a deciduous species, the balsam poplar. These trees like a high volume of moisture and are able to tolerate flooding. They are distinguishable by their thick, gnarly bark and their larger, pointed leaves. These leaves have a distinct drip tip. The trees supply homes for the many native species of fauna. Other key trees include yellow birch, white birch, sugar maple, American beech, eastern hemlock, white pine, red pine, northern red oak, pin cherry, and red spruce. Key shrubs include striped maple and hobblebush.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which of these pictures shows a natural resource?
A. footballs
B. cookies
C. trees
D. jump ropes
Answer:
sciq-1362
multiple_choice
Temporal and spatial summation at the axon hillock determines whether a neuron generates what?
[ "action potential", "false potential", "change potential", "hidden potential" ]
A
Relavent Documents: Document 0::: Spike directivity is a vector that quantifies changes in transient charge density during action potential propagation. The digital-like uniformity of action potentials is contradicted by experimental data. Electrophysiologists have observed that the shape of recorded action potentials changes in time. Recent experimental evidence has shown that action potentials in neurons are subject to waveform modulation while they travel down axons or dendrites. The action potential waveform can be modulated by neuron geometry, local alterations in the ion conductance, and other biophysical properties including neurotransmitter release. See also Cellular neuroscience Neuron NeuroElectroDynamics Document 1::: Gregory A. Clark is a professor in the Departments of Biomedical Engineering and Computer Science at the University of Utah; he is also the Director for the Center for Neural Interfaces at the University of Utah. Dr. Clark’s current research is in neuroprostheses, bioengineering, sensory information processing, and electrophysiological and computational analyses of neuronal plasticity in simple systems. Education Dr. Clark studied Psychology at Brown University; after receiving his B.A., Dr.Clark completed his Ph.D. with the Department of Psychobiology at the University of California, Irvine. Career and research In 1981 Dr. Clark lectured for the Department of Psychology at Stanford University. Upon leaving Stanford, Dr. Clark went to the College of Physicians and Surgeons at Columbia University. While there, he completed a postdoctoral fellowship at the Center for Neurobiology and Behavior between 1982 and 1984, continued on as a research associate at the Howard Hughes Medical Institute for Molecular Neurobiology between 1984 and 1988, and became an instructor of clinical neurobiology for the Department of Psychiatry and Center for Neurobiology and Behavior from 1986 to 1988. Following his time at Columbia, Dr. Clark became an assistant professor in the Department of Psychology at Princeton University from 1988 to 1996. After his time at Princeton, he became an associate professor in the Department of Biomedical Engineering at the University of Utah in 1996, gaining tenure in 2001. In 2009 he became an adjunct associate professor in the Department of Computer Science at the University of Utah. In 2015, he became the director of the Center for Neural Interfaces. Dr. Clark has made contributions to a variety of research areas, including neuroprostheses, bioengineering, sensory information processing, and electrophysiological and computational analyses of neuronal plasticity in simple systems (Aplysia and Hermissenda). Specifically, these contributions have in Document 2::: Computational neuroscience (also known as theoretical neuroscience or mathematical neuroscience) is a branch of neuroscience which employs mathematics, computer science, theoretical analysis and abstractions of the brain to understand the principles that govern the development, structure, physiology and cognitive abilities of the nervous system. Computational neuroscience employs computational simulations to validate and solve mathematical models, and so can be seen as a sub-field of theoretical neuroscience; however, the two fields are often synonymous. The term mathematical neuroscience is also used sometimes, to stress the quantitative nature of the field. 
Computational neuroscience focuses on the description of biologically plausible neurons (and neural systems) and their physiology and dynamics, and it is therefore not directly concerned with biologically unrealistic models used in connectionism, control theory, cybernetics, quantitative psychology, machine learning, artificial neural networks, artificial intelligence and computational learning theory; although mutual inspiration exists and sometimes there is no strict limit between fields, with model abstraction in computational neuroscience depending on research scope and the granularity at which biological entities are analyzed. Models in theoretical neuroscience are aimed at capturing the essential features of the biological system at multiple spatial-temporal scales, from membrane currents, and chemical coupling via network oscillations, columnar and topographic architecture, nuclei, all the way up to psychological faculties like memory, learning and behavior. These computational models frame hypotheses that can be directly tested by biological or psychological experiments. History The term 'computational neuroscience' was introduced by Eric L. Schwartz, who organized a conference, held in 1985 in Carmel, California, at the request of the Systems Development Foundation to provide a summary of the curr Document 3::: In computational neuroscience, the Wilson–Cowan model describes the dynamics of interactions between populations of very simple excitatory and inhibitory model neurons. It was developed by Hugh R. Wilson and Jack D. Cowan and extensions of the model have been widely used in modeling neuronal populations. The model is important historically because it uses phase plane methods and numerical solutions to describe the responses of neuronal populations to stimuli. Because the model neurons are simple, only elementary limit cycle behavior, i.e. neural oscillations, and stimulus-dependent evoked responses are predicted. The key findings include the existence of multiple stable states, and hysteresis, in the population response. The model was inspired as the neural analog of Rayleigh–Bénard convection cloud patterns in fluid thermodynamics. Mathematical description The Wilson–Cowan model considers a homogeneous population of interconnected neurons of excitatory and inhibitory subtypes. All cells receive the same number of excitatory and inhibitory afferents, that is, all cells receive the same average excitation, x(t). The target is to analyze the evolution in time of number of excitatory and inhibitory cells firing at time t, and respectively. The equations that describes this evolution are the Wilson-Cowan model: where: and are functions of sigmoid form that depends on the distribution of the trigger thresholds (see below) is the stimulus decay function and are respectively the connectivity coefficient giving the average number of excitatory and inhibitory synapses per excitatory cell; and its counterparts for inhibitory cells and are the external input to the excitatory/inhibitory populations. If denotes a cell's threshold potential and is the distribution of thresholds in all cells, then the expected proportion of neurons receiving an excitation at or above threshold level per unit time is: , that is a function of sigmoid form if is un Document 4::: Plateau potentials, caused by persistent inward currents (PICs), are a type of electrical behavior seen in neurons. Spinal Cord Plateau potentials are of particular importance to spinal cord motor systems. 
PICs are set up by the influence of descending monoaminergic reticulospinal pathways. Metabotropic neurotransmitters, via monoaminergic input such as 5-HT and norepinephrine, modulate the activity of dendritic L-type calcium channels that allow a sustained, positive inward current into the cell. This leads to a lasting depolarisation. In this state, the cell fires action potentials independently of synaptic input. The PICs can be turned off via the activation of high-frequency inhibitory input, at which point the cell returns to a resting state. Olfactory Bulb Periglomerular cells, inhibitory interneurons that surround and innervate olfactory glomeruli, have also been shown to exhibit plateau potentials. Cortex and Hippocampus Plateau potentials are also seen in cortical and hippocampal pyramidal neurons. Using iontophoretic or two-photon glutamate-uncaging experiments, it has been discovered that these plateau potentials involve the activity of voltage-dependent calcium channels and NMDA receptors.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Temporal and spatial summation at the axon hillock determines whether a neuron generates what?
A. action potential
B. false potential
C. change potential
D. hidden potential
Answer:
sciq-10790
multiple_choice
When bacteria enter the bloodstream, the result is what condition?
[ "diarrhea", "hypertension", "hypoxia", "septicemia" ]
D
Relavent Documents: Document 0::: Exogenous bacteria are microorganisms introduced to closed biological systems from the external world. They exist in aquatic and terrestrial environments, as well as the atmosphere. Microorganisms in the external environment have existed on Earth for 3.5 billion years. Exogenous bacteria can be either benign or pathogenic. Pathogenic exogenous bacteria can enter a closed biological system and cause disease such as Cholera, which is induced by a waterborne microbe that infects the human intestine. Exogenous bacteria can be introduced into a closed ecosystem as well, and have mutualistic benefits for both the microbe and the host. A prominent example of this concept is bacterial flora, which consists of exogenous bacteria ingested and endogenously colonized during the early stages of life. Bacteria that are part of normal internal ecosystems, also known as bacterial flora, are called Endogenous Bacteria. A significant amount of prominent diseases are induced by exogenous bacteria such as gonorrhea, meningitis, tetanus, and syphilis. Pathogenic exogenous bacteria can enter a host via cutaneous transmission, inhalation, and consumption. Difference with endogenous bacteria Only a minority of bacteria species cause disease in humans; and many species colonize in the human body to create an ecosystem known as microbiota. Bacterial flora is endogenous bacteria, which is defined as bacteria that naturally reside in a closed system. Disease can occur when microbes included in normal bacteria flora enter a sterile area of the body such as the brain or muscle. This is considered an endogenous infection. A prime example of this is when the residential bacterium E. coli of the GI tract enters the urinary tract. This causes a urinary tract infection. Infections caused by exogenous bacteria occurs when microbes that are noncommensal enter a host. These microbes can enter a host via inhalation of aerosolized bacteria, ingestion of contaminated or ill-prepared foods, sexual activi Document 1::: A blood-borne disease is a disease that can be spread through contamination by blood and other body fluids. Blood can contain pathogens of various types, chief among which are microorganisms, like bacteria and parasites, and non-living infectious agents such as viruses. Three blood-borne pathogens in particular, all viruses, are cited as of primary concern to health workers by the CDC-NIOSH: HIV, hepatitis B (HVB), & hepatitis C (HVC). Diseases that are not usually transmitted directly by blood contact, but rather by insect or other vector, are more usefully classified as vector-borne disease, even though the causative agent can be found in blood. Vector-borne diseases include West Nile virus, zika fever and malaria. Many blood-borne diseases can also be contracted by other means, including high-risk sexual behavior or intravenous drug use. These diseases have also been identified in sports medicine. Since it is difficult to determine what pathogens any given sample of blood contains, and some blood-borne diseases are lethal, standard medical practice regards all blood (and any body fluid) as potentially infectious. "Blood and body fluid precautions" are a type of infection control practice that seeks to minimize this sort of disease transmission. Occupational exposure Blood poses the greatest threat to health in a laboratory or clinical setting due to needlestick injuries (e.g., lack of proper needle disposal techniques and/or safety syringes). 
Needles are not the only issue, as direct splashes of blood also cause transmission. These risks are greatest among healthcare workers, including: nurses, surgeons, laboratory assistants, doctors, phlebotomists, and laboratory technicians. These roles often require the use of syringes for blood draws or to administer medications. The Occupational Safety and Health Administration (OSHA) prescribes 5 rules that are required for a healthcare facility to follow in order to reduce the risk of employee exposure to blood-bor Document 2::: Sepsis, also known as septicemia, septicaemia, or blood poisoning, is a potentially life-threatening condition that arises when the body's response to infection causes injury to its own tissues and organs. This initial stage of sepsis is followed by suppression of the immune system. Common signs and symptoms include fever, increased heart rate, increased breathing rate, and confusion. There may also be symptoms related to a specific infection, such as a cough with pneumonia, or painful urination with a kidney infection. The very young, old, and people with a weakened immune system may have no symptoms of a specific infection, and the body temperature may be low or normal instead of having a fever. Severe sepsis causes poor organ function or blood flow. The presence of low blood pressure, high blood lactate, or low urine output may suggest poor blood flow. Septic shock is low blood pressure due to sepsis that does not improve after fluid replacement. Sepsis is caused by many organisms including bacteria, viruses and fungi. Common locations for the primary infection include the lungs, brain, urinary tract, skin, and abdominal organs. Risk factors include being very young or old, a weakened immune system from conditions such as cancer or diabetes, major trauma, and burns. Previously, a sepsis diagnosis required the presence of at least two systemic inflammatory response syndrome (SIRS) criteria in the setting of presumed infection. In 2016, a shortened sequential organ failure assessment score (SOFA score), known as the quick SOFA score (qSOFA), replaced the SIRS system of diagnosis. qSOFA criteria for sepsis include at least two of the following three: increased breathing rate, change in the level of consciousness, and low blood pressure. Sepsis guidelines recommend obtaining blood cultures before starting antibiotics; however, the diagnosis does not require the blood to be infected. Medical imaging is helpful when looking for the possible location of the infection. Document 3::: Bacteriuria is the presence of bacteria in urine. Bacteriuria accompanied by symptoms is a urinary tract infection while that without is known as asymptomatic bacteriuria. Diagnosis is by urinalysis or urine culture. Escherichia coli is the most common bacterium found. People without symptoms should generally not be tested for the condition. Differential diagnosis include contamination. If symptoms are present, treatment is generally with antibiotics. Bacteriuria without symptoms generally does not require treatment. Exceptions may include pregnant women, those who have had a recent kidney transplant, young children with significant vesicoureteral reflux, and those undergoing surgery of the urinary tract. Bacteriuria without symptoms is present in about 3% of otherwise healthy middle aged women. In nursing homes rates are as high as 50% among women and 40% in men. In those with a long term indwelling urinary catheter rates are 100%. 
Up to 10% of women have a urinary tract infection in a given year and half of all women have at least one infection at some point in their lives. There is an increased risk of asymptomatic or symptomatic bacteriuria in pregnancy due to physiological changes that occur in a pregnant women which promotes unwanted pathogen growth in the urinary tract. Signs and symptoms Asymptomatic Asymptomatic bacteriuria is bacteriuria without accompanying symptoms of a urinary tract infection and is commonly caused by the bacterium Escherichia coli. Other potential pathogens are Klebsiella spp., and group B streptococci. It is more common in women, in the elderly, in residents of long-term care facilities, and in people with diabetes, bladder catheters, and spinal cord injuries. People with a long-term Foley catheter always show bacteriuria. Chronic asymptomatic bacteriuria occurs in as many as 50% of the population in long-term care. There is an association between asymptomatic bacteriuria in pregnant women with low birth weight, preterm delivery Document 4::: An infection rate (or incident rate) is the probability or risk of an infection in a population. It is used to measure the frequency of occurrence of new instances of infection within a population during a specific time period. The number of infections equals the cases identified in the study or observed. An example would be HIV infection during a specific time period in the defined population. The population at risk are the cases appearing in the population during the same time period. An example would be all the people in a city during a specific time period. The constant, or K is assigned a value of 100 to represent a percentage. An example would be to find the percentage of people in a city who are infected with HIV: 6,000 cases in March divided by the population of a city (one million) multiplied by the constant (K) would give an infection rate of 0.6%. Calculating the infection rate is used to analyze trends for the purpose of infection and disease control. An online infection rate calculator has been developed by the Centers for Disease Control and Prevention that allows the determination of the Streptococcal A infection rate in a population. Clinical applications Health care facilities routinely track their infection rates according to the guidelines issued by the Joint Commission. The healthcare-associated infection (HAI) rates measure infection of patients in a particular hospital. This allows rates to compared with other hospitals. These infections can often be prevented when healthcare facilities follow guidelines for safe care. To get payment from Medicare, hospitals are required to report data about some infections to the Centers for Disease Control and Prevention's (CDC's) National Healthcare Safety Network (NHSN). Hospitals currently submit information on central line-associated bloodstream infections (CLABSIs), catheter-associated urinary tract infections (CAUTIs), surgical site infections (SSIs), MRSA Bacteremia, and C. difficile laboratory-i The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. When bacteria enter the bloodstream, the result is what condition? A. diarrhea B. hypertension C. hypoxia D. septicemia Answer:
sciq-5729
multiple_choice
What type of roads and parking lots prevent rainwater from soaking into the ground?
[ "dirt", "paved", "shaded", "gravel" ]
B
Relavent Documents: Document 0::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. 
An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields. Description The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions. The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.” Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers. Current efforts The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo Document 3::: The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work. History It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council. Function Its chief aim is to interest children in science, technology, engineering and mathematics. 
Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres. STEM ambassadors To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. these come from a wide selection of the STEM industries and include TV personalities like Rob Bell. Funding STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments. See also The WISE Campaign Engineering and Physical Sciences Research Council National Centre for Excellence in Teaching Mathematics Association for Science Education Glossary of areas of mathematics Glossary of astronomy Glossary of biology Glossary of chemistry Glossary of engineering Glossary of physics Document 4::: Tech City College (Formerly STEM Academy) is a free school sixth form located in the Islington area of the London Borough of Islington, England. It originally opened in September 2013, as STEM Academy Tech City and specialised in Science, Technology, Engineering and Maths (STEM) and the Creative Application of Maths and Science. In September 2015, STEM Academy joined the Aspirations Academy Trust was renamed Tech City College. Tech City College offers A-levels and BTECs as programmes of study for students. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of roads and parking lots prevent rainwater from soaking into the ground? A. dirt B. paved C. shaded D. gravel Answer:
sciq-7630
multiple_choice
What landform occurs most often along plate boundaries?
[ "sinkholes", "geysers", "dunes", "volcanoes" ]
D
Relavent Documents: Document 0::: Maui Nui is a modern geologists' name given to a prehistoric Hawaiian island and the corresponding modern biogeographic region. Maui Nui is composed of four modern islands: Maui, Molokaʻi, Lānaʻi, and Kahoʻolawe. Administratively, the four modern islands comprise Maui County (and a tiny part of Molokaʻi called Kalawao County). Long after the breakup of Maui Nui, the four modern islands retained plant and animal life similar to each other. Thus, Maui Nui is not only a prehistoric island but also a modern biogeographic region. Geology Maui Nui formed and broke up during the Pleistocene Epoch, which lasted from about 2.58 million to 11,700 years ago. Maui Nui is built from seven shield volcanoes. The three oldest are Penguin Bank, West Molokaʻi, and East Molokaʻi, which probably range from slightly over to slightly less than 2 million years old. The four younger volcanoes are Lāna‘i, West Maui, Kaho‘olawe, and Haleakalā, which probably formed between 1.5 and 2 million years ago. At its prime 1.2 million years ago, Maui Nui was , 50% larger than today's Hawaiʻi Island. The island of Maui Nui included four modern islands (Maui, Molokaʻi, Lānaʻi, and Kahoʻolawe) and landmass west of Molokaʻi called Penguin Bank, which is now completely submerged. Maui Nui broke up as rising sea levels flooded the connections between the volcanoes. The breakup was complex because global sea levels rose and fell intermittently during the Quaternary glaciation. About 600,000 years ago, the connection between Molokaʻi and the island of Lāna‘i/Maui/Kahoʻolawe became intermittent. About 400,000 years ago, the connection between Lāna‘i and Maui/Kahoʻolawe also became intermittent. The connection between Maui and Kahoʻolawe was permanently broken between 200,000 and 150,000 years ago. Maui, Lāna‘i, and Molokaʻi were connected intermittently thereafter, most recently about 18,000 years ago during the Last Glacial Maximum. Today, the sea floor between these four islands is relatively shallow Document 1::: In structural geology, a suture is a joining together along a major fault zone, of separate terranes, tectonic units that have different plate tectonic, metamorphic and paleogeographic histories. The suture is often represented on the surface by an orogen or mountain range. Overview In plate tectonics, sutures are the remains of subduction zones, and the terranes that are joined together are interpreted as fragments of different palaeocontinents or tectonic plates. Outcrops of sutures can vary in width from a few hundred meters to a couple of kilometers. They can be networks of mylonitic shear zones or brittle fault zones, but are usually both. Sutures are usually associated with igneous intrusions and tectonic lenses with varying kinds of lithologies from plutonic rocks to ophiolitic fragments. An example from Great Britain is the Iapetus Suture which, though now concealed beneath younger rocks, has been determined by geophysical means to run along a line roughly parallel with the Anglo-Scottish border and represents the joint between the former continent of Laurentia to the north and the former micro-continent of Avalonia to the south. Avalonia is in fact a plain which dips steeply northwestwards through the crust, underthrusting Laurentia. 
Paleontological use When used in paleontology, suture can also refer to fossil exoskeletons, as in the suture line, a division on a trilobite between the free cheek and the fixed cheek; this suture line allowed the trilobite to perform ecdysis (the shedding of its skin). Document 2::: The geologic record in stratigraphy, paleontology and other natural sciences refers to the entirety of the layers of rock strata. That is, deposits laid down by volcanism or by deposition of sediment derived from weathering detritus (clays, sands etc.). This includes all its fossil content and the information it yields about the history of the Earth: its past climate, geography, geology and the evolution of life on its surface. According to the law of superposition, sedimentary and volcanic rock layers are deposited on top of each other. They harden over time to become a solidified (competent) rock column, that may be intruded by igneous rocks and disrupted by tectonic events. Correlating the rock record At a certain locality on the Earth's surface, the rock column provides a cross section of the natural history in the area during the time covered by the age of the rocks. This is sometimes called the rock history and gives a window into the natural history of the location that spans many geological time units such as ages, epochs, or in some cases even multiple major geologic periods—for the particular geographic region or regions. The geologic record is in no one place entirely complete for where geologic forces one age provide a low-lying region accumulating deposits much like a layer cake, in the next may have uplifted the region, and the same area is instead one that is weathering and being torn down by chemistry, wind, temperature, and water. This is to say that in a given location, the geologic record can be and is quite often interrupted as the ancient local environment was converted by geological forces into new landforms and features. Sediment core data at the mouths of large riverine drainage basins, some of which go deep thoroughly support the law of superposition. However using broadly occurring deposited layers trapped within differently located rock columns, geologists have pieced together a system of units covering most of the geologic time scale Document 3::: The core–mantle boundary (CMB) of Earth lies between the planet's silicate mantle and its liquid iron–nickel outer core, at a depth of below Earth's surface. The boundary is observed via the discontinuity in seismic wave velocities at that depth due to the differences between the acoustic impedances of the solid mantle and the molten outer core. P-wave velocities are much slower in the outer core than in the deep mantle while S-waves do not exist at all in the liquid portion of the core. Recent evidence suggests a distinct boundary layer directly above the CMB possibly made of a novel phase of the basic perovskite mineralogy of the deep mantle named post-perovskite. Seismic tomography studies have shown significant irregularities within the boundary zone and appear to be dominated by the African and Pacific Large Low-Shear-Velocity Provinces (LLSVP). The uppermost section of the outer core is thought to be about 500–1,800 K hotter than the overlying mantle, creating a thermal boundary layer. The boundary is thought to harbor topography, much like Earth's surface, that is supported by solid-state convection within the overlying mantle. 
Variations in the thermal properties of the core-mantle boundary may affect how the outer core's iron-rich fluids flow, which are ultimately responsible for Earth's magnetic field. The D″ region The approx. 200 km thick layer of the lower mantle directly above the boundary is referred to as the D″ region ("D double-prime" or "D prime prime") and is sometimes included in discussions regarding the core–mantle boundary zone. The D″ name originates from geophysicist Keith Bullen's designations for the Earth's layers. His system was to label each layer alphabetically, A through G, with the crust as 'A' and the inner core as 'G'. In his 1942 publication of his model, the entire lower mantle was the D layer. In 1949, Bullen found his 'D' layer to actually be two different layers. The upper part of the D layer, about 1800 km thick, was r Document 4::: In geodynamics lower crustal flow is the mainly lateral movement of material within the lower part of the continental crust by a ductile flow mechanism. It is thought to be an important process during both continental collision and continental break-up. Rheology The tendency of the lower crust to flow is controlled by its rheology. Ductile flow in the lower crust is assumed to be controlled by the deformation of quartz and/or plagioclase feldspar as its composition is thought to be granodioritic to dioritic. With normal thickness continental crust and a normal geothermal gradient, the lower crust, below the brittle–ductile transition zone, exhibits ductile flow behaviour under geological strain rates. Factors that can vary this behaviour include: water content, thickness, heat flow and strain-rate. Collisional belts In some areas of continental collision, the lower part of the thickened crust that results is interpreted to flow laterally, such as in the Tibetan plateau, and the Altiplano in the Bolivian Andes. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What landform occurs most often along plate boundaries? A. sinkholes B. geysers C. dunes D. volcanoes Answer:
scienceQA-10520
multiple_choice
How long is a tennis racket?
[ "70 meters", "70 centimeters", "70 millimeters", "70 kilometers" ]
B
The best estimate for the length of a tennis racket is 70 centimeters. 70 millimeters is too short. 70 meters and 70 kilometers are too long.
Relavent Documents: Document 0::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. 
An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: Further Mathematics is the title given to a number of advanced secondary mathematics courses. The term "Higher and Further Mathematics", and the term "Advanced Level Mathematics", may also refer to any of several advanced mathematics courses at many institutions. In the United Kingdom, Further Mathematics describes a course studied in addition to the standard mathematics AS-Level and A-Level courses. In the state of Victoria in Australia, it describes a course delivered as part of the Victorian Certificate of Education (see § Australia (Victoria) for a more detailed explanation). Globally, it describes a course studied in addition to GCE AS-Level and A-Level Mathematics, or one which is delivered as part of the International Baccalaureate Diploma. In other words, more mathematics can also be referred to as part of advanced mathematics, or advanced level math. United Kingdom Background A qualification in Further Mathematics involves studying both pure and applied modules. Whilst the pure modules (formerly known as Pure 4–6 or Core 4–6, now known as Further Pure 1–3, where 4 exists for the AQA board) build on knowledge from the core mathematics modules, the applied modules may start from first principles. The structure of the qualification varies between exam boards. With regard to Mathematics degrees, most universities do not require Further Mathematics, and may incorporate foundation math modules or offer "catch-up" classes covering any additional content. Exceptions are the University of Warwick, the University of Cambridge which requires Further Mathematics to at least AS level; University College London requires or recommends an A2 in Further Maths for its maths courses; Imperial College requires an A in A level Further Maths, while other universities may recommend it or may promise lower offers in return. Some schools and colleges may not offer Further mathematics, but online resources are available Although the subject has about 60% of its cohort obtainin Document 3::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. 
The average for Molecular is 630 while Ecological is 591. On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a Document 4::: The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work. History It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council. Function Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres. STEM ambassadors To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. these come from a wide selection of the STEM industries and include TV personalities like Rob Bell. Funding STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments. See also The WISE Campaign Engineering and Physical Sciences Research Council National Centre for Excellence in Teaching Mathematics Association for Science Education Glossary of areas of mathematics Glossary of astronomy Glossary of biology Glossary of chemistry Glossary of engineering Glossary of physics The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. How long is a tennis racket? A. 70 meters B. 70 centimeters C. 70 millimeters D. 70 kilometers Answer:
sciq-1200
multiple_choice
Deficiency of what is symptomized by nausea, fatigue and dizziness, and can be triggered by excessive sweating?
[ "electrolytes", "impurities", "calories", "salts" ]
A
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. 
A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 2::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 3::: Exercise-induced nausea is a feeling of sickness or vomiting which can occur shortly after exercise has stopped as well as during exercise itself. It may be a symptom of either over-exertion during exercise, or from too abruptly ending an exercise session. People engaged in high-intensity exercise such as aerobics and bicycling have reported experiencing exercise-induced nausea. Cause A study of 20 volunteers conducted at Nagoya University in Japan associated a higher degree of exercise-induced nausea after eating. Lack of hydration during exercise is a well known cause of headache and nausea. Exercising at a heavy rate causes blood flow to be taken away from the stomach, causing nausea. Another possible cause of exercise induced nausea is overhydration. Drinking too much water before, during, or after extreme exercise (such as a marathon) can cause nausea, diarrhea, confusion, and muscle tremors. Excessive water consumption reduces or dilutes electrolyte levels in the body causing hyponatremia. 
See also Exercise intolerance Exercise-induced bronchoconstriction Exercise-induced urticaria Exercise-associated hyponatremia Heat intolerance Ventilatory threshold Document 4::: The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields. Description The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions. The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.” Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers. Current efforts The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Deficiency of what is symptomized by nausea, fatigue and dizziness, and can be triggered by excessive sweating? A. electrolytes B. impurities C. calories D. salts Answer:
sciq-5041
multiple_choice
What is the particular sequence of amino acids in a longer chain called?
[ "carbon sequence", "amino acid sequence", "atomic sequence", "molecular sequence" ]
B
Relavent Documents: Document 0::: A sequence in biology is the one-dimensional ordering of monomers, covalently linked within a biopolymer; it is also referred to as the primary structure of a biological macromolecule. While it can refer to many different molecules, the term sequence is most often used to refer to a DNA sequence. See also Protein sequence DNA sequence Genotype Self-incompatibility in plants List of geneticists Human Genome Project Dot plot (bioinformatics) Multiplex Ligation-dependent Probe Amplification Sequence analysis Molecular biology Document 1::: Biomolecular structure is the intricate folded, three-dimensional shape that is formed by a molecule of protein, DNA, or RNA, and that is important to its function. The structure of these molecules may be considered at any of several length scales ranging from the level of individual atoms to the relationships among entire protein subunits. This useful distinction among scales is often expressed as a decomposition of molecular structure into four levels: primary, secondary, tertiary, and quaternary. The scaffold for this multiscale organization of the molecule arises at the secondary level, where the fundamental structural elements are the molecule's various hydrogen bonds. This leads to several recognizable domains of protein structure and nucleic acid structure, including such secondary-structure features as alpha helixes and beta sheets for proteins, and hairpin loops, bulges, and internal loops for nucleic acids. The terms primary, secondary, tertiary, and quaternary structure were introduced by Kaj Ulrik Linderstrøm-Lang in his 1951 Lane Medical Lectures at Stanford University. Primary structure The primary structure of a biopolymer is the exact specification of its atomic composition and the chemical bonds connecting those atoms (including stereochemistry). For a typical unbranched, un-crosslinked biopolymer (such as a molecule of a typical intracellular protein, or of DNA or RNA), the primary structure is equivalent to specifying the sequence of its monomeric subunits, such as amino acids or nucleotides. The primary structure of a protein is reported starting from the amino N-terminus to the carboxyl C-terminus, while the primary structure of DNA or RNA molecule is known as the nucleic acid sequence reported from the 5' end to the 3' end. The nucleic acid sequence refers to the exact sequence of nucleotides that comprise the whole molecule. Often, the primary structure encodes sequence motifs that are of functional importance. Some examples of such motif Document 2::: A nucleic acid sequence is a succession of bases within the nucleotides forming alleles within a DNA (using GACT) or RNA (GACU) molecule. This succession is denoted by a series of a set of five different letters that indicate the order of the nucleotides. By convention, sequences are usually presented from the 5' end to the 3' end. For DNA, with its double helix, there are two possible directions for the notated sequence; of these two, the sense strand is used. Because nucleic acids are normally linear (unbranched) polymers, specifying the sequence is equivalent to defining the covalent structure of the entire molecule. For this reason, the nucleic acid sequence is also termed the primary structure. The sequence represents biological information. Biological deoxyribonucleic acid represents the information which directs the functions of an organism. Nucleic acids also have a secondary structure and tertiary structure. 
Primary structure is sometimes mistakenly referred to as "primary sequence". However there is no parallel concept of secondary or tertiary sequence. Nucleotides Nucleic acids consist of a chain of linked units called nucleotides. Each nucleotide consists of three subunits: a phosphate group and a sugar (ribose in the case of RNA, deoxyribose in DNA) make up the backbone of the nucleic acid strand, and attached to the sugar is one of a set of nucleobases. The nucleobases are important in base pairing of strands to form higher-level secondary and tertiary structures such as the famed double helix. The possible letters are A, C, G, and T, representing the four nucleotide bases of a DNA strand – adenine, cytosine, guanine, thymine – covalently linked to a phosphodiester backbone. In the typical case, the sequences are printed abutting one another without gaps, as in the sequence AAAGTCTGAC, read left to right in the 5' to 3' direction. With regards to transcription, a sequence is on the coding strand if it has the same order as the transcribed RNA. Document 3::: Semantides (or semantophoretic molecules) are biological macromolecules that carry genetic information or a transcript thereof. Three different categories or semantides are distinguished: primary, secondary and tertiary. Primary Semantides are genes, which consist of DNA. Secondary semantides are chains of messenger RNA, which are transcribed from DNA. Tertiary semantides are polypeptides, which are translated from messenger RNA. In eukaryotic organisms, primary semantides may consist of nuclear, mitochondrial or plastid DNA. Not all primary semantides ultimately form tertiary semantides. Some primary semantides are not transcribed into mRNA (non-coding DNA) and some secondary semantides are not translated into polypeptides (non-coding RNA). The complexity of semantides varies greatly. For tertiary semantides, large globular polypeptide chains are most complex while structural proteins, consisting of repeating simple sequences, are least complex. The term semantide and related terms were coined by Linus Pauling and Emile Zuckerkandl. Although semantides are the major type of data used in modern phylogenetics, the term itself is not commonly used. Related terms Isosemantic DNA or RNA that differs in base sequence, but translate into identical polypeptide chains are referred to as being isosemantic. Episemantic Molecules that are synthesized by enzymes (tertiary semantides) are referred to as episemantic molecules. Episemantic molecules have a larger variety in types than semantides, which only consist of three types (DNA, RNA or polypeptides). Not all polypeptides are tertiary semantides. Some, mainly small polypeptides, can also be episemantic molecules. Asemantic Molecules that are not produced by an organism are referred to as asemantic molecules, because they do not contain any genetic information. Asementic molecules may be changed into episemantic molecules by anabolic processes. Asemantic molecules may also become semantic molecules when they integrate Document 4::: In biology, a sequence motif is a nucleotide or amino-acid sequence pattern that is widespread and usually assumed to be related to biological function of the macromolecule. For example, an N-glycosylation site motif can be defined as Asn, followed by anything but Pro, followed by either Ser or Thr, followed by anything but Pro residue. 
Overview When a sequence motif appears in the exon of a gene, it may encode the "structural motif" of a protein; that is a stereotypical element of the overall structure of the protein. Nevertheless, motifs need not be associated with a distinctive secondary structure. "Noncoding" sequences are not translated into proteins, and nucleic acids with such motifs need not deviate from the typical shape (e.g. the "B-form" DNA double helix). Outside of gene exons, there exist regulatory sequence motifs and motifs within the "junk", such as satellite DNA. Some of these are believed to affect the shape of nucleic acids (see for example RNA self-splicing), but this is only sometimes the case. For example, many DNA binding proteins that have affinity for specific DNA binding sites bind DNA in only its double-helical form. They are able to recognize motifs through contact with the double helix's major or minor groove. Short coding motifs, which appear to lack secondary structure, include those that label proteins for delivery to particular parts of a cell, or mark them for phosphorylation. Within a sequence or database of sequences, researchers search and find motifs using computer-based techniques of sequence analysis, such as BLAST. Such techniques belong to the discipline of bioinformatics. See also consensus sequence. Motif Representation Consider the N-glycosylation site motif mentioned above: Asn, followed by anything but Pro, followed by either Ser or Thr, followed by anything but Pro This pattern may be written as N{P}[ST]{P} where N = Asn, P = Pro, S = Ser, T = Thr; {X} means any amino acid except X; and [XY] means either X o The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the particular sequence of amino acids in a longer chain called? A. carbon sequence B. amino acid sequence C. atomic sequence D. molecular sequence Answer:
sciq-7259
multiple_choice
Most of the heat that enters the mesosphere comes from where?
[ "Earth's surface", "the stratosphere", "Exosphere", "Troposphere" ]
B
Relavent Documents: Document 0::: Atmospheric temperature is a measure of temperature at different levels of the Earth's atmosphere. It is governed by many factors, including incoming solar radiation, humidity and altitude. When discussing surface air temperature, the annual atmospheric temperature range at any geographical location depends largely upon the type of biome, as measured by the Köppen climate classification Temperature versus altitude Temperature varies greatly at different heights relative to Earth's surface and this variation in temperature characterizes the four layers that exist in the atmosphere. These layers include the troposphere, stratosphere, mesosphere, and thermosphere. The troposphere is the lowest of the four layers, extending from the surface of the Earth to about into the atmosphere where the tropopause (the boundary between the troposphere stratosphere) is located. The width of the troposphere can vary depending on latitude, for example, the troposphere is thicker in the tropics (about ) because the tropics are generally warmer, and thinner at the poles (about ) because the poles are colder. Temperatures in the atmosphere decrease with height at an average rate of 6.5°C (11.7°F) per kilometer. Because the troposphere experiences its warmest temperatures closer to Earth's surface, there is great vertical movement of heat and water vapour, causing turbulence. This turbulence, in conjunction with the presence of water vapour, is the reason that weather occurs within the troposphere. Following the tropopause is the stratosphere. This layer extends from the tropopause to the stratopause which is located at an altitude of about . Temperatures remain constant with height from the tropopause to an altitude of , after which they start to increase with height. This happening is referred to as an inversion and It is because of this inversion that the stratosphere is not characterised as turbulent. The stratosphere receives its warmth from the sun and the ozone layer which ab Document 1::: Thermophysics is the application of thermodynamics to geophysics and to planetary science more broadly. It may also be used to refer to the field of thermodynamic and transport properties. Remote sensing Earth thermophysics is a branch of geophysics that uses the naturally occurring surface temperature as a function of the cyclical variation in solar radiation to characterise planetary material properties. Thermophysical properties are characteristics that control the diurnal, seasonal, or climatic surface and subsurface temperature variations (or thermal curves) of a material. The most important thermophysical property is thermal inertia, which controls the amplitude of the thermal curve and albedo (or reflectivity), which controls the average temperature. This field of observations and computer modeling was first applied to Mars due to the ideal atmospheric pressure for characterising granular materials based upon temperature. The Mariner 6, Mariner 7, and Mariner 9 spacecraft carried thermal infrared radiometers, and a global map of thermal inertia was produced from modeled surface temperatures collected by the Infrared Thermal Mapper Instruments (IRTM) on board the Viking 1 and 2 Orbiters. The original thermophysical models were based upon the studies of lunar temperature variations. 
Further development of the models for Mars included surface-atmosphere energy transfer, atmospheric back-radiation, surface emissivity variations, CO2 frost and blocky surfaces, variability of atmospheric back-radiation, effects of a radiative-convective atmosphere, and single-point temperature observations. Document 2::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 3::: An urban thermal plume describes rising air in the lower altitudes of the Earth's atmosphere caused by urban areas being warmer than surrounding areas. Over the past thirty years there has been increasing interest in what have been called urban heat islands (UHI), but it is only since 2007 that thought has been given to the rising columns of warm air, or ‘thermal plumes’ that they produce. Common on-shore breezes at the seaside on a warm day, and off-shore breezes at night are caused by the land heating up faster on a sunny day and cooling faster after sunset, respectively. Thermals, or warm airs, that rise from the land and sea affect the local microscale meteorology; and perhaps at times the mesometeorology. Urban thermal plumes have as powerful although less localized an effect. London is generally 3 to 9 Celsius hotter than the Home Counties. London’s meteorological aberrations were first studied by Luke Howard, FRS in the 1810s, but the notion that this large warm area would produce a significant urban thermal plume was not seriously proposed until very recently. 
Microscale thermal plumes, whose diameters may be measured in tens of metres, such as those produced by industrial chimney stacks, have been extensively investigated, but largely from the point of view of the plumes dispersal by local micrometeorology. Though their velocity is generally less, their very much greater magnitude (diameter) means that urban thermal plumes will have a more significant effect upon the mesometeorology and even continental macrometeorology. Climate change Decreasing Arctic sea ice cover is one of the most visible manifestations of climate change, often linked to rising global temperatures. However, there are several reports that shrinking polar ice is due more to changes in ambient wind direction than to increasing environmental temperatures per se. In 2006-07, a team led by Son Nghiem of NASA Jet Propulsion Laboratory, Pasadena, California, studied trends in Arctic perenn Document 4::: The Hadley cell, also known as the Hadley circulation, is a global-scale tropical atmospheric circulation that features air rising near the equator, flowing poleward near the tropopause at a height of above the Earth's surface, cooling and descending in the subtropics at around 25 degrees latitude, and then returning equatorward near the surface. It is a thermally direct circulation within the troposphere that emerges due to differences in insolation and heating between the tropics and the subtropics. On a yearly average, the circulation is characterized by a circulation cell on each side of the equator. The Southern Hemisphere Hadley cell is slightly stronger on average than its northern counterpart, extending slightly beyond the equator into the Northern Hemisphere. During the summer and winter months, the Hadley circulation is dominated by a single, cross-equatorial cell with air rising in the summer hemisphere and sinking in the winter hemisphere. Analogous circulations may occur in extraterrestrial atmospheres, such as on Venus and Mars. Global climate is greatly influenced by the structure and behavior of the Hadley circulation. The prevailing trade winds are a manifestation of the lower branches of the Hadley circulation, converging air and moisture in the tropics to form the Intertropical Convergence Zone (ITCZ) where the Earth's heaviest rains are located. Shifts in the ITCZ associated with the seasonal variability of the Hadley circulation cause monsoons. The sinking branches of the Hadley cells give rise to the oceanic subtropical ridges and suppress rainfall; many of the Earth's deserts and arid regions are located in the subtropics coincident with the position of the sinking branches. The Hadley circulation is also a key mechanism for the meridional transport of heat, angular momentum, and moisture, contributing to the subtropical jet stream, the moist tropics, and maintaining a global thermal equilibrium. The Hadley circulation is named after George The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Most of the heat that enters the mesosphere comes from where? A. Earth's surface B. the stratosphere C. Exosphere D. Troposphere Answer:
sciq-4780
multiple_choice
Glucagon and insulin are produced in what organ?
[ "thyroid", "thymus", "pancreas", "hypothalamus" ]
C
Relavent Documents: Document 0::: Hypothalamic-pituitary axis Hypothalamus Pineal body (epiphysis) Pituitary gland (hypophysis) The pituitary gland (or hypophysis) is an endocrine gland about the size of a pea and weighing in humans. It is a protrusion off the bottom of the hypothalamus at the base of the brain, and rests in a small, bony cavity (sella turcica) covered by a dural fold (diaphragma sellae). The pituitary is functionally connected to the hypothalamus by the median eminence via a small tube called the infundibular stem or pituitary stalk. The anterior pituitary (adenohypophysis) is connected to the hypothalamus via the hypothalamo–hypophyseal portal vessels, which allows for quicker and more efficient communication between the hypothalamus and the pituitary. Anterior pituitary lobe (adenohypophysis) Posterior pituitary lobe (neurohypophysis) Oxytocin and anti-diuretic hormone are not secreted in the posterior lobe, merely stored. Thyroid Digestive system Stomach Duodenum (small intestine) Liver Pancreas The pancreas is a heterocrine gland as it functions both as an endocrine and as an exocrine gland. Kidney Adrenal glands Adrenal cortex Adrenal medulla Reproductive Testes Ovarian follicle and corpus luteum Placenta (when pregnant) Uterus (when pregnant) Calcium regulation Parathyroid Skin Other Heart Bone Skeletal muscle In 1998, skeletal muscle was identified as an endocrine organ due to its now well-established role in the secretion of myokines. The use of the term myokine to describe cytokines and other peptides produced by muscle as signalling molecules was proposed in 2003. Adipose tissue Signalling molecules released by adipose tissue are referred to as adipokines. Document 1::: The insulin transduction pathway is a biochemical pathway by which insulin increases the uptake of glucose into fat and muscle cells and reduces the synthesis of glucose in the liver and hence is involved in maintaining glucose homeostasis. This pathway is also influenced by fed versus fasting states, stress levels, and a variety of other hormones. When carbohydrates are consumed, digested, and absorbed the pancreas senses the subsequent rise in blood glucose concentration and releases insulin to promote uptake of glucose from the bloodstream. When insulin binds to the insulin receptor, it leads to a cascade of cellular processes that promote the usage or, in some cases, the storage of glucose in the cell. The effects of insulin vary depending on the tissue involved, e.g., insulin is most important in the uptake of glucose by muscle and adipose tissue. This insulin signal transduction pathway is composed of trigger mechanisms (e.g., autophosphorylation mechanisms) that serve as signals throughout the cell. There is also a counter mechanism in the body to stop the secretion of insulin beyond a certain limit. Namely, those counter-regulatory mechanisms are glucagon and epinephrine. The process of the regulation of blood glucose (also known as glucose homeostasis) also exhibits oscillatory behavior. On a pathological basis, this topic is crucial to understanding certain disorders in the body such as diabetes, hyperglycemia and hypoglycemia. Transduction pathway The functioning of a signal transduction pathway is based on extra-cellular signaling that in turn creates a response that causes other subsequent responses, hence creating a chain reaction, or cascade. During the course of signaling, the cell uses each response for accomplishing some kind of a purpose along the way. 
Insulin secretion mechanism is a common example of signal transduction pathway mechanism. Insulin is produced by the pancreas in a region called Islets of Langerhans. In the islets of Langerha Document 2::: Pathophysiology of obesity is the study of disordered physiological processes that cause, result from, or are otherwise associated with obesity. A number of possible pathophysiological mechanisms have been identified which may contribute in the development and maintenance of obesity. Research This field of research had been almost unapproached until the leptin gene was discovered in 1994 by J. M. Friedman's laboratory. These investigators postulated that leptin was a satiety factor. In the ob/ob mouse, mutations in the leptin gene resulted in the obese phenotype opening the possibility of leptin therapy for human obesity. However, soon thereafter J. F. Caro's laboratory could not detect any mutations in the leptin gene in humans with obesity. On the contrary, leptin expression was increased, proposing the possibility of leptin-resistance in human obesity. Since this discovery, many other hormonal mechanisms have been elucidated that participate in the regulation of appetite and food intake, storage patterns of adipose tissue, and development of insulin resistance. Since leptin's discovery, ghrelin, insulin, orexin, PYY 3-36, cholecystokinin, adiponectin, as well as many other mediators have been studied. The adipokines are mediators produced by adipose tissue; their action is thought to modify many obesity-related diseases. Appetite Leptin and ghrelin are considered to be complementary in their influence on appetite, with ghrelin produced by the stomach modulating short-term appetitive control (i.e. to eat when the stomach is empty and to stop when the stomach is stretched). Leptin is produced by adipose tissue to signal fat storage reserves in the body, and mediates long-term appetitive controls (i.e. to eat more when fat storages are low and less when fat storages are high). Although administration of leptin may be effective in a small subset of obese individuals who are leptin-deficient, most obese individuals are thought to be leptin resistant and have been f Document 3::: The ob/ob or obese mouse is a mutant mouse that eats excessively due to mutations in the gene responsible for the production of leptin and becomes profoundly obese. It is an animal model of type II diabetes. Identification of the gene mutated in ob led to the discovery of the hormone leptin, which is important in the control of appetite. The first ob/ob mouse arose by chance in a colony at the Jackson Laboratory in 1949. The mutation is recessive. Mutant mice are phenotypically indistinguishable from their unaffected littermates at birth, but gain weight rapidly throughout their lives, reaching a weight three times that of unaffected mice. ob/ob mice develop high blood sugar, despite an enlargement of the pancreatic islets and increased levels of insulin. The gene affected by the ob mutation was identified by positional cloning. The gene produces a hormone, called leptin, that is produced predominantly in adipose tissue. One role of leptin is to regulate appetite by signalling to the brain that the animal has had enough to eat. Since the ob/ob mouse cannot produce leptin, its food intake is uncontrolled by this mechanism. A positional cloning approach in the Lepob mouse allows to identify the locus of the gene encoding for the ob protein. 
Clones were used to construct a contig across most of the 650-kb critical region of ob. Exons from this interval were trapped using the exon-trapping method, and each was then sequenced and searched against GenBank. One of the exons was hybridized to a Northern blot of mouse white adipose tissue (WAT). This made it possible to investigate the levels of ob gene expression, which appeared markedly increased in the WAT of Lepob mice, consistent with a biologically inactive truncated protein.

See also: Zucker rat

Document 4::: In the human endocrine system, a spongiocyte is a cell in the zona fasciculata of the adrenal cortex containing lipid droplets that show pronounced vacuolization, due to the way the cells are prepared for microscopic examination. The lipid droplets contain neutral fats, fatty acids, cholesterol, and phospholipids, all of which are precursors to the steroid hormones secreted by the adrenal glands. The principal hormones secreted by the cells of the zona fasciculata are glucocorticoids, but some androgens are produced as well.

The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.

Glucagon and insulin are produced in what organ?
A. thyroid
B. thymus
C. pancreas
D. hypothalamus
Answer:
sciq-4034
multiple_choice
What in hemoglobin gives red blood cells their red color?
[ "barium", "lead", "iron", "calcium" ]
C
Relavent Documents: Document 0::: 2,3-Bisphosphoglyceric acid (conjugate base 2,3-bisphosphoglycerate) (2,3-BPG), also known as 2,3-diphosphoglyceric acid (conjugate base 2,3-diphosphoglycerate) (2,3-DPG), is a three-carbon isomer of the glycolytic intermediate 1,3-bisphosphoglyceric acid (1,3-BPG). -2,3-BPG is present in human red blood cells (RBC; erythrocyte) at approximately 5 mmol/L. It binds with greater affinity to deoxygenated hemoglobin (e.g., when the red blood cell is near respiring tissue) than it does to oxygenated hemoglobin (e.g., in the lungs) due to conformational differences: 2,3-BPG (with an estimated size of about 9 Å) fits in the deoxygenated hemoglobin conformation (with an 11-Angstrom pocket), but not as well in the oxygenated conformation (5 Angstroms). It interacts with deoxygenated hemoglobin beta subunits and decreases the affinity for oxygen and allosterically promotes the release of the remaining oxygen molecules bound to the hemoglobin. Therefore, it enhances the ability of RBCs to release oxygen near tissues that need it most. 2,3-BPG is thus an allosteric effector. Its function was discovered in 1967 by Reinhold Benesch and Ruth Benesch. Metabolism 2,3-BPG is formed from 1,3-BPG by the enzyme BPG mutase. It can then be broken down by 2,3-BPG phosphatase to form 3-phosphoglycerate. Its synthesis and breakdown are, therefore, a way around a step of glycolysis, with the net expense of one ATP per molecule of 2,3-BPG generated as the high-energy carboxylic acid-phosphate mixed anhydride bond is cleaved by bisphosphoglycerate mutase. Document 1::: In hemocytometry, Türk's solution (or Türk's fluid) is a hematological stain (either crystal violet or aqueous methylene blue) prepared in 99% acetic acid (glacial) and distilled water. The solution destroys the red blood cells and platelets within a blood sample (acetic acid being the main lyzing agent), and stains the nuclei of the white blood cells, making them easier to see and count. Türk's solution is intended for use in determining total leukocyte count in a defined volume of blood. Erythrocytes are hemolyzed while leukocytes are stained for easy visualization. Composition of Türk's solution is as follows: Document 2::: – platelet factor 3 – platelet factor 4 – prothrombin – thrombin – thromboplastin – von willebrand factor – fibrin – fibrin fibrinogen degradation products – fibrin foam – fibrin tissue adhesive – fibrinopeptide a – fibrinopeptide b – glycophorin – hemocyanin – hemoglobins – carboxyhemoglobin – erythrocruorins – fetal hemoglobi Document 3::: Blood is a body fluid in the circulatory system of humans and other vertebrates that delivers necessary substances such as nutrients and oxygen to the cells, and transports metabolic waste products away from those same cells. Blood in the circulatory system is also known as peripheral blood, and the blood cells it carries, peripheral blood cells. Blood is composed of blood cells suspended in blood plasma. Plasma, which constitutes 55% of blood fluid, is mostly water (92% by volume), and contains proteins, glucose, mineral ions, hormones, carbon dioxide (plasma being the main medium for excretory product transportation), and blood cells themselves. Albumin is the main protein in plasma, and it functions to regulate the colloidal osmotic pressure of blood. The blood cells are mainly red blood cells (also called RBCs or erythrocytes), white blood cells (also called WBCs or leukocytes), and in mammals platelets (also called thrombocytes). 
The most abundant cells in vertebrate blood are red blood cells. These contain hemoglobin, an iron-containing protein, which facilitates oxygen transport by reversibly binding to this respiratory gas thereby increasing its solubility in blood. In contrast, carbon dioxide is mostly transported extracellularly as bicarbonate ion transported in plasma. Vertebrate blood is bright red when its hemoglobin is oxygenated and dark red when it is deoxygenated. Some animals, such as crustaceans and mollusks, use hemocyanin to carry oxygen, instead of hemoglobin. Insects and some mollusks use a fluid called hemolymph instead of blood, the difference being that hemolymph is not contained in a closed circulatory system. In most insects, this "blood" does not contain oxygen-carrying molecules such as hemoglobin because their bodies are small enough for their tracheal system to suffice for supplying oxygen. Jawed vertebrates have an adaptive immune system, based largely on white blood cells. White blood cells help to resist infections and parasite Document 4::: The human β-globin locus is composed of five genes located on a short region of chromosome 11, responsible for the creation of the beta parts (roughly half) of the oxygen transport protein Haemoglobin. This locus contains not only the beta globin gene but also delta, gamma-A, gamma-G, and epsilon globin. Expression of all of these genes is controlled by single locus control region (LCR), and the genes are differentially expressed throughout development. The order of the genes in the beta-globin cluster is: 5' - epsilon – gamma-G – gamma-A – delta – beta - 3'. The arrangement of the genes directly reflects the temporal differentiation of their expression during development, with the early-embryonic stage version of the gene located closest to the LCR. If the genes are rearranged, the gene products are expressed at improper stages of development. Expression of these genes is regulated in embryonic erythropoiesis by many transcription factors, including KLF1, which is associated with the upregulation of adult hemoglobin in adult definitive erythrocytes, and KLF2, which is vital to the expression of embryonic hemoglobin. HBB complex Many CRMs have been mapped within the cluster of genes encoding β-like globins expressed in embryonic (HBE1), fetal (HBG1 and HBG2), and adult (HBB and HBD) erythroid cells. All are marked by DNase I hypersensitive sites and footprints, and many are bound by GATA1 in peripheral blood derived erythroblasts (PBDEs). A DNA segment located between the HBG1 and HBD genes is one of the DNA segments bound by BCL11A and several other proteins to negatively regulate HBG1 and HBG2. It is sensitive to DNase I but is not conserved across mammals. An enhancer located 3′ of the HBG1 gene is bound by several proteins in PBDEs and K562 cells and is sensitive to DNase I, but shows almost no signal for mammalian constraint. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What in hemoglobin gives red blood cells their red color? A. barium B. lead C. iron D. calcium Answer:
sciq-8481
multiple_choice
Kangaroos, koalas and opossums are part of what group?
[ "primates", "monotremes", "marsupials", "cephalopods" ]
C
Relavent Documents: Document 0::: Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals. Education Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered. Bachelor degree At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs. Pre-veterinary emphasis Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th Document 1::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. 
Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New Document 2::: Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology. Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad Document 3::: Eric Michael Johnson (20 March 2014). The Gap: The Science of What Separates Us From Other Animals, by Thomas Suddendorf. The Times Higher Education. Document 4::: Reviews Anil Ananthaswamy (27 January 2014). What separates us from other animals? New Scientist Retrieved October 5, 2014, from https://www.newscientist.com/article/mg22129531.100-what-separates-us-from-other-animals.html Robyn Williams (March 2014). The science of what separates us from other animals. Australian Book Review. Retrieved on October 5, 2014, from http://www.australianbookreview.com.au/abr-online/current-issue/113-march-2014-no-359/1859-the-gap Joseph Maldonado (2013). The Gap: The Science of What Separates Us from Other Animals. Psych Central. Retrieved on October 5, 2014, from http://psychcentral.com/lib/the-gap-the-science-of-what-separates-us-from-other-animals/00018372 Steven Mithen (3 April 2013). Most of Us Are Part Neanderthal. The New York Review of Books. 
Retrieved on October 5, 2014, from http://www.nybooks.com/articles/archives/2014/apr/03/most-us-are-part-neanderthal/?page=2 Wray Herbert (10 February 2014). Social Animals - Pondering the limits of anthropomorphism. The Weekly Standard Vol. 19, No. 21. Retrieved on October 5, 2014, from http://www.weeklystandard.com/articles/social-animals_775990.html David Barash (15 November 2013). Book Review: 'The Gap' by Thomas Suddendorf - What makes humans unique—tools? Language? Cooking?. The Wall Street Journal. Retrieved on October 5, 2014, from https://www.wsj.com/articles/SB10001424052702304527504579169670682265630 Nina Bai (17 October 2013). MIND Reviews: The Gap. Scientific American Mind volume 24 issue 5. Retrieved on October 5, 2014, from http://www.scientificamerican.com/article/mind-reviews-the-gap/

The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.

Kangaroos, koalas and opossums are part of what group?
A. primates
B. monotremes
C. marsupials
D. cephalopods
Answer:
sciq-4026
multiple_choice
What is a longitudinal, flexible rod located between the digestive tube and the nerve cord?
[ "the notochord", "tubular gland", "the oscillatory", "the underlain" ]
A
Relavent Documents: Document 0::: An internodal segment (or internode) is the portion of a nerve fiber between two Nodes of Ranvier. The neurolemma or primitive sheath is not interrupted at the nodes, but passes over them as a continuous membrane. Document 1::: H2.00.04.4.01001: Lymphoid tissue H2.00.05.0.00001: Muscle tissue H2.00.05.1.00001: Smooth muscle tissue H2.00.05.2.00001: Striated muscle tissue H2.00.06.0.00001: Nerve tissue H2.00.06.1.00001: Neuron H2.00.06.2.00001: Synapse H2.00.06.2.00001: Neuroglia h3.01: Bones h3.02: Joints h3.03: Muscles h3.04: Alimentary system h3.05: Respiratory system h3.06: Urinary system h3.07: Genital system h3.08: Document 2::: The gastrointestinal wall of the gastrointestinal tract is made up of four layers of specialised tissue. From the inner cavity of the gut (the lumen) outwards, these are: Mucosa Submucosa Muscular layer Serosa or adventitia The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the lumen of the tract and comes into direct contact with digested food (chyme). The mucosa itself is made up of three layers: the epithelium, where most digestive, absorptive and secretory processes occur; the lamina propria, a layer of connective tissue, and the muscularis mucosae, a thin layer of smooth muscle. The submucosa contains nerves including the submucous plexus (also called Meissner's plexus), blood vessels and elastic fibres with collagen, that stretches with increased capacity but maintains the shape of the intestine. The muscular layer surrounds the submucosa. It comprises layers of smooth muscle in longitudinal and circular orientation that also helps with continued bowel movements (peristalsis) and the movement of digested material out of and along the gut. In between the two layers of muscle lies the myenteric plexus (also called Auerbach's plexus). The serosa/adventitia are the final layers. These are made up of loose connective tissue and coated in mucus so as to prevent any friction damage from the intestine rubbing against other tissue. The serosa is present if the tissue is within the peritoneum, and the adventitia if the tissue is retroperitoneal. Structure When viewed under the microscope, the gastrointestinal wall has a consistent general form, but with certain parts differing along its course. Mucosa The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the cavity (lumen) of the tract and comes into direct contact with digested food (chyme). The mucosa is made up of three layers: The epithelium is the innermost layer. It is where most digestive, absorptive and secretory processes occur. The lamina propr Document 3::: The epipharyngeal groove is a ciliated groove along the dorsal side of the inside of the pharynx in some plankton-feeding early chordates, such as Amphioxus. It helps to carry a stream of mucus with plankton stuck in it, through the pharynx into the gut to be digested. The subnotochordal rod or hypochord is a transient structure that appears ventral to the notochord in the heads of embryos of some vertebrates. Its appearance is stimulated by a chemical secreted by the notochord. The subnotochordal rod helps to stimulate development of the dorsal aorta. There is an opinion that these two structures are homologous. Document 4::: The preaortic lymph nodes lie in front of the aorta, and may be divided into celiac lymph nodes, superior mesenteric lymph nodes, and inferior mesenteric lymph nodes groups, arranged around the origins of the corresponding arteries. 
The celiac lymph nodes are grouped into three sets: the gastric, hepatic and splenic lymph nodes. These groups also form their own subgroups. The superior mesenteric lymph nodes are grouped into three sets: the mesenteric, ileocolic and mesocolic lymph nodes. The inferior mesenteric lymph nodes have a subgroup of pararectal lymph nodes. The preaortic lymph nodes receive a few vessels from the lateral aortic lymph nodes, but their principal afferents are derived from the organs supplied by the three arteries with which they are associated: the celiac, superior and inferior mesenteric arteries. Some of their efferents pass to the retroaortic lymph nodes, but the majority unite to form the intestinal lymph trunk, which enters the cisterna chyli.

The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.

What is a longitudinal, flexible rod located between the digestive tube and the nerve cord?
A. the notochord
B. tubular gland
C. the oscillatory
D. the underlain
Answer:
sciq-11022
multiple_choice
The vertebrate circulatory system enables blood to deliver ________ and remove wastes throughout the body.
[ "fluid and nutrients", "hydrogen and nutrients", "acid and nutrients", "oxygen and nutrients" ]
D
Relavent Documents: Document 0::: The blood circulatory system is a system of organs that includes the heart, blood vessels, and blood which is circulated throughout the entire body of a human or other vertebrate. It includes the cardiovascular system, or vascular system, that consists of the heart and blood vessels (from Greek kardia meaning heart, and from Latin vascula meaning vessels). The circulatory system has two divisions, a systemic circulation or circuit, and a pulmonary circulation or circuit. Some sources use the terms cardiovascular system and vascular system interchangeably with the circulatory system. The network of blood vessels are the great vessels of the heart including large elastic arteries, and large veins; other arteries, smaller arterioles, capillaries that join with venules (small veins), and other veins. The circulatory system is closed in vertebrates, which means that the blood never leaves the network of blood vessels. Some invertebrates such as arthropods have an open circulatory system. Diploblasts such as sponges, and comb jellies lack a circulatory system. Blood is a fluid consisting of plasma, red blood cells, white blood cells, and platelets; it is circulated around the body carrying oxygen and nutrients to the tissues and collecting and disposing of waste materials. Circulated nutrients include proteins and minerals and other components include hemoglobin, hormones, and gases such as oxygen and carbon dioxide. These substances provide nourishment, help the immune system to fight diseases, and help maintain homeostasis by stabilizing temperature and natural pH. In vertebrates, the lymphatic system is complementary to the circulatory system. The lymphatic system carries excess plasma (filtered from the circulatory system capillaries as interstitial fluid between cells) away from the body tissues via accessory routes that return excess fluid back to blood circulation as lymph. The lymphatic system is a subsystem that is essential for the functioning of the bloo Document 1::: The Starling principle holds that extracellular fluid movements between blood and tissues are determined by differences in hydrostatic pressure and colloid osmotic (oncotic) pressure between plasma inside microvessels and interstitial fluid outside them. The Starling Equation, proposed many years after the death of Starling, describes that relationship in mathematical form and can be applied to many biological and non-biological semipermeable membranes. The classic Starling principle and the equation that describes it have in recent years been revised and extended. Every day around 8 litres of water (solvent) containing a variety of small molecules (solutes) leaves the blood stream of an adult human and perfuses the cells of the various body tissues. Interstitial fluid drains by afferent lymph vessels to one of the regional lymph node groups, where around 4 litres per day is reabsorbed to the blood stream. The remainder of the lymphatic fluid is rich in proteins and other large molecules and rejoins the blood stream via the thoracic duct which empties into the great veins close to the heart. Filtration from plasma to interstitial (or tissue) fluid occurs in microvascular capillaries and post-capillary venules. In most tissues the micro vessels are invested with a continuous internal surface layer that includes a fibre matrix now known as the endothelial glycocalyx whose interpolymer spaces function as a system of small pores, radius circa 5 nm. 
Where the endothelial glycocalyx overlies a gap in the junction molecules that bind endothelial cells together (inter endothelial cell cleft), the plasma ultrafiltrate may pass to the interstitial space, leaving larger molecules reflected back into the plasma. A small number of continuous capillaries are specialised to absorb solvent and solutes from interstitial fluid back into the blood stream through fenestrations in endothelial cells, but the volume of solvent absorbed every day is small. Discontinuous capillaries as Document 2::: Body fluids, bodily fluids, or biofluids, sometimes body liquids, are liquids within the human body. In lean healthy adult men, the total body water is about 60% (60–67%) of the total body weight; it is usually slightly lower in women (52–55%). The exact percentage of fluid relative to body weight is inversely proportional to the percentage of body fat. A lean man, for example, has about 42 (42–47) liters of water in his body. The total body of water is divided into fluid compartments, between the intracellular fluid compartment (also called space, or volume) and the extracellular fluid (ECF) compartment (space, volume) in a two-to-one ratio: 28 (28–32) liters are inside cells and 14 (14–15) liters are outside cells. The ECF compartment is divided into the interstitial fluid volume – the fluid outside both the cells and the blood vessels – and the intravascular volume (also called the vascular volume and blood plasma volume) – the fluid inside the blood vessels – in a three-to-one ratio: the interstitial fluid volume is about 12 liters; the vascular volume is about 4 liters. The interstitial fluid compartment is divided into the lymphatic fluid compartment – about 2/3, or 8 (6–10) liters, and the transcellular fluid compartment (the remaining 1/3, or about 4 liters). The vascular volume is divided into the venous volume and the arterial volume; and the arterial volume has a conceptually useful but unmeasurable subcompartment called the effective arterial blood volume. Compartments by location intracellular fluid (ICF), which consist of cytosol and fluids in the cell nucleus Extracellular fluid Intravascular fluid (blood plasma) Interstitial fluid Lymphatic fluid (sometimes included in interstitial fluid) Transcellular fluid Health Body fluid is the term most often used in medical and health contexts. Modern medical, public health, and personal hygiene practices treat body fluids as potentially unclean. This is because they can be vectors for infectious Document 3::: The excretory system is a passive biological system that removes excess, unnecessary materials from the body fluids of an organism, so as to help maintain internal chemical homeostasis and prevent damage to the body. The dual function of excretory systems is the elimination of the waste products of metabolism and to drain the body of used up and broken down components in a liquid and gaseous state. In humans and other amniotes (mammals, birds and reptiles) most of these substances leave the body as urine and to some degree exhalation, mammals also expel them through sweating. Only the organs specifically used for the excretion are considered a part of the excretory system. In the narrow sense, the term refers to the urinary system. However, as excretion involves several functions that are only superficially related, it is not usually used in more formal classifications of anatomy or function. 
As most healthy functioning organs produce metabolic and other wastes, the entire organism depends on the function of the system. Breaking down of one of more of the systems is a serious health condition, for example kidney failure. Systems Urinary system The kidneys are large, bean-shaped organs which are present on each side of the vertebral column in the abdominal cavity. Humans have two kidneys and each kidney is supplied with blood from the renal artery. The kidneys remove from the blood the nitrogenous wastes such as urea, as well as salts and excess water, and excrete them in the form of urine. This is done with the help of millions of nephrons present in the kidney. The filtrated blood is carried away from the kidneys by the renal vein (or kidney vein). The urine from the kidney is collected by the ureter (or excretory tubes), one from each kidney, and is passed to the urinary bladder. The urinary bladder collects and stores the urine until urination. The urine collected in the bladder is passed into the external environment from the body through an opening called Document 4::: Osmoregulation is the active regulation of the osmotic pressure of an organism's body fluids, detected by osmoreceptors, to maintain the homeostasis of the organism's water content; that is, it maintains the fluid balance and the concentration of electrolytes (salts in solution which in this case is represented by body fluid) to keep the body fluids from becoming too diluted or concentrated. Osmotic pressure is a measure of the tendency of water to move into one solution from another by osmosis. The higher the osmotic pressure of a solution, the more water tends to move into it. Pressure must be exerted on the hypertonic side of a selectively permeable membrane to prevent diffusion of water by osmosis from the side containing pure water. Although there may be hourly and daily variations in osmotic balance, an animal is generally in an osmotic steady state over the long term. Organisms in aquatic and terrestrial environments must maintain the right concentration of solutes and amount of water in their body fluids; this involves excretion (getting rid of metabolic nitrogen wastes and other substances such as hormones that would be toxic if allowed to accumulate in the blood) through organs such as the skin and the kidneys. Regulators and conformers Two major types of osmoregulation are osmoconformers and osmoregulators. Osmoconformers match their body osmolarity to their environment actively or passively. Most marine invertebrates are osmoconformers, although their ionic composition may be different from that of seawater. In a strictly osmoregulating animal, the amounts of internal salt and water are held relatively constant in the face of environmental changes. It requires that intake and outflow of water and salts be equal over an extended period of time. Organisms that maintain an internal osmolarity different from the medium in which they are immersed have been termed osmoregulators. They tightly regulate their body osmolarity, maintaining constant internal c The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The vertebrate circulatory system enables blood to deliver ________ and remove wastes throughout the body. A. fluid and nutrients B. hydrogen and nutrients C. acid and nutrients D. oxygen and nutrients Answer:
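The Starling principle passage above describes fluid exchange as a balance of hydrostatic and colloid osmotic (oncotic) pressure differences; that balance is conventionally written as Jv = Kf[(Pc - Pi) - sigma(pi_c - pi_i)]. The short Python sketch below only illustrates the arithmetic; the function name and the numerical pressure values are assumptions chosen for illustration, not figures taken from the passage.

    # Classic Starling relation (sketch): Jv = Kf * ((Pc - Pi) - sigma * (pi_c - pi_i))
    # Positive Jv = net filtration out of the plasma; negative Jv = absorption back into it.
    def starling_filtration(kf, p_cap, p_if, sigma, op_cap, op_if):
        return kf * ((p_cap - p_if) - sigma * (op_cap - op_if))

    # Illustrative values only: pressures in mmHg, kf in mL/min per mmHg.
    print(starling_filtration(kf=0.01, p_cap=35.0, p_if=-2.0, sigma=0.9, op_cap=25.0, op_if=3.0))

A positive result corresponds to net filtration into the interstitium, consistent with the passage's statement that around 8 litres of solvent leave the blood stream each day and largely return via the lymphatics.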
sciq-6887
multiple_choice
What is the atomic number?
[ "number of protons", "Number of electrons", "Number of neutrons", "Speed of electrons" ]
A
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: The elementary charge, usually denoted by , is a fundamental physical constant, defined as the electric charge carried by a single proton or, equivalently, the magnitude of the negative electric charge carried by a single electron, which has charge −1 . In the SI system of units, the value of the elementary charge is exactly defined as  =  coulombs, or 160.2176634 zeptocoulombs (zC). Since the 2019 redefinition of SI base units, the seven SI base units are defined by seven fundamental physical constants, of which the elementary charge is one. In the centimetre–gram–second system of units (CGS), the corresponding quantity is . Robert A. Millikan and Harvey Fletcher's oil drop experiment first directly measured the magnitude of the elementary charge in 1909, differing from the modern accepted value by just 0.6%. Under assumptions of the then-disputed atomic theory, the elementary charge had also been indirectly inferred to ~3% accuracy from blackbody spectra by Max Planck in 1901 and (through the Faraday constant) at order-of-magnitude accuracy by Johann Loschmidt's measurement of the Avogadro number in 1865. As a unit In some natural unit systems, such as the system of atomic units, e functions as the unit of electric charge. The use of elementary charge as a unit was promoted by George Johnstone Stoney in 1874 for the first system of natural units, called Stoney units. Later, he proposed the name electron for this unit. 
At the time, the particle we now call the electron was not yet discovered and the difference between the particle electron and the unit of charge electron was still blurred. Later, the name electron was assigned to the particle and the unit of charge e lost its name. However, the unit of energy electronvolt (eV) is a remnant of the fact that the elementary charge was once called electron. In other natural unit systems, the unit of charge is defined as with the result that where is the fine-structure constant, is the speed of light, is Document 2::: In astrophysics, the Eddington number, , is the number of protons in the observable universe. Eddington originally calculated it as about ; current estimates make it approximately . The term is named for British astrophysicist Arthur Eddington, who in 1940 was the first to propose a value of and to explain why this number might be important for physical cosmology and the foundations of physics. History Eddington argued that the value of the fine-structure constant, α, could be obtained by pure deduction. He related α to the Eddington number, which was his estimate of the number of protons in the universe. This led him in 1929 to conjecture that α was exactly 1/136. He devised a "proof" that NEdd = 136 × 2256, or about 1.57×1079. Other physicists did not adopt this conjecture and did not accept his argument. In the late 1930s, the best experimental value of the fine-structure constant, α, was approximately 1/137. Eddington then argued, from aesthetic and numerological considerations, that α should be exactly 1/137. Current estimates of NEdd point to a value of about . These estimates assume that all matter can be taken to be hydrogen and require assumed values for the number and size of galaxies and stars in the universe. Attempts to find a mathematical basis for this dimensionless constant have continued up to the present time. During a course of lectures that he delivered in 1938 as Tarner Lecturer at Trinity College, Cambridge, Eddington averred that: This large number was soon named the "Eddington number". Shortly thereafter, improved measurements of α yielded values closer to 1/137, whereupon Eddington changed his "proof" to show that α had to be exactly 1/137. Recent theory The most precise value of α (obtained experimentally in 2012) is: Consequently, no reliable source any longer maintains that α is the reciprocal of an integer. Nor does anyone take seriously a mathematical relationship between α and NEdd. On possible roles for NEdd in contempor Document 3::: The atomic number of a material exhibits a strong and fundamental relationship with the nature of radiation interactions within that medium. There are numerous mathematical descriptions of different interaction processes that are dependent on the atomic number, . When dealing with composite media (i.e. a bulk material composed of more than one element), one therefore encounters the difficulty of defining . An effective atomic number in this context is equivalent to the atomic number but is used for compounds (e.g. water) and mixtures of different materials (such as tissue and bone). This is of most interest in terms of radiation interaction with composite materials. For bulk interaction properties, it can be useful to define an effective atomic number for a composite medium and, depending on the context, this may be done in different ways. 
Such methods include (i) a simple mass-weighted average, (ii) a power-law type method with some (very approximate) relationship to radiation interaction properties or (iii) methods involving calculation based on interaction cross sections. The latter is the most accurate approach (Taylor 2012), and the other more simplified approaches are often inaccurate even when used in a relative fashion for comparing materials. In many textbooks and scientific publications, the following - simplistic and often dubious - sort of method is employed. One such proposed formula for the effective atomic number, , is as follows: where is the fraction of the total number of electrons associated with each element, and is the atomic number of each element. An example is that of water (H2O), made up of two hydrogen atoms (Z=1) and one oxygen atom (Z=8), the total number of electrons is 1+1+8 = 10, so the fraction of electrons for the two hydrogens is (2/10) and for the one oxygen is (8/10). So the for water is: The effective atomic number is important for predicting how photons interact with a substance, as certain types of photon interactions Document 4::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the atomic number? A. number of protons B. Number of electrons C. Number of neutrons D. Speed of electrons Answer:
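The effective atomic number passage above works the example of water from electron fractions of 2/10 (hydrogen) and 8/10 (oxygen) but gives the power-law formula only in words. A minimal Python sketch of that style of calculation follows; the 2.94 exponent is the commonly quoted Mayneord-type value and is an assumption here, not something stated in the passage.

    # Power-law effective atomic number (sketch): Z_eff = (sum_i f_i * Z_i**p) ** (1/p)
    # f_i: fraction of all electrons contributed by element i; Z_i: its atomic number.
    def effective_atomic_number(fractions_and_z, p=2.94):
        return sum(f * z ** p for f, z in fractions_and_z) ** (1.0 / p)

    # Water, H2O: 2 hydrogens (Z=1) and 1 oxygen (Z=8), 10 electrons in total.
    water = [(2 / 10, 1), (8 / 10, 8)]
    print(round(effective_atomic_number(water), 2))  # roughly 7.4 with this exponent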
sciq-2142
multiple_choice
What is the term for the force exerted by circulating blood on the walls of blood vessels?
[ "blood energy", "circulation pressure", "heart pressure", "blood pressure" ]
D
Relavent Documents: Document 0::: Compliance is the ability of a hollow organ (vessel) to distend and increase volume with increasing transmural pressure or the tendency of a hollow organ to resist recoil toward its original dimensions on application of a distending or compressing force. It is the reciprocal of "elastance", hence elastance is a measure of the tendency of a hollow organ to recoil toward its original dimensions upon removal of a distending or compressing force. Blood vessels The terms elastance and compliance are of particular significance in cardiovascular physiology and respiratory physiology. In compliance, an increase in volume occurs in a vessel when the pressure in that vessel is increased. The tendency of the arteries and veins to stretch in response to pressure has a large effect on perfusion and blood pressure. This physically means that blood vessels with a higher compliance deform easier than lower compliance blood vessels under the same pressure and volume conditions. Venous compliance is approximately 30 times larger than arterial compliance. Compliance is calculated using the following equation, where ΔV is the change in volume (mL), and ΔP is the change in pressure (mmHg): Physiologic compliance is generally in agreement with the above and adds dP/dt as a common academic physiologic measurement of both pulmonary and cardiac tissues. Adaptation of equations initially applied to rubber and latex allow modeling of the dynamics of pulmonary and cardiac tissue compliance. Veins have a much higher compliance than arteries (largely due to their thinner walls.) Veins which are abnormally compliant can be associated with edema. Pressure stockings are sometimes used to externally reduce compliance, and thus keep blood from pooling in the legs. Vasodilation and vasoconstriction are complex phenomena; they are functions not merely of the fluid mechanics of pressure and tissue elasticity but also of active homeostatic regulation with hormones and cell signaling, in which Document 1::: Hemodynamics or haemodynamics are the dynamics of blood flow. The circulatory system is controlled by homeostatic mechanisms of autoregulation, just as hydraulic circuits are controlled by control systems. The hemodynamic response continuously monitors and adjusts to conditions in the body and its environment. Hemodynamics explains the physical laws that govern the flow of blood in the blood vessels. Blood flow ensures the transportation of nutrients, hormones, metabolic waste products, oxygen, and carbon dioxide throughout the body to maintain cell-level metabolism, the regulation of the pH, osmotic pressure and temperature of the whole body, and the protection from microbial and mechanical harm. Blood is a non-Newtonian fluid, and is most efficiently studied using rheology rather than hydrodynamics. Because blood vessels are not rigid tubes, classic hydrodynamics and fluids mechanics based on the use of classical viscometers are not capable of explaining haemodynamics. The study of the blood flow is called hemodynamics, and the study of the properties of the blood flow is called hemorheology. Blood Blood is a complex liquid. Blood is composed of plasma and formed elements. The plasma contains 91.5% water, 7% proteins and 1.5% other solutes. The formed elements are platelets, white blood cells, and red blood cells. The presence of these formed elements and their interaction with plasma molecules are the main reasons why blood differs so much from ideal Newtonian fluids. 
Viscosity of plasma Normal blood plasma behaves like a Newtonian fluid at physiological rates of shear. Typical values for the viscosity of normal human plasma at 37 °C is 1.4 mN·s/m2. The viscosity of normal plasma varies with temperature in the same way as does that of its solvent water; a 5 °C increase of temperature in the physiological range reduces plasma viscosity by about 10%. Osmotic pressure of plasma The osmotic pressure of solution is determined by the number of particles present Document 2::: The blood circulatory system is a system of organs that includes the heart, blood vessels, and blood which is circulated throughout the entire body of a human or other vertebrate. It includes the cardiovascular system, or vascular system, that consists of the heart and blood vessels (from Greek kardia meaning heart, and from Latin vascula meaning vessels). The circulatory system has two divisions, a systemic circulation or circuit, and a pulmonary circulation or circuit. Some sources use the terms cardiovascular system and vascular system interchangeably with the circulatory system. The network of blood vessels are the great vessels of the heart including large elastic arteries, and large veins; other arteries, smaller arterioles, capillaries that join with venules (small veins), and other veins. The circulatory system is closed in vertebrates, which means that the blood never leaves the network of blood vessels. Some invertebrates such as arthropods have an open circulatory system. Diploblasts such as sponges, and comb jellies lack a circulatory system. Blood is a fluid consisting of plasma, red blood cells, white blood cells, and platelets; it is circulated around the body carrying oxygen and nutrients to the tissues and collecting and disposing of waste materials. Circulated nutrients include proteins and minerals and other components include hemoglobin, hormones, and gases such as oxygen and carbon dioxide. These substances provide nourishment, help the immune system to fight diseases, and help maintain homeostasis by stabilizing temperature and natural pH. In vertebrates, the lymphatic system is complementary to the circulatory system. The lymphatic system carries excess plasma (filtered from the circulatory system capillaries as interstitial fluid between cells) away from the body tissues via accessory routes that return excess fluid back to blood circulation as lymph. The lymphatic system is a subsystem that is essential for the functioning of the bloo Document 3::: In biomechanics, the Moens–Korteweg equation models the relationship between wave speed or pulse wave velocity (PWV) and the incremental elastic modulus of the arterial wall or its distensibility. The equation was derived independently by Adriaan Isebree Moens and Diederik Korteweg. It is derived from Newton's second law of motion, using some simplifying assumptions, and reads: The Moens–Korteweg equation states that PWV is proportional to the square root of the incremental elastic modulus, (Einc), of the vessel wall given constant ratio of wall thickness, h, to vessel radius, r, and blood density, ρ, assuming that the artery wall is isotropic and experiences isovolumetric change with pulse pressure. Document 4::: Windkessel effect is a term used in medicine to account for the shape of the arterial blood pressure waveform in terms of the interaction between the stroke volume and the compliance of the aorta and large elastic arteries (Windkessel vessels) and the resistance of the smaller arteries and arterioles. 
Windkessel when loosely translated from German to English means 'air chamber', but is generally taken to imply an elastic reservoir. The walls of large elastic arteries (e.g. aorta, common carotid, subclavian, and pulmonary arteries and their larger branches) contain elastic fibers, formed of elastin. These arteries distend when the blood pressure rises during systole and recoil when the blood pressure falls during diastole. Since the rate of blood entering these elastic arteries exceeds that leaving them via the peripheral resistance, there is a net storage of blood in the aorta and large arteries during systole, which discharges during diastole. The compliance (or distensibility) of the aorta and large elastic arteries is therefore analogous to a capacitor (employing the hydraulic analogy); to put it another way, these arteries collectively act as a hydraulic accumulator. The Windkessel effect helps in damping the fluctuation in blood pressure (pulse pressure) over the cardiac cycle and assists in the maintenance of organ perfusion during diastole when cardiac ejection ceases. The idea of the Windkessel was alluded to by Giovanni Borelli, although Stephen Hales articulated the concept more clearly and drew the analogy with an air chamber used in fire engines in the 18th century. Otto Frank, an influential German physiologist, developed the concept and provided a firm mathematical foundation. Frank's model is sometimes called a two-element Windkessel to distinguish it from more recent and more elaborate Windkessel models (e.g. three- or four-element and non-linear Windkessel models). Model types Modeling of a Windkessel Windkessel physiology remains a relevant y The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the term for the force exerted by circulating blood on the walls of blood vessels? A. blood energy B. circulation pressure C. heart pressure D. blood pressure Answer:
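Two quantitative relationships quoted above lend themselves to a quick numerical check: vessel compliance, defined from a volume change and the corresponding pressure change, and the Moens-Korteweg pulse wave velocity, PWV = sqrt(E_inc * h / (2 * r * rho)). The sketch below is illustrative only; the wall modulus, wall thickness, radius and blood density are assumed round numbers, not values taken from the passages.

    import math

    # Compliance of a vessel segment (sketch): C = delta_V / delta_P, in mL per mmHg.
    def compliance(delta_volume_ml, delta_pressure_mmhg):
        return delta_volume_ml / delta_pressure_mmhg

    # Moens-Korteweg pulse wave velocity (sketch): PWV = sqrt(E_inc * h / (2 * r * rho)).
    def moens_korteweg_pwv(e_inc_pa, wall_thickness_m, radius_m, blood_density_kg_m3=1060.0):
        return math.sqrt(e_inc_pa * wall_thickness_m / (2.0 * radius_m * blood_density_kg_m3))

    print(compliance(delta_volume_ml=2.0, delta_pressure_mmhg=40.0))  # 0.05 mL/mmHg
    print(moens_korteweg_pwv(e_inc_pa=4e5, wall_thickness_m=1.5e-3, radius_m=1.2e-2))  # about 4.9 m/s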
sciq-2167
multiple_choice
Each vertebral body has a large hole in the center through which the nerves of what pass?
[ "steering cord", "layers cord", "Brain Cord", "spinal cord" ]
D
Relavent Documents: Document 0::: The following diagram is provided as an overview of and topical guide to the human nervous system: Human nervous system – the part of the human body that coordinates a person's voluntary and involuntary actions and transmits signals between different parts of the body. The human nervous system consists of two main parts: the central nervous system (CNS) and the peripheral nervous system (PNS). The CNS contains the brain and spinal cord. The PNS consists mainly of nerves, which are long fibers that connect the CNS to every other part of the body. The PNS includes motor neurons, mediating voluntary movement; the autonomic nervous system, comprising the sympathetic nervous system and the parasympathetic nervous system and regulating involuntary functions; and the enteric nervous system, a semi-independent part of the nervous system whose function is to control the gastrointestinal system. Evolution of the human nervous system Evolution of nervous systems Evolution of human intelligence Evolution of the human brain Paleoneurology Some branches of science that study the human nervous system Neuroscience Neurology Paleoneurology Central nervous system The central nervous system (CNS) is the largest part of the nervous system and includes the brain and spinal cord. Spinal cord Brain Brain – center of the nervous system. Outline of the human brain List of regions of the human brain Principal regions of the vertebrate brain: Peripheral nervous system Peripheral nervous system (PNS) – nervous system structures that do not lie within the CNS. Sensory system A sensory system is a part of the nervous system responsible for processing sensory information. A sensory system consists of sensory receptors, neural pathways, and parts of the brain involved in sensory perception. List of sensory systems Sensory neuron Perception Visual system Auditory system Somatosensory system Vestibular system Olfactory system Taste Pain Components of the nervous system Neuron I Document 1::: The lumbar nerves are the five pairs of spinal nerves emerging from the lumbar vertebrae. They are divided into posterior and anterior divisions. Structure The lumbar nerves are five spinal nerves which arise from either side of the spinal cord below the thoracic spinal cord and above the sacral spinal cord. They arise from the spinal cord between each pair of lumbar spinal vertebrae and travel through the intervertebral foramina. The nerves then split into an anterior branch, which travels forward, and a posterior branch, which travels backwards and supplies the area of the back. Posterior divisions The middle divisions of the posterior branches run close to the articular processes of the vertebrae and end in the multifidus muscle. The outer branches supply the erector spinae muscles. The nerves give off branches to the skin. These pierce the aponeurosis of the greater trochanter. Anterior divisions The anterior divisions of the lumbar nerves () increase in size from above downward. The anterior divisions communicate with the sympathetic trunk. Near the origin of the divisions, they are joined by gray rami communicantes from the lumbar ganglia of the sympathetic trunk. These rami consist of long, slender branches which accompany the lumbar arteries around the sides of the vertebral bodies, beneath the Psoas major. Their arrangement is somewhat irregular: one ganglion may give rami to two lumbar nerves, or one lumbar nerve may receive rami (branches) from two ganglia. 
The first and second, and sometimes the third and fourth lumbar nerves are each connected with the lumbar part of the sympathetic trunk by a white ramus communicans. The nerves pass obliquely outward behind the Psoas major, or between its fasciculi, distributing filaments to it and the Quadratus lumborum. As the nerves travel forward, they create nervous plexuses. The first three lumbar nerves, and the greater part of the fourth together form the lumbar plexus. The smaller part of the fourth Document 2::: A motor nerve is a nerve that transmits motor signals from the central nervous system (CNS) to the muscles of the body. This is different from the motor neuron, which includes a cell body and branching of dendrites, while the nerve is made up of a bundle of axons. Motor nerves act as efferent nerves which carry information out from the CNS to muscles, as opposed to afferent nerves (also called sensory nerves), which transfer signals from sensory receptors in the periphery to the CNS. Efferent nerves can also connect to glands or other organs/issues instead of muscles (and so motor nerves are not equivalent to efferent nerves). In addition, there are nerves that serve as both sensory and motor nerves called mixed nerves. Structure and function Motor nerve fibers transduce signals from the CNS to peripheral neurons of proximal muscle tissue. Motor nerve axon terminals innervate skeletal and smooth muscle, as they are heavily involved in muscle control. Motor nerves tend to be rich in acetylcholine vesicles because the motor nerve, a bundle of motor nerve axons that deliver motor signals and signal for movement and motor control. Calcium vesicles reside in the axon terminals of the motor nerve bundles. The high calcium concentration outside of presynaptic motor nerves increases the size of end-plate potentials (EPPs). Protective tissues Within motor nerves, each axon is wrapped by the endoneurium, which is a layer of connective tissue that surrounds the myelin sheath. Bundles of axons are called fascicles, which are wrapped in perineurium. All of the fascicles wrapped in the perineurium are wound together and wrapped by a final layer of connective tissue known as the epineurium. These protective tissues defend nerves from injury, pathogens and help to maintain nerve function. Layers of connective tissue maintain the rate at which nerves conduct action potentials. Spinal cord exit Most motor pathways originate in the motor cortex of the brain. Signals run down th Document 3::: The middle meningeal nerve (meningeal or dural branch) is given off from the maxillary nerve (CN V2) directly after its origin from the trigeminal ganglion, before CN V2 enters the foramen rotundum. It accompanies the middle meningeal artery and vein as the artery and vein enter the cranium through the foramen spinosum and supplies the dura mater. Additional images Document 4::: The costocervical trunk arises from the upper and back part of the second part of subclavian artery, behind the scalenus anterior on the right side, and medial to that muscle on the left side. Passing backward, it splits into the deep cervical artery and the supreme intercostal artery (highest intercostal artery), which descends behind the pleura in front of the necks of the first and second ribs, and anastomoses with the first aortic intercostal (3rd posterior intercostal artery). 
As it crosses the neck of the first rib it lies medial to the anterior division of the first thoracic nerve, and lateral to the first thoracic ganglion of the sympathetic trunk. In the first intercostal space, it gives off a branch which is distributed in a manner similar to the distribution of the aortic intercostals. The branch for the second intercostal space usually joins with one from the highest aortic intercostal artery. This branch is not constant, but is more commonly found on the right side; when absent, its place is supplied by an intercostal branch from the aorta. Each intercostal gives off a posterior branch which goes to the posterior vertebral muscles, and sends a small spinal branch through the corresponding intervertebral foramen to the medulla spinalis and its membranes. Branches Deep cervical artery supreme intercostal artery The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Each vertebral body has a large hole in the center through which the nerves of what pass? A. steering cord B. layers cord C. Brain Cord D. spinal cord Answer:
ai2_arc-346
multiple_choice
Young robins build the same kinds of nests their parents build even if the young birds have never seen their parents build a nest. This is an example of
[ "a learned behavior.", "an inherited behavior.", "a physical characteristic.", "acquired characteristic." ]
B
Relavent Documents: Document 0::: The difficulty of defining or measuring intelligence in non-human animals makes the subject difficult to study scientifically in birds. In general, birds have relatively large brains compared to their head size. The visual and auditory senses are well developed in most species, though the tactile and olfactory senses are well realized only in a few groups. Birds communicate using visual signals as well as through the use of calls and song. The testing of intelligence in birds is therefore usually based on studying responses to sensory stimuli. The corvids (ravens, crows, jays, magpies, etc.) and psittacines (parrots, macaws, and cockatoos) are often considered the most intelligent birds, and are among the most intelligent animals in general. Pigeons, finches, domestic fowl, and birds of prey have also been common subjects of intelligence studies. Studies Bird intelligence has been studied through several attributes and abilities. Many of these studies have been on birds such as quail, domestic fowl, and pigeons kept under captive conditions. It has, however, been noted that field studies have been limited, unlike those of the apes. Birds in the crow family (corvids) as well as parrots (psittacines) have been shown to live socially, have long developmental periods, and possess large forebrains, all of which have been hypothesized to allow for greater cognitive abilities. Counting has traditionally been considered an ability that shows intelligence. Anecdotal evidence from the 1960s has suggested that crows can count up to 3. Researchers need to be cautious, however, and ensure that birds are not merely demonstrating the ability to subitize, or count a small number of items quickly. Some studies have suggested that crows may indeed have a true numerical ability. It has been shown that parrots can count up to 6. Cormorants used by Chinese fishermen were given every eighth fish as a reward, and found to be able to keep count up to 7. E.H. Hoh wrote in Natural Histo Document 1::: Empathy in chickens is the ability of a chicken to understand and share the feelings of another chicken. The Biotechnology and Biological Sciences Research Council's (BBSRC) Animal Welfare Initiative defines and recognizes that "...hens possess a fundamental capacity to empathise..." These empathetic responses in animals are well documented and are usually discussed along with issues related to cognition. The difference between animal cognition and animal emotion is recognized by ethicists. The specific emotional attribute of empathy in chickens has not been only investigated in terms of its existence but it has applications that have resulted in the designed reduction of stress in farm-raised poultry. Definition The difference between animal cognition and animal emotion is recognized by ethicists. Animal cognition covers all aspects related to the thought processes in animals. Though the topics related to cognition such as self-recognition, memory, other emotions and problem-solving have been investigated, the ability to share the emotional state of another has now been established in hens. Chickens have the basic foundations of emotional empathy. Empathy is sometimes regarded as a form of emotional intelligence and is demonstrated when hens display signs of anxiety when they observed their chicks in distressful situations. The hens have been said to "feel their chicks' pain" and to "be affected by, and share, the emotional state of another." 
Scientific evidence A study funded by the BBSRC and published in 2011 was the first to demonstrate that chickens possess empathy and the first study to use both behavioral and physiological methods to measure these traits in birds. Chicks were exposed to a puff of air, which they find mildly distressing. During the exposure, their mother's behaviour and physiological responses were monitored non-invasively. The hens altered their behaviour by decreased preening, increased alertness, and an increased numbers of vocalisati Document 2::: Passerine birds produce song through the vocal organ, the syrinx, which is composed of bilaterally symmetric halves located where the trachea separates into the two bronchi. Using endoscopic techniques, it has been observed that song is produced by air passing between a set of medial and lateral labia on each side of the syrinx. Song is produced bilaterally, in both halves, through each separate set of labia unless air is prevented from flowing through one side of the syrinx. Birds regulate the airflow through the syrinx with muscles—M. syringealis dorsalis and M. tracheobronchialis dorsalis—that control the medial and lateral labia in the syrinx, whose action may close off airflow. Song may, hence, be produced unilaterally through one side of the syrinx when the labia are closed in the opposite side. Early experiments discover lateralization Lateral dominance of the hypoglossal nerve conveying messages from the brain to the syrinx was first observed in the 1970s. This lateral dominance was determined in a breed of canary, the waterschlager canary, bred for its long and complex song, by lesioning the ipsilateral tracheosyringeal branch of the hypoglossal nerve, disabling either the left or right syrinx. The numbers of song elements in the birds’ repertoires were greatly attenuated when the left side was cut, but only modestly attenuated when the right side was disabled, indicating left syringeal dominance of song production in these canaries. Similar lateralized effects have been observed in other species such as the white-crowned sparrow (Zonotrichia leucophrys), the Java sparrow (Lonchura oryzivora) and the zebra finch (Taeniopygia guttata), which is right-side dominant. However, denervation in these birds does not entirely silence the affected syllables but creates qualitative changes in phonology and frequency. Respiratory control and neurophysiology In waterslager canaries, which produce most syllables using the left syrinx, as soon as a unilaterally produced Document 3::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. 
Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 4::: The cognitive ecology of individual recognition has been studied in many species, especially in primates or other mammalian species that exhibit complex social behaviours, but comparatively little research has been done on colonial birds. Colonial birds live in dense colonies in which many individuals interact with each other daily. For colonial birds, being able to identify and recognize individuals can be a crucial skill. Sociality and brain size Individual recognition is one of the most basic forms of social cognition. If we were to define individual recognition, it would imply that a given individual has the capacity to discriminate a familiar individual from another one at any given time. It is believed that in many species, group size is often a representation of social complexity, with higher social complexity demanding higher cognitive capabilities. This hypothesis is also known as the "social brain hypothesis" and has been supported by many researchers. The logic behind this hypothesis is based on the principle that larger group size will require a higher degree of complexity in their interactions. Many studies have looked at the effect of sociality on the brain development, mostly focussing on non-human primate species. In primates, it has been shown that relative brain size, when controlling for the size of the species and the phylogeny, seemed to correlate with the size of the social group. These results allowed for a direct correlation between sociality and cognition. However, when reproducing such experiments in non-primate species, like with reptiles, birds and even other mammalian species, the correlation between brain size and social group size does not seems to exist. A study done on mountain chickadees looking at the impact of sociality on the hippocampus size as well as on neurogenesis found no evidence of change related to group size, therefore rejecting the "social brain hypothesis" in birds. Further research looking at bird cognitive ecolo The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Young robins build the same kinds of nests their parents build even if the young birds have never seen their parents build a nest. This is an example of A. a learned behavior. B. 
an inherited behavior. C. a physical characteristic. D. acquired characteristic. Answer:
sciq-3742
multiple_choice
What is used for cooling detectors of infrared telescopes?
[ "carbon dioxide", "liquid nitrogen", "hand nitrogen", "material nitrogen" ]
B
Relavent Documents: Document 0::: Astronomy education or astronomy education research (AER) refers both to the methods currently used to teach the science of astronomy and to an area of pedagogical research that seeks to improve those methods. Specifically, AER includes systematic techniques honed in science and physics education to understand what and how students learn about astronomy and determine how teachers can create more effective learning environments. Education is important to astronomy as it impacts both the recruitment of future astronomers and the appreciation of astronomy by citizens and politicians who support astronomical research. Astronomy has been taught throughout much of recorded human history, and has practical application in timekeeping and navigation. Teaching astronomy contributes to an understanding of physics and the origin of the world around us, a shared cultural background, and a sense of wonder and exploration. It includes education of the general public through planetariums, books, and instructive presentations, plus programs and tools for amateur astronomy, and University-level degree programs for professional astronomers. Astronomy organizations provide educational functions and societies in about 100 nation states around the world. In schools, particularly at the collegiate level, astronomy is aligned with physics and the two are often combined to form a Department of Physics and Astronomy. Some parts of astronomy education overlap with physics education, however, astronomy education has its own arenas, practitioners, journals, and research. This can be demonstrated in the identified 20-year lag between the emergence of AER and physics education research. The body of research in this field are available through electronic sources such as the Searchable Annotated Bibliography of Education Research (SABER) and the American Astronomical Society's database of the contents of their journal "Astronomy Education Review" (see link below). The National Aeronautics and Document 1::: The Inverness Campus is an area in Inverness, Scotland. 5.5 hectares of the site have been designated as an enterprise area for life sciences by the Scottish Government. This designation is intended to encourage research and development in the field of life sciences, by providing incentives to locate at the site. The enterprise area is part of a larger site, over 200 acres, which will house Inverness College, Scotland's Rural College (SRUC), the University of the Highlands and Islands, a health science centre and sports and other community facilities. The purpose built research hub will provide space for up to 30 staff and researchers, allowing better collaboration. The Highland Science Academy will be located on the site, a collaboration formed by Highland Council, employers and public bodies. The academy will be aimed towards assisting young people to gain the necessary skills to work in the energy, engineering and life sciences sectors. History The site was identified in 2006. Work started to develop the infrastructure on the site in early 2012. A virtual tour was made available in October 2013 to help mark Doors Open Day. The construction had reached halfway stage in May 2014, meaning that it is on track to open doors to receive its first students in August 2015. In May 2014, work was due to commence on a building designed to provide office space and laboratories as part of the campus's "life science" sector. Morrison Construction have been appointed to undertake the building work. 
Scotland's Rural College (SRUC) will be able to relocate their Inverness-based activities to the Campus. SRUC's research centre for Comparative Epidemiology and Medicine, and Agricultural Business Consultancy services could co-locate with UHI where their activities have complementary themes. By the start of 2017, there were more than 600 people working at the site. In June 2021, a new bridge opened connecting Inverness Campus to Inverness Shopping Park. It crosses the Aberdeen Document 2::: Integrated Software for Imagers and Spectrometers (Isis) is a specialized software package developed by the USGS to process images and spectra collected by current and past NASA planetary missions sent to Earth's Moon, Mars, Jupiter, Saturn, and other solar system bodies. History The history of ISIS began in 1971 at the United States Geological Survey (USGS) in Flagstaff, Arizona. Isis was developed in 1989, primarily to support the Galileo NIMS instrument. It contains standard image processing capabilities (such as image algebra, filters, statistics) for both 2D images and 3D data cubes, as well as mission-specific data processing capabilities and cartographic rendering functions. Raster data format name Family of related formats that are used by the USGS Planetary Cartography group to store and distribute planetary imagery data. PDS, Planetary Data System ISIS2, USGS Astrogeology Isis cube (Version 2) ISIS3, USGS Astrogeology ISIS Cube (Version 3) See also Ames Stereo Pipeline Document 3::: The Radio Neutrino Observatory Greenland (or RNO-G) is a neutrino observatory deployed near Summit Camp on top of the Greenland ice sheet. The goal of the RNO-G experiment is detecting ultra-high energy neutrinos and estimating their flux. These particles could help to better understand the most violent events in the universe, including but not limited to active galactic nuclei (AGN) and gamma ray bursts (GRB). A neutrino detection by RNO-G would also extend the energy range at which neutrinos can be used for multi-messenger astronomy. Detector Layout Located at above sea level, the detector array is planned to consist of 35 station. By 2022 seven stations have been deployed and are taking data. Each station consists of three in-ice strings at 100m depth to measure particle cascades in ice induced by neutrinos and other particles and a surface component that is also sensitive to cosmic rays. The stations operate autonomous and are powered by renewable energies, such as solar panels and wind turbines. The communication is wireless via LTE. Detection principle An event view from simulations for RNO-G. The neutrino induced particle cascade creates radio emission via the Askaryan effect. This is strongest at the Cherenkov angle at 56°, here shown as a red cone. The radio signal will propagate to the detector according to the ice density (direct and reflected). On the right shown are signals in the surface antennas (upper panel), the reconstruction antennas (middle) and the phased array trigger (lower panel). See also Other Radio Neutrino Experiments: Radio Ice Cherenkov Experiment (RICE) Askaryan Radio Array (ARA) Antarctic Ross Ice-Shelf Antenna Neutrino Array (ARIANNA) Antarctic Impulsive Transient Antenna (ANITA) Document 4::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. 
They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is used for cooling detectors of infrared telescopes? A. carbon dioxide B. liquid nitrogen C. hand nitrogen D. material nitrogen Answer:
sciq-9121
multiple_choice
What are chemical reactions represented by?
[ "carbon equations", "liquid equations", "chemical equations", "nuclear equations" ]
C
Relavent Documents: Document 0::: A chemical equation is the symbolic representation of a chemical reaction in the form of symbols and chemical formulas. The reactant entities are given on the left-hand side and the product entities are on the right-hand side with a plus sign between the entities in both the reactants and the products, and an arrow that points towards the products to show the direction of the reaction. The chemical formulas may be symbolic, structural (pictorial diagrams), or intermixed. The coefficients next to the symbols and formulas of entities are the absolute values of the stoichiometric numbers. The first chemical equation was diagrammed by Jean Beguin in 1615. Structure A chemical equation (see an example below) consists of a list of reactants (the starting substances) on the left-hand side, an arrow symbol, and a list of products (substances formed in the chemical reaction) on the right-hand side. Each substance is specified by its chemical formula, optionally preceded by a number called stoichiometric coefficient. The coefficient specifies how many entities (e.g. molecules) of that substance are involved in the reaction on a molecular basis. If not written explicitly, the coefficient is equal to 1. Multiple substances on any side of the equation are separated from each other by a plus sign. As an example, the equation for the reaction of hydrochloric acid with sodium can be denoted: Given the formulas are fairly simple, this equation could be read as "two H-C-L plus two N-A yields two N-A-C-L and H two." Alternately, and in general for equations involving complex chemicals, the chemical formulas are read using IUPAC nomenclature, which could verbalise this equation as "two hydrochloric acid molecules and two sodium atoms react to form two formula units of sodium chloride and a hydrogen gas molecule." Reaction types Different variants of the arrow symbol are used to denote the type of a reaction: {| | style="text-align: center; padding-right: 0.5em;" | -> || net forwa Document 1::: In chemistry, a reaction coordinate is an abstract one-dimensional coordinate chosen to represent progress along a reaction pathway. Where possible it is usually a geometric parameter that changes during the conversion of one or more molecular entities, such as bond length or bond angle. For example, in the homolytic dissociation of molecular hydrogen, an apt choice would be the coordinate corresponding to the bond length. Non-geometric parameters such as bond order are also used, but such direct representation of the reaction process can be difficult, especially for more complex reactions. In molecular dynamics simulations, a reaction coordinate is called a collective variable. A reaction coordinate parametrises reaction process at the level of the molecular entities involved. It differs from extent of reaction, which measures reaction progress in terms of the composition of the reaction system. (Free) energy is often plotted against reaction coordinate(s) to demonstrate in schematic form the potential energy profile (an intersection of a potential energy surface) associated with the reaction. In the formalism of transition-state theory the reaction coordinate for each reaction step is one of a set of curvilinear coordinates obtained from the conventional coordinates for the reactants, and leads smoothly among configurations, from reactants to products via the transition state. 
It is typically chosen to follow the path defined by potential energy gradient – shallowest ascent/steepest descent – from reactants to products. Notes and references Physical chemistry Quantum chemistry Theoretical chemistry Computational chemistry Molecular physics Chemical kinetics Document 2::: An elementary reaction is a chemical reaction in which one or more chemical species react directly to form products in a single reaction step and with a single transition state. In practice, a reaction is assumed to be elementary if no reaction intermediates have been detected or need to be postulated to describe the reaction on a molecular scale. An apparently elementary reaction may be in fact a stepwise reaction, i.e. a complicated sequence of chemical reactions, with reaction intermediates of variable lifetimes. In a unimolecular elementary reaction, a molecule dissociates or isomerises to form the products(s) At constant temperature, the rate of such a reaction is proportional to the concentration of the species In a bimolecular elementary reaction, two atoms, molecules, ions or radicals, and , react together to form the product(s) The rate of such a reaction, at constant temperature, is proportional to the product of the concentrations of the species and The rate expression for an elementary bimolecular reaction is sometimes referred to as the Law of Mass Action as it was first proposed by Guldberg and Waage in 1864. An example of this type of reaction is a cycloaddition reaction. This rate expression can be derived from first principles by using collision theory for ideal gases. For the case of dilute fluids equivalent results have been obtained from simple probabilistic arguments. According to collision theory the probability of three chemical species reacting simultaneously with each other in a termolecular elementary reaction is negligible. Hence such termolecular reactions are commonly referred as non-elementary reactions and can be broken down into a more fundamental set of bimolecular reactions, in agreement with the law of mass action. It is not always possible to derive overall reaction schemes, but solutions based on rate equations are often possible in terms of steady-state or Michaelis-Menten approximations. Notes Chemical kinetics Phy Document 3::: PottersWheel is a MATLAB toolbox for mathematical modeling of time-dependent dynamical systems that can be expressed as chemical reaction networks or ordinary differential equations (ODEs). It allows the automatic calibration of model parameters by fitting the model to experimental measurements. CPU-intensive functions are written or – in case of model dependent functions – dynamically generated in C. Modeling can be done interactively using graphical user interfaces or based on MATLAB scripts using the PottersWheel function library. The software is intended to support the work of a mathematical modeler as a real potter's wheel eases the modeling of pottery. Seven modeling phases The basic use of PottersWheel covers seven phases from model creation to the prediction of new experiments. Model creation The dynamical system is formalized into a set of reactions or differential equations using a visual model designer or a text editor. The model is stored as a MATLAB *.m ASCII file. Modifications can therefore be tracked using a version control system like subversion or git. Model import and export is supported for SBML. Custom import-templates may be used to import custom model structures. 
Rule-based modeling is also supported, where a pattern represents a set of automatically generated reactions. Example for a simple model definition file for a reaction network A → B → C → A with observed species A and C:
function m = getModel()
% Starting with an empty model
m = pwGetEmptyModel();
% Adding reactions
m = pwAddR(m, 'A', 'B');
m = pwAddR(m, 'B', 'C');
m = pwAddR(m, 'C', 'A');
% Adding observables
m = pwAddY(m, 'A');
m = pwAddY(m, 'C');
end
Data import External data saved in *.xls or *.txt files can be added to a model creating a model-data-couple. A mapping dialog allows to connect data column names to observed species names. Meta information in the data files comprise information about the experimental setting. Measurement errors are either stored in the Document 4::: Chemical reaction network theory is an area of applied mathematics that attempts to model the behaviour of real-world chemical systems. Since its foundation in the 1960s, it has attracted a growing research community, mainly due to its applications in biochemistry and theoretical chemistry. It has also attracted interest from pure mathematicians due to the interesting problems that arise from the mathematical structures involved. History Dynamical properties of reaction networks were studied in chemistry and physics after the invention of the law of mass action. The essential steps in this study were introduction of detailed balance for the complex chemical reactions by Rudolf Wegscheider (1901), development of the quantitative theory of chemical chain reactions by Nikolay Semyonov (1934), development of kinetics of catalytic reactions by Cyril Norman Hinshelwood, and many other results. Three eras of chemical dynamics can be revealed in the flux of research and publications. These eras may be associated with leaders: the first is the van 't Hoff era, the second may be called the Semenov–Hinshelwood era and the third is definitely the Aris era. The "eras" may be distinguished based on the main focuses of the scientific leaders: van't Hoff was searching for the general law of chemical reaction related to specific chemical properties. The term "chemical dynamics" belongs to van't Hoff. The Semenov-Hinshelwood focus was an explanation of critical phenomena observed in many chemical systems, in particular in flames. A concept chain reactions elaborated by these researchers influenced many sciences, especially nuclear physics and engineering. Aris' activity was concentrated on the detailed systematization of mathematical ideas and approaches. The mathematical discipline "chemical reaction network theory" was originated by Rutherford Aris, a famous expert in chemical engineering, with the support of Clifford Truesdell, the founder and editor-in-chief of the journ The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are chemical reactions represented by? A. carbon equations B. liquid equations C. chemical equations D. nuclear equations Answer:
sciq-8262
multiple_choice
If a quantity of a reactant remains unconsumed after complete reaction has occurred, it is?
[ "missing", "static", "in excess", "reduced" ]
C
Relavent Documents: Document 0::: The limiting reagent (or limiting reactant or limiting agent) in a chemical reaction is a reactant that is totally consumed when the chemical reaction is completed. The amount of product formed is limited by this reagent, since the reaction cannot continue without it. If one or more other reagents are present in excess of the quantities required to react with the limiting reagent, they are described as excess reagents or excess reactants (sometimes abbreviated as "xs"), or to be in abundance. The limiting reagent must be identified in order to calculate the percentage yield of a reaction since the theoretical yield is defined as the amount of product obtained when the limiting reagent reacts completely. Given the balanced chemical equation, which describes the reaction, there are several equivalent ways to identify the limiting reagent and evaluate the excess quantities of other reagents. Method 1: Comparison of reactant amounts This method is most useful when there are only two reactants. One reactant (A) is chosen, and the balanced chemical equation is used to determine the amount of the other reactant (B) necessary to react with A. If the amount of B actually present exceeds the amount required, then B is in excess and A is the limiting reagent. If the amount of B present is less than required, then B is the limiting reagent. Example for two reactants Consider the combustion of benzene, represented by the following chemical equation: 2 C6H6(l) + 15 O2(g) -> 12 CO2(g) + 6 H2O(l) This means that 15 moles of molecular oxygen (O2) is required to react with 2 moles of benzene (C6H6) The amount of oxygen required for other quantities of benzene can be calculated using cross-multiplication (the rule of three). For example, if 1.5 mol C6H6 is present, 11.25 mol O2 is required: If in fact 18 mol O2 are present, there will be an excess of (18 - 11.25) = 6.75 mol of unreacted oxygen when all the benzene is consumed. Benzene is then the limiting reagent. This concl Document 1::: An elementary reaction is a chemical reaction in which one or more chemical species react directly to form products in a single reaction step and with a single transition state. In practice, a reaction is assumed to be elementary if no reaction intermediates have been detected or need to be postulated to describe the reaction on a molecular scale. An apparently elementary reaction may be in fact a stepwise reaction, i.e. a complicated sequence of chemical reactions, with reaction intermediates of variable lifetimes. In a unimolecular elementary reaction, a molecule dissociates or isomerises to form the products(s) At constant temperature, the rate of such a reaction is proportional to the concentration of the species In a bimolecular elementary reaction, two atoms, molecules, ions or radicals, and , react together to form the product(s) The rate of such a reaction, at constant temperature, is proportional to the product of the concentrations of the species and The rate expression for an elementary bimolecular reaction is sometimes referred to as the Law of Mass Action as it was first proposed by Guldberg and Waage in 1864. An example of this type of reaction is a cycloaddition reaction. This rate expression can be derived from first principles by using collision theory for ideal gases. For the case of dilute fluids equivalent results have been obtained from simple probabilistic arguments. 
According to collision theory the probability of three chemical species reacting simultaneously with each other in a termolecular elementary reaction is negligible. Hence such termolecular reactions are commonly referred as non-elementary reactions and can be broken down into a more fundamental set of bimolecular reactions, in agreement with the law of mass action. It is not always possible to derive overall reaction schemes, but solutions based on rate equations are often possible in terms of steady-state or Michaelis-Menten approximations. Notes Chemical kinetics Phy Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 3::: Conversion and its related terms yield and selectivity are important terms in chemical reaction engineering. They are described as ratios of how much of a reactant has reacted (X — conversion, normally between zero and one), how much of a desired product was formed (Y — yield, normally also between zero and one) and how much desired product was formed in ratio to the undesired product(s) (S — selectivity). There are conflicting definitions in the literature for selectivity and yield, so each author's intended definition should be verified. Conversion can be defined for (semi-)batch and continuous reactors and as instantaneous and overall conversion. Assumptions The following assumptions are made: The following chemical reaction takes place: , where and are the stoichiometric coefficients. For multiple parallel reactions, the definitions can also be applied, either per reaction or using the limiting reaction. 
Batch reaction assumes all reactants are added at the beginning. Semi-Batch reaction assumes some reactants are added at the beginning and the rest fed during the batch. Continuous reaction assumes reactants are fed and products leave the reactor continuously and in steady state. Conversion Conversion can be separated into instantaneous conversion and overall conversion. For continuous processes the two are the same, for batch and semi-batch there are important differences. Furthermore, for multiple reactants, conversion can be defined overall or per reactant. Instantaneous conversion Semi-batch In this setting there are different definitions. One definition regards the instantaneous conversion as the ratio of the instantaneously converted amount to the amount fed at any point in time: . with as the change of moles with time of species i. This ratio can become larger than 1. It can be used to indicate whether reservoirs are built up and it is ideally close to 1. When the feed stops, its value is not defined. In semi-batch polymerisation, Document 4::: Analysis (: analyses) is the process of breaking a complex topic or substance into smaller parts in order to gain a better understanding of it. The technique has been applied in the study of mathematics and logic since before Aristotle (384–322 B.C.), though analysis as a formal concept is a relatively recent development. The word comes from the Ancient Greek (analysis, "a breaking-up" or "an untying;" from ana- "up, throughout" and lysis "a loosening"). From it also comes the word's plural, analyses. As a formal concept, the method has variously been ascribed to Alhazen, René Descartes (Discourse on the Method), and Galileo Galilei. It has also been ascribed to Isaac Newton, in the form of a practical method of physical discovery (which he did not name). The converse of analysis is synthesis: putting the pieces back together again in a new or different whole. Applications Science The field of chemistry uses analysis in three ways: to identify the components of a particular chemical compound (qualitative analysis), to identify the proportions of components in a mixture (quantitative analysis), and to break down chemical processes and examine chemical reactions between elements of matter. For an example of its use, analysis of the concentration of elements is important in managing a nuclear reactor, so nuclear scientists will analyze neutron activation to develop discrete measurements within vast samples. A matrix can have a considerable effect on the way a chemical analysis is conducted and the quality of its results. Analysis can be done manually or with a device. Types of Analysis: A) Qualitative Analysis: It is concerned with which components are in a given sample or compound. Example: Precipitation reaction B) Quantitative Analysis: It is to determine the quantity of individual component present in a given sample or compound. Example: To find concentration by uv-spectrophotometer. Isotopes Chemists can use isotope analysis to assist analysts with i The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. If a quantity of a reactant remains unconsumed after complete reaction has occurred, it is? A. missing B. static C. in excess D. reduced Answer:
sciq-11339
multiple_choice
Nearly all weather occurs in the lower part of what?
[ "the mesosphere", "the atmosphere", "the ionosphere", "the lithosphere" ]
B
Relavent Documents: Document 0::: This is a list of meteorology topics. The terms relate to meteorology, the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting. (see also: List of meteorological phenomena) A advection aeroacoustics aerobiology aerography (meteorology) aerology air parcel (in meteorology) air quality index (AQI) airshed (in meteorology) American Geophysical Union (AGU) American Meteorological Society (AMS) anabatic wind anemometer annular hurricane anticyclone (in meteorology) apparent wind Atlantic Oceanographic and Meteorological Laboratory (AOML) Atlantic hurricane season atmometer atmosphere Atmospheric Model Intercomparison Project (AMIP) Atmospheric Radiation Measurement (ARM) (atmospheric boundary layer [ABL]) planetary boundary layer (PBL) atmospheric chemistry atmospheric circulation atmospheric convection atmospheric dispersion modeling atmospheric electricity atmospheric icing atmospheric physics atmospheric pressure atmospheric sciences atmospheric stratification atmospheric thermodynamics atmospheric window (see under Threats) B ball lightning balloon (aircraft) baroclinity barotropity barometer ("to measure atmospheric pressure") berg wind biometeorology blizzard bomb (meteorology) buoyancy Bureau of Meteorology (in Australia) C Canada Weather Extremes Canadian Hurricane Centre (CHC) Cape Verde-type hurricane capping inversion (in meteorology) (see "severe thunderstorms" in paragraph 5) carbon cycle carbon fixation carbon flux carbon monoxide (see under Atmospheric presence) ceiling balloon ("to determine the height of the base of clouds above ground level") ceilometer ("to determine the height of a cloud base") celestial coordinate system celestial equator celestial horizon (rational horizon) celestial navigation (astronavigation) celestial pole Celsius Center for Analysis and Prediction of Storms (CAPS) (in Oklahoma in the US) Center for the Study o Document 1::: The following outline is provided as an overview of and topical guide to the field of Meteorology. Meteorology The interdisciplinary, scientific study of the Earth's atmosphere with the primary focus being to understand, explain, and forecast weather events. Meteorology, is applied to and employed by a wide variety of diverse fields, including the military, energy production, transport, agriculture, and construction. Essence of meteorology Meteorology Climate – the average and variations of weather in a region over long periods of time. Meteorology – the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting (in contrast with climatology). Weather – the set of all the phenomena in a given atmosphere at a given time. 
Branches of meteorology Microscale meteorology – the study of atmospheric phenomena about 1 km or less, smaller than mesoscale, including small and generally fleeting cloud "puffs" and other small cloud features Mesoscale meteorology – the study of weather systems about 5 kilometers to several hundred kilometers, smaller than synoptic scale systems but larger than microscale and storm-scale cumulus systems, such as sea breezes, squall lines, and mesoscale convective complexes Synoptic scale meteorology – is a horizontal length scale of the order of 1000 kilometres (about 620 miles) or more Methods in meteorology Surface weather analysis – a special type of weather map that provides a view of weather elements over a geographical area at a specified time based on information from ground-based weather stations Weather forecasting Weather forecasting – the application of science and technology to predict the state of the atmosphere for a future time and a given location Data collection Pilot Reports Weather maps Weather map Surface weather analysis Forecasts and reporting of Atmospheric pressure Dew point High-pressure area Ice Black ice Frost Low-pressure area Precipitation Document 2::: Atmospheric temperature is a measure of temperature at different levels of the Earth's atmosphere. It is governed by many factors, including incoming solar radiation, humidity and altitude. When discussing surface air temperature, the annual atmospheric temperature range at any geographical location depends largely upon the type of biome, as measured by the Köppen climate classification Temperature versus altitude Temperature varies greatly at different heights relative to Earth's surface and this variation in temperature characterizes the four layers that exist in the atmosphere. These layers include the troposphere, stratosphere, mesosphere, and thermosphere. The troposphere is the lowest of the four layers, extending from the surface of the Earth to about into the atmosphere where the tropopause (the boundary between the troposphere and stratosphere) is located. The width of the troposphere can vary depending on latitude, for example, the troposphere is thicker in the tropics (about ) because the tropics are generally warmer, and thinner at the poles (about ) because the poles are colder. Temperatures in the atmosphere decrease with height at an average rate of 6.5°C (11.7°F) per kilometer. Because the troposphere experiences its warmest temperatures closer to Earth's surface, there is great vertical movement of heat and water vapour, causing turbulence. This turbulence, in conjunction with the presence of water vapour, is the reason that weather occurs within the troposphere. Following the tropopause is the stratosphere. This layer extends from the tropopause to the stratopause which is located at an altitude of about . Temperatures remain constant with height from the tropopause to an altitude of , after which they start to increase with height. This happening is referred to as an inversion and it is because of this inversion that the stratosphere is not characterised as turbulent. The stratosphere receives its warmth from the sun and the ozone layer which ab Document 3::: In atmospheric science, an atmospheric model is a mathematical model constructed around the full set of primitive, dynamical equations which govern atmospheric motions.
It can supplement these equations with parameterizations for turbulent diffusion, radiation, moist processes (clouds and precipitation), heat exchange, soil, vegetation, surface water, the kinematic effects of terrain, and convection. Most atmospheric models are numerical, i.e. they discretize equations of motion. They can predict microscale phenomena such as tornadoes and boundary layer eddies, sub-microscale turbulent flow over buildings, as well as synoptic and global flows. The horizontal domain of a model is either global, covering the entire Earth, or regional (limited-area), covering only part of the Earth. The different types of models run are thermotropic, barotropic, hydrostatic, and nonhydrostatic. Some of the model types make assumptions about the atmosphere which lengthens the time steps used and increases computational speed. Forecasts are computed using mathematical equations for the physics and dynamics of the atmosphere. These equations are nonlinear and are impossible to solve exactly. Therefore, numerical methods obtain approximate solutions. Different models use different solution methods. Global models often use spectral methods for the horizontal dimensions and finite-difference methods for the vertical dimension, while regional models usually use finite-difference methods in all three dimensions. For specific locations, model output statistics use climate information, output from numerical weather prediction, and current surface weather observations to develop statistical relationships which account for model bias and resolution issues. Types The main assumption made by the thermotropic model is that while the magnitude of the thermal wind may change, its direction does not change with respect to height, and thus the baroclinicity in the atmosphere can be simulated usi Document 4::: Aeronomy is the scientific study of the upper atmosphere of the Earth and corresponding regions of the atmospheres of other planets. It is a branch of both atmospheric chemistry and atmospheric physics. Scientists specializing in aeronomy, known as aeronomers, study the motions and chemical composition and properties of the Earth's upper atmosphere and regions of the atmospheres of other planets that correspond to it, as well as the interaction between upper atmospheres and the space environment. In atmospheric regions aeronomers study, chemical dissociation and ionization are important phenomena. History The mathematician Sydney Chapman introduced the term aeronomy to describe the study of the Earth's upper atmosphere in 1946 in a letter to the editor of Nature entitled "Some Thoughts on Nomenclature." The term became official in 1954 when the International Union of Geodesy and Geophysics adopted it. "Aeronomy" later also began to refer to the study of the corresponding regions of the atmospheres of other planets. Branches Aeronomy can be divided into three main branches: terrestrial aeronomy, planetary aeronomy, and comparative aeronomy. Terrestrial aeronomy Terrestrial aeronomy focuses on the Earth's upper atmosphere, which extends from the stratopause to the atmosphere's boundary with outer space and is defined as consisting of the mesosphere, thermosphere, and exosphere and their ionized component, the ionosphere. Terrestrial aeronomy contrasts with meteorology, which is the scientific study of the Earth's lower atmosphere, defined as the troposphere and stratosphere. 
Although terrestrial aeronomy and meteorology once were completely separate fields of scientific study, cooperation between terrestrial aeronomers and meteorologists has grown as discoveries made since the early 1990s have demonstrated that the upper and lower atmospheres have an impact on one another's physics, chemistry, and biology. Terrestrial aeronomers study atmospheric tides and upper- The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Nearly all weather occurs in the lower part of what? A. the mesosphere B. the atmosphere C. the ionosphere D. the lithosphere Answer:
sciq-9598
multiple_choice
The nitrogen cycle includes air, soil, and what?
[ "water", "decomposers", "living things", "heat" ]
C
Relavent Documents: Document 0::: The nitrogen cycle is the biogeochemical cycle by which nitrogen is converted into multiple chemical forms as it circulates among atmospheric, terrestrial, and marine ecosystems. The conversion of nitrogen can be carried out through both biological and physical processes. Important processes in the nitrogen cycle include fixation, ammonification, nitrification, and denitrification. The majority of Earth's atmosphere (78%) is atmospheric nitrogen, making it the largest source of nitrogen. However, atmospheric nitrogen has limited availability for biological use, leading to a scarcity of usable nitrogen in many types of ecosystems. The nitrogen cycle is of particular interest to ecologists because nitrogen availability can affect the rate of key ecosystem processes, including primary production and decomposition. Human activities such as fossil fuel combustion, use of artificial nitrogen fertilizers, and release of nitrogen in wastewater have dramatically altered the global nitrogen cycle. Human modification of the global nitrogen cycle can negatively affect the natural environment system and also human health. Processes Nitrogen is present in the environment in a wide variety of chemical forms including organic nitrogen, ammonium (), nitrite (), nitrate (), nitrous oxide (), nitric oxide (NO) or inorganic nitrogen gas (). Organic nitrogen may be in the form of a living organism, humus or in the intermediate products of organic matter decomposition. The processes in the nitrogen cycle is to transform nitrogen from one form to another. Many of those processes are carried out by microbes, either in their effort to harvest energy or to accumulate nitrogen in a form needed for their growth. For example, the nitrogenous wastes in animal urine are broken down by nitrifying bacteria in the soil to be used by plants. The diagram alongside shows how these processes fit together to form the nitrogen cycle. Nitrogen fixation The conversion of nitrogen gas () into nitrates Document 1::: Reactive nitrogen ("Nr"), also known as fixed nitrogen, refers to all forms of nitrogen present in the environment except for molecular nitrogen (). While nitrogen is an essential element for life on Earth, molecular nitrogen is comparatively unreactive, and must be converted to other chemical forms via nitrogen fixation before it can be used for growth. Common Nr species include nitrogen oxides (), ammonia (), nitrous oxide (), as well as the anion nitrate (). Biologically, nitrogen is "fixed" mainly by the microbes (eg., Bacteria and Archaea) of the soil that fix into mainly but also other species. Legumes, a type of plant in the Fabacae family, are symbionts to some of these microbes that fix . is a building block to Amino acids and proteins amongst other things essential for life. However, just over half of all reactive nitrogen entering the biosphere is attributable to anthropogenic activity such as industrial fertilizer production. While reactive nitrogen is eventually converted back into molecular nitrogen via denitrification, an excess of reactive nitrogen can lead to problems such as eutrophication in marine ecosystems. Reactive nitrogen compounds In the environmental context, reactive nitrogen compounds include the following classes: oxide gases: nitric oxide, nitrogen dioxide, nitrous oxide. Containing oxidized nitrogen, mainly the result of industrial processes and internal combustion engines. anions: nitrate, nitrite. Nitrate is a common component of fertilizers, e.g. 
ammonium nitrate. amine derivatives: ammonia and ammonium salts, urea. Containing reduced nitrogen, these compounds are components of fertilizers. All of these compounds enter into the nitrogen cycle. As a consequence, an excess of Nr can affect the environment relatively quickly. This also means that nitrogen-related problems need to be looked at in an integrated manner. See also Human impact on the nitrogen cycle Document 2::: Human impact on the nitrogen cycle is diverse. Agricultural and industrial nitrogen (N) inputs to the environment currently exceed inputs from natural N fixation. As a consequence of anthropogenic inputs, the global nitrogen cycle (Fig. 1) has been significantly altered over the past century. Global atmospheric nitrous oxide (N2O) mole fractions have increased from a pre-industrial value of ~270 nmol/mol to ~319 nmol/mol in 2005. Human activities account for over one-third of N2O emissions, most of which are due to the agricultural sector. This article is intended to give a brief review of the history of anthropogenic N inputs, and reported impacts of nitrogen inputs on selected terrestrial and aquatic ecosystems. History of anthropogenic nitrogen inputs Approximately 78% of earth's atmosphere is N gas (N2), which is an inert compound and biologically unavailable to most organisms. In order to be utilized in most biological processes, N2 must be converted to reactive nitrogen (Nr), which includes inorganic reduced forms (NH3 and NH4+), inorganic oxidized forms (NO, NO2, HNO3, N2O, and NO3−), and organic compounds (urea, amines, and proteins). N2 has a strong triple bond, and so a significant amount of energy (226 kcal mol−1) is required to convert N2 to Nr. Prior to industrial processes, the only sources of such energy were solar radiation and electrical discharges. Utilizing a large amount of metabolic energy and the enzyme nitrogenase, some bacteria and cyanobacteria convert atmospheric N2 to NH3, a process known as biological nitrogen fixation (BNF). The anthropogenic analogue to BNF is the Haber-Bosch process, in which H2 is reacted with atmospheric N2 at high temperatures and pressures to produce NH3. Lastly, N2 is converted to NO by energy from lightning, which is negligible in current temperate ecosystems, or by fossil fuel combustion. Until 1850, natural BNF, cultivation-induced BNF (e.g., planting of leguminous crops), and incorporated organic matter wer Document 3::: A biogeochemical cycle, or more generally a cycle of matter, is the movement and transformation of chemical elements and compounds between living organisms, the atmosphere, and the Earth's crust. Major biogeochemical cycles include the carbon cycle, the nitrogen cycle and the water cycle. In each cycle, the chemical element or molecule is transformed and cycled by living organisms and through various geological forms and reservoirs, including the atmosphere, the soil and the oceans. It can be thought of as the pathway by which a chemical substance cycles (is turned over or moves through) the biotic compartment and the abiotic compartments of Earth. The biotic compartment is the biosphere and the abiotic compartments are the atmosphere, lithosphere and hydrosphere. For example, in the carbon cycle, atmospheric carbon dioxide is absorbed by plants through photosynthesis, which converts it into organic compounds that are used by organisms for energy and growth. Carbon is then released back into the atmosphere through respiration and decomposition. 
Additionally, carbon is stored in fossil fuels and is released into the atmosphere through human activities such as burning fossil fuels. In the nitrogen cycle, atmospheric nitrogen gas is converted by plants into usable forms such as ammonia and nitrates through the process of nitrogen fixation. These compounds can be used by other organisms, and nitrogen is returned to the atmosphere through denitrification and other processes. In the water cycle, the universal solvent water evaporates from land and oceans to form clouds in the atmosphere, and then precipitates back to different parts of the planet. Precipitation can seep into the ground and become part of groundwater systems used by plants and other organisms, or can runoff the surface to form lakes and rivers. Subterranean water can then seep into the ocean along with river discharges, rich with dissolved and particulate organic matter and other nutrients. There are bio Document 4::: In agriculture, leaching is the loss of water-soluble plant nutrients from the soil, due to rain and irrigation. Soil structure, crop planting, type and application rates of fertilizers, and other factors are taken into account to avoid excessive nutrient loss. Leaching may also refer to the practice of applying a small amount of excess irrigation where the water has a high salt content to avoid salts from building up in the soil (salinity control). Where this is practiced, drainage must also usually be employed, to carry away the excess water. Leaching is a natural environment concern when it contributes to groundwater contamination. As water from rain, flooding, or other sources seeps into the ground, it can dissolve chemicals and carry them into the underground water supply. Of particular concern are hazardous waste dumps and landfills, and, in agriculture, excess fertilizer, improperly stored animal manure, and biocides (e.g. pesticides, fungicides, insecticides and herbicides). Nitrogen leaching Nitrogen is a common element in nature and an essential plant nutrient. Approximately 78% of Earth's atmosphere is nitrogen (N2). The strong bond between the atoms of N2 makes this gas quite inert and not directly usable by plants and animals. As nitrogen naturally cycles through the air, water and soil it undergoes various chemical and biological transformations. Nitrogen promotes plant growth. Livestock then eat the crops producing manure, which is returned to the soil, adding organic and mineral forms of nitrogen. The cycle is complete when the next crop uses the amended soil. To increase food production, fertilizers, such as nitrate (NO3–) and ammonium (NH4+), which are easily absorbed by plants, are introduced to the plant root zone. However, soils do not absorb the excess NO3– ions, which then move downward freely with drainage water, and are leached into groundwater, streams and oceans. The degree of leaching is affected by: soil type and structure. For exam The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The nitrogen cycle includes air, soil, and what? A. water B. decomposers C. living things D. heat Answer:
sciq-7707
multiple_choice
Where do you find the greatest biodiversity?
[ "in the tropics", "at the poles", "in the tundra", "in shallow lakes" ]
A
Relavent Documents: Document 0::: Biodiversity or biological diversity is the variety and variability of life on Earth. Biodiversity is a measure of variation at the genetic (genetic variability), species (species diversity), and ecosystem (ecosystem diversity) level. Biodiversity is not distributed evenly on Earth; it is usually greater in the tropics as a result of the warm climate and high primary productivity in the region near the equator. Tropical forest ecosystems cover less than 10% of earth's surface and contain about 90% of the world's species. Marine biodiversity is usually higher along coasts in the Western Pacific, where sea surface temperature is highest, and in the mid-latitudinal band in all oceans. There are latitudinal gradients in species diversity. Biodiversity generally tends to cluster in hotspots, and has been increasing through time, but will be likely to slow in the future as a primary result of deforestation. It encompasses the evolutionary, ecological, and cultural processes that sustain life. More than 99.9% of all species that ever lived on Earth, amounting to over five billion species, are estimated to be extinct. Estimates on the number of Earth's current species range from 10 million to 14 million, of which about 1.2 million have been documented and over 86% have not yet been described. The total amount of related DNA base pairs on Earth is estimated at 5.0 x 1037 and weighs 50 billion tonnes. In comparison, the total mass of the biosphere has been estimated to be as much as four trillion tons of carbon. In July 2016, scientists reported identifying a set of 355 genes from the last universal common ancestor (LUCA) of all organisms living on Earth. The age of Earth is about 4.54 billion years. The earliest undisputed evidence of life dates at least from 3.7 billion years ago, during the Eoarchean era after a geological crust started to solidify following the earlier molten Hadean eon. There are microbial mat fossils found in 3.48 billion-year-old sandstone discovered Document 1::: Species richness, or biodiversity, increases from the poles to the tropics for a wide variety of terrestrial and marine organisms, often referred to as the latitudinal diversity gradient. The latitudinal diversity gradient is one of the most widely recognized patterns in ecology. It has been observed to varying degrees in Earth's past. A parallel trend has been found with elevation (elevational diversity gradient), though this is less well-studied. Explaining the latitudinal diversity gradient has been called one of the great contemporary challenges of biogeography and macroecology (Willig et al. 2003, Pimm and Brown 2004, Cardillo et al. 2005). The question "What determines patterns of species diversity?" was among the 25 key research themes for the future identified in 125th Anniversary issue of Science (July 2005). There is a lack of consensus among ecologists about the mechanisms underlying the pattern, and many hypotheses have been proposed and debated. A recent review noted that among the many conundrums associated with the latitudinal diversity gradient (or latitudinal biodiversity gradient) the causal relationship between rates of molecular evolution and speciation has yet to be demonstrated. Understanding the global distribution of biodiversity is one of the most significant objectives for ecologists and biogeographers. 
Beyond purely scientific goals and satisfying curiosity, this understanding is essential for applied issues of major concern to humankind, such as the spread of invasive species, the control of diseases and their vectors, and the likely effects of global climate change on the maintenance of biodiversity (Gaston 2000). Tropical areas play prominent roles in the understanding of the distribution of biodiversity, as their rates of habitat degradation and biodiversity loss are exceptionally high. Patterns in the past The latitudinal diversity gradient is a noticeable pattern among modern organisms that has been described qualitatively and Document 2::: A biodiversity hotspot is a biogeographic region with significant levels of biodiversity that is threatened by human habitation. Norman Myers wrote about the concept in two articles in The Environmentalist in 1988 and 1990, after which the concept was revised following thorough analysis by Myers and others into “Hotspots: Earth’s Biologically Richest and Most Endangered Terrestrial Ecoregions” and a paper published in the journal Nature, both in 2000. To qualify as a biodiversity hotspot on Myers' 2000 edition of the hotspot map, a region must meet two strict criteria: it must contain at least 1,500 species of vascular plants (more than 0.5% of the world's total) as endemics, and it has to have lost at least 70% of its primary vegetation. Globally, 36 zones qualify under this definition. These sites support nearly 60% of the world's plant, bird, mammal, reptile, and amphibian species, with a high share of those species as endemics. Some of these hotspots support up to 15,000 endemic plant species, and some have lost up to 95% of their natural habitat. Biodiversity hotspots host their diverse ecosystems on just 2.4% of the planet's surface. Ten hotspots were originally identified by Myer; the current 36 used to cover more than 15.7% of all the land but have lost around 85% of their area. This loss of habitat is why approximately 60% of the world's terrestrial life lives on only 2.4% of the land surface area. Caribbean Islands like Haiti and Jamaica are facing serious pressures on the populations of endemic plants and vertebrates as a result of rapid deforestation. Other areas include the Tropical Andes, Philippines, Mesoamerica, and Sundaland, which, under the current levels at which deforestation is occurring, will likely lose most of their plant and vertebrate species. Hotspot conservation initiatives Only a small percentage of the total land area within biodiversity hotspots is now protected. Several international organizations are working to conserve biodiver Document 3::: The ecological and biogeographical concept of the species pool describes all species available that could potentially colonize and inhabit a focal habitat area. The concept lays emphasis on the fact that "local communities aren't closed systems, and that the species occupying any local site typically came from somewhere else", however, the species pool concept may suffer from the logical fallacy of composition. Most local communities, however, have just a fraction of its species pool present. It is derived from MacArthur and Wilson's Island Biogeography Theory that examines the factors that affect the species richness of isolated natural communities. It helps to understand the composition and richness of local communities and how they are influenced by biogeographic and evolutionary processes acting at large spatial and temporal scales. 
The absent portion of species pool—dark diversity—has been used to understand processes influencing local communities. Methods to estimate potential but absent species are developing. It has been hypothesized that there might be a direct correlation between species richness and the size of the species pool for plant communities. Elsewhere, it was reported that "trade-offs and species pool structure (size and trait distribution) determines the shape of the plant productivity-diversity relationship. Document 4::: Ecosystem diversity deals with the variations in ecosystems within a geographical location and its overall impact on human existence and the environment. Ecosystem diversity addresses the combined characteristics of biotic properties (biodiversity) and abiotic properties (geodiversity). It is a variation in the ecosystems found in a region or the variation in ecosystems over the whole planet. Ecological diversity includes the variation in both terrestrial and aquatic ecosystems. Ecological diversity can also take into account the variation in the complexity of a biological community, including the number of different niches, the number of and other ecological processes. An example of ecological diversity on a global scale would be the variation in ecosystems, such as deserts, forests, grasslands, wetlands and oceans. Ecological diversity is the largest scale of biodiversity, and within each ecosystem, there is a great deal of both species and genetic diversity. Impact Diversity in the ecosystem is significant to human existence for a variety of reasons. Ecosystem diversity boosts the availability of oxygen via the process of photosynthesis amongst plant organisms domiciled in the habitat. Diversity in an aquatic environment helps in the purification of water by plant varieties for use by humans. Diversity increases plant varieties which serves as a good source for medicines and herbs for human use. A lack of diversity in the ecosystem produces an opposite result. Examples Some examples of ecosystems that are rich in diversity are: Deserts Forests Large marine ecosystems Marine ecosystems Old-growth forests Rainforests Tundra Coral reefs Marine Ecosystem diversity as a result of evolutionary pressure Ecological diversity around the world can be directly linked to the evolutionary and selective pressures that constrain the diversity outcome of the ecosystems within different niches. Tundras, Rainforests, coral reefs and deciduous forests all are form The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Where do you find the greatest biodiversity? A. in the tropics B. at the poles C. in the tundra D. in shallow lakes Answer:
sciq-970
multiple_choice
The symbol for each what is usually the first letter or two of its name?
[ "state", "property", "material", "element" ]
D
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. 
Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 2::: The Mathematics Subject Classification (MSC) is an alphanumerical classification scheme that has collaboratively been produced by staff of, and based on the coverage of, the two major mathematical reviewing databases, Mathematical Reviews and Zentralblatt MATH. The MSC is used by many mathematics journals, which ask authors of research papers and expository articles to list subject codes from the Mathematics Subject Classification in their papers. The current version is MSC2020. Structure The MSC is a hierarchical scheme, with three levels of structure. A classification can be two, three or five digits long, depending on how many levels of the classification scheme are used. The first level is represented by a two-digit number, the second by a letter, and the third by another two-digit number. For example: 53 is the classification for differential geometry 53A is the classification for classical differential geometry 53A45 is the classification for vector and tensor analysis First level At the top level, 64 mathematical disciplines are labeled with a unique two-digit number. In addition to the typical areas of mathematical research, there are top-level categories for "History and Biography", "Mathematics Education", and for the overlap with different sciences. Physics (i.e. mathematical physics) is particularly well represented in the classification scheme with a number of different categories including: Fluid mechanics Quantum mechanics Geophysics Optics and electromagnetic theory All valid MSC classification codes must have at least the first-level identifier. Second level The second-level codes are a single letter from the Latin alphabet. These represent specific areas covered by the first-level discipline. The second-level codes vary from discipline to discipline. For example, for differential geometry, the top-level code is 53, and the second-level codes are: A for classical differential geometry B for local differential geometry C for glo Document 3::: Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers. There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by organizations such as the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. 
In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (Humanities, Arts, and Social Sciences), rebranded in 2020 as SHAPE (Social Sciences, Humanities and the Arts for People and the Economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM. Terminology History Previously referred to as SMET by the NSF, in the early 1990s the acronym STEM was used by a variety of educators, including Charles E. Vela, the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE). Moreover, the CAHSEE started a summer program for talented under-represented students in the Washington, D.C., area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering edu Document 4::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The symbol for each what is usually the first letter or two of its name? A. state B. property C. material D. element Answer:
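The knowledge-space passage retrieved above describes feasible knowledge states as subsets of a skill domain; the standard structural requirement is that the family of states contains the empty set and the full domain and is closed under union. A minimal Python sketch of that check follows (the function name and the toy three-skill domain are illustrative assumptions, not taken from the source):

```python
from itertools import combinations

def is_knowledge_space(domain, states):
    """Check the defining closure properties of a knowledge space:
    the empty state and the full domain are feasible, and the union
    of any two feasible states is again feasible."""
    states = {frozenset(s) for s in states}
    if frozenset() not in states or frozenset(domain) not in states:
        return False
    return all(a | b in states for a, b in combinations(states, 2))

# Toy domain of three skills, where skill "c" requires "a" and "b" first.
domain = {"a", "b", "c"}
feasible = [set(), {"a"}, {"b"}, {"a", "b"}, {"a", "b", "c"}]
print(is_knowledge_space(domain, feasible))                        # True
print(is_knowledge_space(domain, [set(), {"a"}, {"c"}, domain]))   # False
```

In the second call the union of {"a"} and {"c"} is missing from the family, so it fails the closure test.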
sciq-10567
multiple_choice
What do convergent plate boundaries with trenches have?
[ "geysers", "volcanoes", "earthquakes", "caves" ]
B
Relavent Documents: Document 0::: In structural geology, a suture is a joining together along a major fault zone, of separate terranes, tectonic units that have different plate tectonic, metamorphic and paleogeographic histories. The suture is often represented on the surface by an orogen or mountain range. Overview In plate tectonics, sutures are the remains of subduction zones, and the terranes that are joined together are interpreted as fragments of different palaeocontinents or tectonic plates. Outcrops of sutures can vary in width from a few hundred meters to a couple of kilometers. They can be networks of mylonitic shear zones or brittle fault zones, but are usually both. Sutures are usually associated with igneous intrusions and tectonic lenses with varying kinds of lithologies from plutonic rocks to ophiolitic fragments. An example from Great Britain is the Iapetus Suture which, though now concealed beneath younger rocks, has been determined by geophysical means to run along a line roughly parallel with the Anglo-Scottish border and represents the joint between the former continent of Laurentia to the north and the former micro-continent of Avalonia to the south. Avalonia is in fact a plain which dips steeply northwestwards through the crust, underthrusting Laurentia. Paleontological use When used in paleontology, suture can also refer to fossil exoskeletons, as in the suture line, a division on a trilobite between the free cheek and the fixed cheek; this suture line allowed the trilobite to perform ecdysis (the shedding of its skin). Document 1::: The core–mantle boundary (CMB) of Earth lies between the planet's silicate mantle and its liquid iron–nickel outer core, at a depth of below Earth's surface. The boundary is observed via the discontinuity in seismic wave velocities at that depth due to the differences between the acoustic impedances of the solid mantle and the molten outer core. P-wave velocities are much slower in the outer core than in the deep mantle while S-waves do not exist at all in the liquid portion of the core. Recent evidence suggests a distinct boundary layer directly above the CMB possibly made of a novel phase of the basic perovskite mineralogy of the deep mantle named post-perovskite. Seismic tomography studies have shown significant irregularities within the boundary zone and appear to be dominated by the African and Pacific Large Low-Shear-Velocity Provinces (LLSVP). The uppermost section of the outer core is thought to be about 500–1,800 K hotter than the overlying mantle, creating a thermal boundary layer. The boundary is thought to harbor topography, much like Earth's surface, that is supported by solid-state convection within the overlying mantle. Variations in the thermal properties of the core-mantle boundary may affect how the outer core's iron-rich fluids flow, which are ultimately responsible for Earth's magnetic field. The D″ region The approx. 200 km thick layer of the lower mantle directly above the boundary is referred to as the D″ region ("D double-prime" or "D prime prime") and is sometimes included in discussions regarding the core–mantle boundary zone. The D″ name originates from geophysicist Keith Bullen's designations for the Earth's layers. His system was to label each layer alphabetically, A through G, with the crust as 'A' and the inner core as 'G'. In his 1942 publication of his model, the entire lower mantle was the D layer. In 1949, Bullen found his 'D' layer to actually be two different layers. 
The upper part of the D layer, about 1800 km thick, was r Document 2::: Plume tectonics is a geoscientific theory that finds its roots in the mantle doming concept which was especially popular during the 1930s and initially did not accept major plate movements and continental drifting. It has survived from the 1970s until today in various forms and presentations. It has slowly evolved into a concept that recognises and accepts large-scale plate motions such as envisaged by plate tectonics, but placing them in a framework where large mantle plumes are the major driving force of the system. The initial followers of the concept during the first half of the 20th century are scientists like Beloussov and van Bemmelen, and recently the concept has gained interest especially in Japan, through new compiled work on palaeomagnetism, and is still advocated by the group of scientists elaboration upon Earth expansion. It is nowadays generally not accepted as the main theory to explain the driving forces of tectonic plate movements, although numerous modulations on the concept have been proposed. The theory focuses on the movements of mantle plumes under tectonic plates, viewing them as the major driving force of movements of (parts of) the Earth's crust. In its more modern form, conceived in the 1970s, it tries to reconcile in one single geodynamic model the horizontalistic concept of plate tectonics, and the verticalistic concepts of mantle plumes, by the gravitational movement of plates away from major domes of the Earth's crust. The existence of various supercontinents in Earth history and their break-up has been associated recently with major upwellings of the mantle. It is classified together with mantle convection as one of the mechanism that are used to explain the movements of tectonic plates. It also shows affinity with the concept of hot spots which is used in modern-day plate tectonics to generate a framework of specific mantle upwelling points that are relatively stable throughout time and are used to calibrate the plate movements usin Document 3::: Slab pull is a geophysical mechanism whereby the cooling and subsequent densifying of a subducting tectonic plate produces a downward force along the rest of the plate. In 1975 Forsyth and Uyeda used the inverse theory method to show that, of the many forces likely to be driving plate motion, slab pull was the strongest. Plate motion is partly driven by the weight of cold, dense plates sinking into the mantle at oceanic trenches. This force and slab suction account for almost all of the force driving plate tectonics. The ridge push at rifts contributes only 5 to 10%. Carlson et al. (1983) in Lallemandet al. (2005) defined the slab pull force as: Where: K is (gravitational acceleration = 9.81 m/s2) according to McNutt (1984); Δρ = 80 kg/m3 is the mean density difference between the slab and the surrounding asthenosphere; L is the slab length calculated only for the part above 670 km (the upper/lower mantle boundary); A is the slab age in Ma at the trench. The slab pull force manifests itself between two extreme forms: The aseismic back-arc extension as in the Izu–Bonin–Mariana Arc. And as the Aleutian and Chile tectonics with strong earthquakes and back-arc thrusting. 
Between these two examples there is the evolution of the Farallon Plate: from the huge slab width with the Nevada, the Sevier and Laramide orogenies; the Mid-Tertiary ignimbrite flare-up and later left as Juan de Fuca and Cocos plates, the Basin and Range Province under extension, with slab break off, smaller slab width, more edges and mantle return flow. Some early models of plate tectonics envisioned the plates riding on top of convection cells like conveyor belts. However, most scientists working today believe that the asthenosphere does not directly cause motion by the friction of such basal forces. The North American Plate is nowhere being subducted, yet it is in motion. Likewise the African, Eurasian and Antarctic Plates. Ridge push is thought responsible for the motion of these plates Document 4::: The geologic record in stratigraphy, paleontology and other natural sciences refers to the entirety of the layers of rock strata. That is, deposits laid down by volcanism or by deposition of sediment derived from weathering detritus (clays, sands etc.). This includes all its fossil content and the information it yields about the history of the Earth: its past climate, geography, geology and the evolution of life on its surface. According to the law of superposition, sedimentary and volcanic rock layers are deposited on top of each other. They harden over time to become a solidified (competent) rock column, that may be intruded by igneous rocks and disrupted by tectonic events. Correlating the rock record At a certain locality on the Earth's surface, the rock column provides a cross section of the natural history in the area during the time covered by the age of the rocks. This is sometimes called the rock history and gives a window into the natural history of the location that spans many geological time units such as ages, epochs, or in some cases even multiple major geologic periods—for the particular geographic region or regions. The geologic record is in no one place entirely complete for where geologic forces one age provide a low-lying region accumulating deposits much like a layer cake, in the next may have uplifted the region, and the same area is instead one that is weathering and being torn down by chemistry, wind, temperature, and water. This is to say that in a given location, the geologic record can be and is quite often interrupted as the ancient local environment was converted by geological forces into new landforms and features. Sediment core data at the mouths of large riverine drainage basins, some of which go deep thoroughly support the law of superposition. However using broadly occurring deposited layers trapped within differently located rock columns, geologists have pieced together a system of units covering most of the geologic time scale The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do convergent plate boundaries with trenches have? A. geysers B. volcanoes C. earthquakes D. caves Answer:
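The slab-pull passage above lists the quantities entering the Carlson et al. (1983) force estimate (K built on the gravitational acceleration, Δρ ≈ 80 kg/m³, slab length L above 670 km, slab age A in Ma), but the equation itself did not survive extraction. A commonly used form of that scaling is F_sp ≈ K·Δρ·L·√A, since plate thickness grows roughly with the square root of age; the sketch below assumes that form purely for illustration, and the prefactor, units, and input numbers are assumptions rather than values from the source:

```python
import math

def slab_pull_force(delta_rho, slab_length_m, age_ma, k=9.81):
    """Illustrative slab-pull scaling F ~ K * delta_rho * L * sqrt(A).
    The functional form and the prefactor k are assumptions made here
    because the original equation was lost; treat the result as a
    relative number, not a calibrated force per unit trench length."""
    return k * delta_rho * slab_length_m * math.sqrt(age_ma)

# Older, longer slabs pull harder than young, short ones (relative comparison only).
young = slab_pull_force(delta_rho=80, slab_length_m=200e3, age_ma=20)
old = slab_pull_force(delta_rho=80, slab_length_m=600e3, age_ma=120)
print(old / young)   # ~7.3x stronger pull for the older, longer slab
```

Only the relative comparison is meaningful here; an absolute force per unit trench length would require the original constant from the cited paper.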
sciq-6627
multiple_choice
What class of animals includes the subgroups rodents, carnivores, insectivores, bats, and primates?
[ "amphibians", "mammals", "insects", "reptiles" ]
B
Relavent Documents: Document 0::: Animals are multicellular eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, are able to move, reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million in total. Animals range in size from 8.5 millionths of a metre to long and have complex interactions with each other and their environments, forming intricate food webs. The study of animals is called zoology. Animals may be listed or indexed by many criteria, including taxonomy, status as endangered species, their geographical location, and their portrayal and/or naming in human culture. By common name List of animal names (male, female, young, and group) By aspect List of common household pests List of animal sounds List of animals by number of neurons By domestication List of domesticated animals By eating behaviour List of herbivorous animals List of omnivores List of carnivores By endangered status IUCN Red List endangered species (Animalia) United States Fish and Wildlife Service list of endangered species By extinction List of extinct animals List of extinct birds List of extinct mammals List of extinct cetaceans List of extinct butterflies By region Lists of amphibians by region Lists of birds by region Lists of mammals by region Lists of reptiles by region By individual (real or fictional) Real Lists of snakes List of individual cats List of oldest cats List of giant squids List of individual elephants List of historical horses List of leading Thoroughbred racehorses List of individual apes List of individual bears List of giant pandas List of individual birds List of individual bovines List of individual cetaceans List of individual dogs List of oldest dogs List of individual monkeys List of individual pigs List of w Document 1::: Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology. Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago. 
Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad Document 2::: Vertebrate zoology is the biological discipline that consists of the study of Vertebrate animals, i.e., animals with a backbone, such as fish, amphibians, reptiles, birds and mammals. Many natural history museums have departments named Vertebrate Zoology. In some cases whole museums bear this name, e.g. the Museum of Vertebrate Zoology at the University of California, Berkeley. Subdivisions This subdivision of zoology has many further subdivisions, including: Ichthyology - the study of fishes. Mammalogy - the study of mammals. Chiropterology - the study of bats. Primatology - the study of primates. Ornithology - the study of birds. Herpetology - the study of reptiles. Batrachology - the study of amphibians. These divisions are sometimes further divided into more specific specialties. Document 3::: In zoology, mammalogy is the study of mammals – a class of vertebrates with characteristics such as homeothermic metabolism, fur, four-chambered hearts, and complex nervous systems. Mammalogy has also been known as "mastology," "theriology," and "therology." The archive of number of mammals on earth is constantly growing, but is currently set at 6,495 different mammal species including recently extinct. There are 5,416 living mammals identified on earth and roughly 1,251 have been newly discovered since 2006. The major branches of mammalogy include natural history, taxonomy and systematics, anatomy and physiology, ethology, ecology, and management and control. The approximate salary of a mammalogist varies from $20,000 to $60,000 a year, depending on their experience. Mammalogists are typically involved in activities such as conducting research, managing personnel, and writing proposals. Mammalogy branches off into other taxonomically-oriented disciplines such as primatology (study of primates), and cetology (study of cetaceans). Like other studies, mammalogy is also a part of zoology which is also a part of biology, the study of all living things. Research purposes Mammalogists have stated that there are multiple reasons for the study and observation of mammals. Knowing how mammals contribute or thrive in their ecosystems gives knowledge on the ecology behind it. Mammals are often used in business industries, agriculture, and kept for pets. Studying mammals habitats and source of energy has led to aiding in survival. The domestication of some small mammals has also helped discover several different diseases, viruses, and cures. Mammalogist A mammalogist studies and observes mammals. In studying mammals, they can observe their habitats, contributions to the ecosystem, their interactions, and the anatomy and physiology. A mammalogist can do a broad variety of things within the realm of mammals. A mammalogist on average can make roughly $58,000 a year. 
This dep Document 4::: Mammals Alces alces (Linnaeus, 1758) — Eurasian elk, moose Axis axis (Erxleben, 1777) — chital, axis deer Bison bison (Linnaeus, 1758) — American bison, buffalo Capreolus capreolus (Linnaeus, 1758) — European roe deer, roe deer Caracal caracal (Schreber, 1776) — caracal Chinchilla chinchilla (Lichtenstein, 1829) — short-tailed chinchilla Chiropotes chiropotes (Humboldt, 1811) — red-backed bearded saki Cricetus cricetus (Linnaeus, 1758) — common hamster, European hamster Crocuta crocuta (Erxleben, 1777) — spotted hyena Dama dama (Linnaeus, 1758) — European fallow deer Feroculus feroculus (Kelaart, 1850) — Kelaart's long-clawed shrew Gazella gazella (Pallas, 1766) — mountain gazelle Genetta genetta (Linnaeus, 1758) — common genet Gerbillus gerbillus (Olivier, 1801) — lesser Egyptian gerbil Giraffa giraffa (von Schreber, 1784) — southern giraffe Glis glis (Linnaeus, 1766) — European edible dormouse, European fat dormouse Gorilla gorilla (Savage, 1847) — western gorilla Gulo gulo (Linnaeus, 1758) — wolverine Hoolock hoolock (Harlan, 1834) — western hoolock gibbon Hyaena hyaena (Linnaeus, 1758) — striped hyena Indri indri (Gmelin, 1788) — indri Jaculus jaculus (Linnaeus, 1758) — lesser Egyptian jerboa Lagurus lagurus (Pallas, 1773) — steppe vole, steppe lemming Lemmus lemmus (Linnaeus, 1758) — Norway lemming Lutra lutra (Linnaeus, 1758) — European otter Lynx lynx (Linnaeus, 1758) — Eurasian lynx Macrophyllum macrophyllum (Schinz, 1821) — long-legged bat Marmota marmota (Linnaeus, 1758) — Alpine marmot Martes martes (Linnaeus, 1758) — European pine marten, pine marten Meles meles (Linnaeus, 1758) — European badg The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What class of animals includes the subgroups rodents, carnivores, insectivores, bats, and primates? A. amphibians B. mammals C. insects D. reptiles Answer:
sciq-5785
multiple_choice
What kind of radiation is observed in honeycreeper birds?
[ "spontaneous", "destructive", "adaptive", "symbiotic" ]
C
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. 
Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 2::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New Document 3::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. 
This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a Document 4::: Computerized adaptive testing (CAT) is a form of computer-based test that adapts to the examinee's ability level. For this reason, it has also been called tailored testing. In other words, it is a form of computer-administered test in which the next item or set of items selected to be administered depends on the correctness of the test taker's responses to the most recent items administered. How it works CAT successively selects questions for the purpose of maximizing the precision of the exam based on what is known about the examinee from previous questions. From the examinee's perspective, the difficulty of the exam seems to tailor itself to their level of ability. For example, if an examinee performs well on an item of intermediate difficulty, they will then be presented with a more difficult question. Or, if they performed poorly, they would be presented with a simpler question. Compared to static tests that nearly everyone has experienced, with a fixed set of items administered to all examinees, computer-adaptive tests require fewer test items to arrive at equally accurate scores. The basic computer-adaptive testing method is an iterative algorithm with the following steps: The pool of available items is searched for the optimal item, based on the current estimate of the examinee's ability The chosen item is presented to the examinee, who then answers it correctly or incorrectly The ability estimate is updated, based on all prior answers Steps 1–3 are repeated until a termination criterion is met Nothing is known about the examinee prior to the administration of the first item, so the algorithm is generally started by selecting an item of medium, or medium-easy, difficulty as the first item. As a result of adaptive administration, different examinees receive quite different tests. Although examinees are typically administered different tests, their ability scores are comparable to one another (i.e., as if they had received the same test, as is common The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What kind of radiation is observed in honeycreeper birds? A. spontaneous B. destructive C. adaptive D. symbiotic Answer:
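The computerized-adaptive-testing passage above spells out the basic loop: pick the most informative remaining item for the current ability estimate, administer it, re-estimate ability, and repeat until a stopping rule is met. The following is a minimal simulation of that loop under a one-parameter (Rasch) response model; the item bank, the simulated examinee, and the coarse grid-search ability update are all illustrative choices, not part of the source:

```python
import math, random

def p_correct(theta, b):
    """Rasch model: probability of a correct response for ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    p = p_correct(theta, b)
    return p * (1.0 - p)

def run_cat(item_bank, true_theta, n_items=10):
    theta, administered, responses = 0.0, [], []
    for _ in range(n_items):
        # Step 1: pick the unused item with maximum Fisher information at the current estimate.
        candidates = [b for b in item_bank if b not in administered]
        b = max(candidates, key=lambda d: item_information(theta, d))
        # Step 2: "administer" the item by simulating the examinee's response.
        correct = random.random() < p_correct(true_theta, b)
        administered.append(b)
        responses.append(correct)
        # Step 3: update the ability estimate by maximizing the likelihood on a coarse grid.
        grid = [g / 10 for g in range(-40, 41)]
        def log_lik(t):
            return sum(math.log(p_correct(t, d)) if r else math.log(1.0 - p_correct(t, d))
                       for d, r in zip(administered, responses))
        theta = max(grid, key=log_lik)
    return theta

bank = [d / 4 for d in range(-12, 13)]   # difficulties from -3 to +3
print(run_cat(bank, true_theta=1.2))     # estimate should land near 1.2
```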
sciq-7105
multiple_choice
Which process acts as a natural complement for cellular respiration?
[ "glycolysis", "atherosclerosis", "photosynthesis", "absorption" ]
C
Relavent Documents: Document 0::: Cellular respiration is the process by which biological fuels are oxidized in the presence of an inorganic electron acceptor, such as oxygen, to drive the bulk production of adenosine triphosphate (ATP), which contains energy. Cellular respiration may be described as a set of metabolic reactions and processes that take place in the cells of organisms to convert chemical energy from nutrients into ATP, and then release waste products. Cellular respiration is a vital process that happens in the cells of living organisms, including humans, plants, and animals. It's how cells produce energy to power all the activities necessary for life. The reactions involved in respiration are catabolic reactions, which break large molecules into smaller ones, producing large amounts of energy (ATP). Respiration is one of the key ways a cell releases chemical energy to fuel cellular activity. The overall reaction occurs in a series of biochemical steps, some of which are redox reactions. Although cellular respiration is technically a combustion reaction, it is an unusual one because of the slow, controlled release of energy from the series of reactions. Nutrients that are commonly used by animal and plant cells in respiration include sugar, amino acids and fatty acids, and the most common oxidizing agent is molecular oxygen (O2). The chemical energy stored in ATP (the bond of its third phosphate group to the rest of the molecule can be broken allowing more stable products to form, thereby releasing energy for use by the cell) can then be used to drive processes requiring energy, including biosynthesis, locomotion or transportation of molecules across cell membranes. Aerobic respiration Aerobic respiration requires oxygen (O2) in order to create ATP. Although carbohydrates, fats and proteins are consumed as reactants, aerobic respiration is the preferred method of pyruvate production in glycolysis, and requires pyruvate to the mitochondria in order to be fully oxidized by the c Document 1::: Cellular waste products are formed as a by-product of cellular respiration, a series of processes and reactions that generate energy for the cell, in the form of ATP. One example of cellular respiration creating cellular waste products are aerobic respiration and anaerobic respiration. Each pathway generates different waste products. Aerobic respiration When in the presence of oxygen, cells use aerobic respiration to obtain energy from glucose molecules. Simplified Theoretical Reaction: C6H12O6 (aq) + 6O2 (g) → 6CO2 (g) + 6H2O (l) + ~ 30ATP Cells undergoing aerobic respiration produce 6 molecules of carbon dioxide, 6 molecules of water, and up to 30 molecules of ATP (adenosine triphosphate), which is directly used to produce energy, from each molecule of glucose in the presence of surplus oxygen. In aerobic respiration, oxygen serves as the recipient of electrons from the electron transport chain. Aerobic respiration is thus very efficient because oxygen is a strong oxidant. Aerobic respiration proceeds in a series of steps, which also increases efficiency - since glucose is broken down gradually and ATP is produced as needed, less energy is wasted as heat. This strategy results in the waste products H2O and CO2 being formed in different amounts at different phases of respiration. CO2 is formed in Pyruvate decarboxylation, H2O is formed in oxidative phosphorylation, and both are formed in the citric acid cycle. 
The simple nature of the final products also indicates the efficiency of this method of respiration. All of the energy stored in the carbon-carbon bonds of glucose is released, leaving CO2 and H2O. Although there is energy stored in the bonds of these molecules, this energy is not easily accessible by the cell. All usable energy is efficiently extracted. Anaerobic respiration Anaerobic respiration is done by aerobic organisms when there is not sufficient oxygen in a cell to undergo aerobic respiration as well as by cells called anaerobes that Document 2::: Bioenergetics is a field in biochemistry and cell biology that concerns energy flow through living systems. This is an active area of biological research that includes the study of the transformation of energy in living organisms and the study of thousands of different cellular processes such as cellular respiration and the many other metabolic and enzymatic processes that lead to production and utilization of energy in forms such as adenosine triphosphate (ATP) molecules. That is, the goal of bioenergetics is to describe how living organisms acquire and transform energy in order to perform biological work. The study of metabolic pathways is thus essential to bioenergetics. Overview Bioenergetics is the part of biochemistry concerned with the energy involved in making and breaking of chemical bonds in the molecules found in biological organisms. It can also be defined as the study of energy relationships and energy transformations and transductions in living organisms. The ability to harness energy from a variety of metabolic pathways is a property of all living organisms. Growth, development, anabolism and catabolism are some of the central processes in the study of biological organisms, because the role of energy is fundamental to such biological processes. Life is dependent on energy transformations; living organisms survive because of exchange of energy between living tissues/ cells and the outside environment. Some organisms, such as autotrophs, can acquire energy from sunlight (through photosynthesis) without needing to consume nutrients and break them down. Other organisms, like heterotrophs, must intake nutrients from food to be able to sustain energy by breaking down chemical bonds in nutrients during metabolic processes such as glycolysis and the citric acid cycle. Importantly, as a direct consequence of the First Law of Thermodynamics, autotrophs and heterotrophs participate in a universal metabolic network—by eating autotrophs (plants), heterotrophs ha Document 3::: Reactions The reactions of the MEP pathway are as follows, taken primarily from Eisenreich and co-workers, excep Document 4::: Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs, as well as organism structure and function. 
Biochemistry is closely related to molecular biology, which is the study of the molecular mechanisms of biological phenomena. Much of biochemistry deals with the structures, bonding, functions, and interactions of biological macromolecules, such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers, with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles a The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which process acts as a natural complement for cellular respiration? A. glycolysis B. atherosclerosis C. photosynthesis D. absorption Answer:
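One of the passages retrieved above gives the simplified aerobic-respiration reaction C6H12O6 + 6 O2 → 6 CO2 + 6 H2O (plus roughly 30 ATP). A quick way to sanity-check a stoichiometric equation like this is to count atoms on each side; the small parser below handles only simple formulas of this kind, and its names are illustrative:

```python
import re
from collections import Counter

def count_atoms(formula, coefficient=1):
    """Count atoms in a simple formula such as 'C6H12O6' (no parentheses or hydrates)."""
    atoms = Counter()
    for element, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        atoms[element] += coefficient * (int(n) if n else 1)
    return atoms

def is_balanced(reactants, products):
    total = lambda side: sum((count_atoms(f, c) for c, f in side), Counter())
    return total(reactants) == total(products)

# C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O
print(is_balanced([(1, "C6H12O6"), (6, "O2")], [(6, "CO2"), (6, "H2O")]))  # True
```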
sciq-3275
multiple_choice
What types of drugs affect the brain and influence how a person can feel, think, or act?
[ "antibiotics", "psychoactive", "prescription", "analgesics" ]
B
Relavent Documents: Document 0::: Research into the mental disorder of schizophrenia, involves multiple animal models as a tool, including in the preclinical stage of drug development. Several models simulate schizophrenia defects. These fit into four basic categories: pharmacological models, developmental models, lesion models, and genetic models. Historically, pharmacological, or drug-induced models were the most widely used. These involve the manipulation of various neurotransmitter systems, including dopamine, glutamate, serotonin, and GABA. Lesion models, in which an area of an animal's brain is damaged, arose from theories that schizophrenia involves neurodegeneration, and that problems during neurodevelopment cause the disease. Traditionally, rodent models of schizophrenia mostly targeted symptoms analogous to the positive symptoms of schizophrenia, with some models also having symptoms similar to the negative symptoms. Recent developments in schizophrenia research, however, have targeted cognitive symptoms as some of the most debilitating and influential in patients' daily lives, and thus have become a larger target in animal models of schizophrenia. Animals used as models for schizophrenia include rats, mice, and primates. Uses and limitations The modelling of schizophrenia in animals can range from attempts to imitate the full extent of symptoms found in schizophrenia, to more specific modelling which investigate the efficacy of antipsychotic drugs. Each extreme has its limitations, with whole-syndrome modeling often failing due to the complexity and heterogeneous nature of schizophrenia, as well as difficulty translating human specific diagnostic criteria such as disorganized speech to animals. Antipsychotic-specific modelling faces similar issues, one of which is that it is not useful for discovering drugs with unique mechanisms of action, while traditional medications for schizophrenia have generalized effects (blocking of dopamine receptors) that make it difficult to attribute outcom Document 1::: The head-twitch response (HTR) is a rapid side-to-side head movement that occurs in mice and rats after the serotonin 5-HT2A receptor is activated. The prefrontal cortex may be the neuroanatomical locus mediating the HTR. Many serotonergic hallucinogens, including lysergic acid diethylamide (LSD), induce the head-twitch response, and so the HTR is used as a behavioral model of hallucinogen effects. However while there is generally a good correlation between compounds that induce head twitch in mice and compounds that are hallucinogenic in humans, it is unclear whether the head twitch response is primarily caused by 5-HT2A receptors, 5-HT2C receptors or both, though recent evidence shows that the HTR is mediated by the 5-HT2A receptor and modulated by the 5-HT2C receptor. Also, the effect can be non-specific, with head twitch responses also produced by some drugs that do not act through 5-HT2 receptors, such as phencyclidine, yohimbine, atropine and cannabinoid receptor antagonists. As well, compounds such as 5-HTP, fenfluramine, 1-Methylpsilocin, Ergometrine, and 3,4-di-methoxyphenethylamine (DMPEA) can also produce head twitch and do stimulate serotonin receptors, but are not hallucinogenic in humans. 
This means that while the head twitch response can be a useful indicator as to whether a compound is likely to display hallucinogenic activity in humans, the induction of a head twitch response does not necessarily mean that a compound will be hallucinogenic, and caution should be exercised when interpreting such results. Document 2::: Stearoylethanolamide (SEA) is an endocannabinoid neurotransmitter. Stearoylethanolamide (C20H41NO2; 18:0), also called N-(octadecanoyl)ethanolamine, is an N-acylethanolamine and the ethanolamide of octadecanoic acid (C18H36O2; 18:0) and ethanolamine (MEA: C2H7NO), and functionally related to an octadecanoic acid. Levels of SEA correlate with changes in pain intensity, indicating this SEA change, reflect the pain reduction effects of IPRP. Document 3::: Opioid rotation or opioid switching is the process of changing one opioid to another to improve pain control or reduce unwanted side effects. This technique was introduced in the 1990s to help manage severe chronic pain and improve the opioid response in cancer patients. In order to obtain adequate levels of pain relief, patients requiring chronic opioid therapy may require an increase in the original prescribed dose for a number of reasons, including increased pain or a worsening disease state. Over the course of long-term treatment, an increase in dosage cannot be continued indefinitely as unwanted side effects of treatment often become intolerable once a certain dose is reached, even though the pain may still not be properly managed. One strategy used to address this is to switch the patient between different opioid drugs over time, usually every few months. Opioid rotation requires strict monitoring in patients with ongoing levels of high opioid doses for extended periods of time, since long term opioid use can lead to a patient developing tolerance to the analgesic effects of the drug. Patients may also not respond to the first opioid prescribed to them at all, therefore needing to try another opioid to help manage their pain. A patient's specific response and sensitivity to opioids include many factors that include physiology, genetics and pharmacodynamic parameters, which together determine the amount of pain control and tolerance of a particular opioid. Mechanism Opioid analgesic drugs tend to exhibit incomplete cross-tolerance, so that even when a patient has developed a high level of tolerance to one drug from this class, they may find that a different opioid drug will still be effective. The reasons for this are still not completely understood, but are thought to result from variations in opioid receptor affinity and occupancy levels at equianalgesic doses, as well as additional mechanisms of action possessed by some drugs such as the NMDA antagonist ac Document 4::: Drug liking is a measure of the pleasurable (hedonic) experience when a person consumes drugs. It is commonly used to study the misuse liability of drugs. Drug liking is often measured using unipolar and bipolar visual analogue scales (VAS), such as the Drug Liking VAS, the High VAS, the Take Drug Again (TDA) VAS, and the Overall Drug Liking (ODL) VAS. There is a dissociation of drug liking from drug wanting (unconscious attribution of incentive salience). Drugs that increase scores on drug-liking measures include amphetamines, cocaine, methylphenidate, MDMA, opioids, benzodiazepines, Z-drugs, barbiturates, alcohol, nicotine, and caffeine (limitedly), among others. 
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What types of drugs affect the brain and influence how a person can feel, think, or act? A. antibiotics B. psychoactive C. prescription D. analgesics Answer:
sciq-10472
multiple_choice
What is the name of the outer part of the adrenal gland located above the kidneys?
[ "spleen", "cortex", "mitochondria", "nucleus" ]
B
Relavent Documents: Document 0::: A central or intermediate group of three or four large glands is imbedded in the adipose tissue near the base of the axilla. Its afferent lymphatic vessels are the efferent vessels of all the preceding groups of axillary glands; its efferents pass to the subclavicular group. Additional images Document 1::: In anatomy and zoology, the cortex (: cortices) is the outermost (or superficial) layer of an organ. Organs with well-defined cortical layers include kidneys, adrenal glands, ovaries, the thymus, and portions of the brain, including the cerebral cortex, the best-known of all cortices. Etymology The word is of Latin origin and means bark, rind, shell or husk. Notable examples The renal cortex, between the renal capsule and the renal medulla; assists in ultrafiltration The adrenal cortex, situated along the perimeter of the adrenal gland; mediates the stress response through the production of various hormones The thymic cortex, mainly composed of lymphocytes; functions as a site for somatic recombination of T cell receptors, and positive selection The cerebral cortex, the outer layer of the cerebrum, plays a key role in memory, attention, perceptual awareness, thought, language, and consciousness. Cortical bone is the hard outer layer of bone; distinct from the spongy, inner cancellous bone tissue Ovarian cortex is the outer layer of the ovary and contains the follicles. The lymph node cortex is the outer layer of the lymph node. Cerebral cortex The cerebral cortex is typically described as comprising three parts: the sensory, motor, and association areas. These sensory areas receive and process information from the senses. The senses of vision, audition, and touch are served by the primary visual cortex, the primary auditory cortex, and primary somatosensory cortex. The cerebellar cortex is the thin gray surface layer of the cerebellum, consisting of an outer molecular layer or stratum moleculare, a single layer of Purkinje cells (the ganglionic layer), and an inner granular layer or stratum granulosum. The cortex is the outer surface of the cerebrum and is composed of gray matter. The motor areas are located in both hemispheres of the cerebral cortex. Two areas of the cortex are commonly referred to as motor: the primary motor cortex, which executes v Document 2::: The ovarian cortex is the outer portion of the ovary. The ovarian follicles are located within the ovarian cortex. The ovarian cortex is made up of connective tissue. Ovarian cortex tissue transplant has been performed to treat infertility. Document 3::: H2.00.04.4.01001: Lymphoid tissue H2.00.05.0.00001: Muscle tissue H2.00.05.1.00001: Smooth muscle tissue H2.00.05.2.00001: Striated muscle tissue H2.00.06.0.00001: Nerve tissue H2.00.06.1.00001: Neuron H2.00.06.2.00001: Synapse H2.00.06.2.00001: Neuroglia h3.01: Bones h3.02: Joints h3.03: Muscles h3.04: Alimentary system h3.05: Respiratory system h3.06: Urinary system h3.07: Genital system h3.08: Document 4::: This table lists the epithelia of different organs of the human body Human anatomy The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the name of the outer part of the adrenal gland located above the kidneys? A. spleen B. cortex C. mitochondria D. nucleus Answer:
sciq-8047
multiple_choice
What forms as a result of decomposition when N2O is heated?
[ "ammonia and nitrogen", "calcium and oxygen", "nitrogen and hydrogen", "nitrogen and oxygen" ]
D
Relavent Documents: Document 0::: Nitrogen dioxide is a chemical compound with the formula and is one of several nitrogen oxides. is an intermediate in the industrial synthesis of nitric acid, millions of tons of which are produced each year for use (primarily in the production of fertilizers). At higher temperatures, nitrogen dioxide is a reddish-brown gas. It can be fatal if inhaled in large quantities. The LC50 (median lethal dose) for humans has been estimated to be 174 ppm for a 1-hour exposure. Nitrogen dioxide is a paramagnetic, bent molecule with C2v point group symmetry. It is included in the NOx family of atmospheric pollutants. Properties Nitrogen dioxide is a reddish-brown gas with a pungent, acrid odor above and becomes a yellowish-brown liquid below . It forms an equilibrium with its dimer, dinitrogen tetroxide (), and converts almost entirely to below . The bond length between the nitrogen atom and the oxygen atom is 119.7 pm. This bond length is consistent with a bond order between one and two. Unlike ozone () the ground electronic state of nitrogen dioxide is a doublet state, since nitrogen has one unpaired electron, which decreases the alpha effect compared with nitrite and creates a weak bonding interaction with the oxygen lone pairs. The lone electron in also means that this compound is a free radical, so the formula for nitrogen dioxide is often written as . The reddish-brown color is a consequence of preferential absorption of light in the blue region of the spectrum (400–500 nm), although the absorption extends throughout the visible (at shorter wavelengths) and into the infrared (at longer wavelengths). Absorption of light at wavelengths shorter than about 400 nm results in photolysis (to form , atomic oxygen); in the atmosphere the addition of the oxygen atom so formed to results in ozone. Preparation Nitrogen dioxide typically arises via the oxidation of nitric oxide by oxygen in air (e.g. as result of corona discharge): +   Nitrogen dioxide is formed in m Document 1::: Nitrous acid (molecular formula ) is a weak and monoprotic acid known only in solution, in the gas phase and in the form of nitrite () salts. Nitrous acid is used to make diazonium salts from amines. The resulting diazonium salts are reagents in azo coupling reactions to give azo dyes. Structure In the gas phase, the planar nitrous acid molecule can adopt both a syn and an anti form. The anti form predominates at room temperature, and IR measurements indicate it is more stable by around 2.3 kJ/mol. Preparation Nitrous acid is usually generated by acidification of aqueous solutions of sodium nitrite with a mineral acid. The acidification is usually conducted at ice temperatures, and the HNO2 is consumed in situ. Free nitrous acid is unstable and decomposes rapidly. Nitrous acid can also be produced by dissolving dinitrogen trioxide in water according to the equation N2O3 + H2O → 2 HNO2 Reactions Nitrous acid is the main chemphore in the Liebermann reagent, used to spot-test for alkaloids. 
Decomposition Gaseous nitrous acid, which is rarely encountered, decomposes into nitrogen dioxide, nitric oxide, and water: 2 HNO2 → NO2 + NO + H2O Nitrogen dioxide disproportionates into nitric acid and nitrous acid in aqueous solution: 2 NO2 + H2O → HNO3 + HNO2 In warm or concentrated solutions, the overall reaction amounts to production of nitric acid, water, and nitric oxide: 3 HNO2 → HNO3 + 2 NO + H2O The nitric oxide can subsequently be re-oxidized by air to nitric acid, making the overall reaction: 2 HNO2 + O2 → 2 HNO3 Reduction With I− and Fe2+ ions, NO is formed: 2 HNO2 + 2 KI + 2 H2SO4 → I2 + 2 NO + 2 H2O + 2 K2SO4 2 HNO2 + 2 FeSO4 + 2 H2SO4 → Fe2(SO4)3 + 2 NO + 2 H2O + K2SO4 With Sn2+ ions, N2O is formed: Document 2::: The nitrogen cycle is the biogeochemical cycle by which nitrogen is converted into multiple chemical forms as it circulates among atmospheric, terrestrial, and marine ecosystems. The conversion of nitrogen can be carried out through both biological and physical processes. Important processes in the nitrogen cycle include fixation, ammonification, nitrification, and denitrification. The majority of Earth's atmosphere (78%) is atmospheric nitrogen, making it the largest source of nitrogen. However, atmospheric nitrogen has limited availability for biological use, leading to a scarcity of usable nitrogen in many types of ecosystems. The nitrogen cycle is of particular interest to ecologists because nitrogen availability can affect the rate of key ecosystem processes, including primary production and decomposition. Human activities such as fossil fuel combustion, use of artificial nitrogen fertilizers, and release of nitrogen in wastewater have dramatically altered the global nitrogen cycle. Human modification of the global nitrogen cycle can negatively affect the natural environment system and also human health. Processes Nitrogen is present in the environment in a wide variety of chemical forms including organic nitrogen, ammonium (), nitrite (), nitrate (), nitrous oxide (), nitric oxide (NO) or inorganic nitrogen gas (). Organic nitrogen may be in the form of a living organism, humus or in the intermediate products of organic matter decomposition. The processes in the nitrogen cycle is to transform nitrogen from one form to another. Many of those processes are carried out by microbes, either in their effort to harvest energy or to accumulate nitrogen in a form needed for their growth. For example, the nitrogenous wastes in animal urine are broken down by nitrifying bacteria in the soil to be used by plants. The diagram alongside shows how these processes fit together to form the nitrogen cycle. Nitrogen fixation The conversion of nitrogen gas () into nitrates Document 3::: In chemistry, ammonolysis (/am·mo·nol·y·sis/) is the process of splitting ammonia into NH2- + H+. Ammonolysis reactions can be conducted with organic compounds to produce amines (molecules containing a nitrogen atom with a lone pair, :N), or with inorganic compounds to produce nitrides. This reaction is analogous to hydrolysis in which water molecules are split. Similar to water, liquid ammonia also undergoes auto-ionization, {2 NH3 ⇌ NH4+ + NH2- }, where the rate constant is k = 1.9 × 10-38. Organic compounds such as alkyl halides, hydroxyls (hydroxyl nitriles and carbohydrates), carbonyl (aldehydes/ketones/esters/alcohols), and sulfur (sulfonyl derivatives) can all undergo ammonolysis in liquid ammonia. 
Organic synthesis Mechanism: ammonolysis of esters This mechanism is similar to the hydrolysis of esters: the ammonia attacks the electrophilic carbonyl carbon, forming a tetrahedral intermediate. The reformation of the C-O double bond ejects the alkoxide. The alkoxide deprotonates the ammonia, forming an alcohol and an amide as products. Of haloalkanes On heating a haloalkane and concentrated ammonia in a sealed tube with ethanol, a series of amines are formed along with their salts. The tertiary amine is usually the major product. {NH3 ->[\ce{RX}] RNH2 ->[\ce{RX}] R2NH ->[\ce{RX}] R3N ->[\ce{RX}] R4N+} This is known as Hoffmann's ammonolysis. Of alcohols Alcohols can also undergo ammonolysis in the presence of ammonia. An example is the conversion of phenol to aniline, catalyzed by stannic chloride. ROH + NH3 ->[\ce{SnCl4}] RNH2 + H2O Document 4::: Microbiology of decomposition is the study of all microorganisms involved in decomposition, the chemical and physical processes during which organic matter is broken down and reduced to its original elements. Decomposition microbiology can be divided into two fields of interest, namely the decomposition of plant materials and the decomposition of cadavers and carcasses. The decomposition of plant materials is commonly studied in order to understand the cycling of carbon within a given environment and to understand the subsequent impacts on soil quality. Plant material decomposition is also often referred to as composting. The decomposition of cadavers and carcasses has become an important field of study within forensic taphonomy. Decomposition microbiology of plant materials The breakdown of vegetation is highly dependent on oxygen and moisture levels. During decomposition, microorganisms require oxygen for their respiration. If anaerobic conditions dominate the decomposition environment, microbial activity will be slow and thus decomposition will be slow. Appropriate moisture levels are required for microorganisms to proliferate and to actively decompose organic matter. In arid environments, bacteria and fungi dry out and are unable to take part in decomposition. In wet environments, anaerobic conditions will develop and decomposition can also be considerably slowed down. Decomposing microorganisms also require the appropriate plant substrates in order to achieve good levels of decomposition. This usually translates to having appropriate carbon to nitrogen ratios (C:N). The ideal composting carbon-to-nitrogen ratio is thought to be approximately 30:1. As in any microbial process, the decomposition of plant litter by microorganisms will also be dependent on temperature. For example, leaves on the ground will not undergo decomposition during the winter months when snow cover occurs, as temperatures are too low to sustain microbial activities. Decomposition mi The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What forms as a result of decomposition when N2O is heated? A. ammonia and nitrogen B. calcium and oxygen C. nitrogen and hydrogen D. nitrogen and oxygen Answer:
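For reference, the balanced equation behind the answer above (it does not appear in the retrieved documents): heating nitrous oxide decomposes it into its elements, 2 N2O → 2 N2 + O2, i.e. nitrogen and oxygen.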
ai2_arc-1095
multiple_choice
In a tropical rainforest, the difference between daytime and nighttime temperatures is relatively small. In the desert, the difference between daytime and nighttime temperatures is very large. Which factor contributes to the lower variation in rainforest temperatures?
[ "Foliage in the rainforest reflects heat.", "Organisms in the rainforest absorb moisture.", "Solar radiation in the rainforest is less intense.", "Cloud cover in the rainforest helps to retain heat." ]
D
Relavent Documents: Document 0::: Biometeorology is the interdisciplinary field of science that studies the interactions between the biosphere and the Earth's atmosphere on time scales of the order of seasons or shorter (in contrast with bioclimatology). Examples of relevant processes Weather events influence biological processes on short time scales. For instance, as the Sun rises above the horizon in the morning, light levels become sufficient for the process of photosynthesis to take place in plant leaves. Later on, during the day, air temperature and humidity may induce the partial or total closure of the stomata, a typical response of many plants to limit the loss of water through transpiration. More generally, the daily evolution of meteorological variables controls the circadian rhythm of plants and animals alike. Living organisms, for their part, can collectively affect weather patterns. The rate of evapotranspiration of forests, or of any large vegetated area for that matter, contributes to the release of water vapor in the atmosphere. This local, relatively fast and continuous process may contribute significantly to the persistence of precipitations in a given area. As another example, the wilting of plants results in definite changes in leaf angle distribution and therefore modifies the rates of reflection, transmission and absorption of solar light in these plants. That, in turn, changes the albedo of the ecosystem as well as the relative importance of the sensible and latent heat fluxes from the surface to the atmosphere. For an example in oceanography, consider the release of dimethyl sulfide by biological activity in sea water and its impact on atmospheric aerosols. Human biometeorology The methods and measurements traditionally used in biometeorology are not different when applied to study the interactions between human bodies and the atmosphere, but some aspects or applications may have been explored more extensively. For instance, wind chill has been investigated to determine th Document 1::: Climatic adaptation refers to adaptations of an organism that are triggered due to the patterns of variation of abiotic factors that determine a specific climate. Annual means, seasonal variation and daily patterns of abiotic factors are properties of a climate where organisms can be adapted to. Changes in behavior, physical structure, internal mechanisms and metabolism are forms of adaptation that is caused by climate properties. Organisms of the same species that occur in different climates can be compared to determine which adaptations are due to climate and which are influenced majorly by other factors. Climatic adaptations limits to adaptations that have been established, characterizing species that live within the specific climate. It is different from climate change adaptations which refers to the ability to adapt to gradual changes of a climate. Once a climate has changed, the climate change adaptation that led to the survival of the specific organisms as a species can be seen as a climatic adaptation. Climatic adaptation is constrained by the genetic variability of the species in question. Climate patterns The patterns of variation of abiotic factors determine a climate and thus climatic adaptation. There are many different climates around the world, each with its unique patterns. Because of this, the manner of climatic adaptation shows large differences between the climates. 
A subarctic climate, for instance, shows daylight time and temperature fluctuations as most important factors, while in rainforest climate, the most important factor is characterized by the stable high precipitation rate and high average temperature that doesn't fluctuate a lot. Humid continental climate is marked by seasonal temperature variances which commonly lead to seasonal climate adaptations. Because the variance of these abiotic factors differ depending on the type of climate, differences in the manner of climatic adaptation are expected. Research Research on climatic adaptat Document 2::: Growing degree days (GDD), also called growing degree units (GDUs), are a heuristic tool in phenology. GDD are a measure of heat accumulation used by horticulturists, gardeners, and farmers to predict plant and animal development rates such as the date that a flower will bloom, an insect will emerge from dormancy, or a crop will reach maturity. GDD is credited to be first defined by Reaumur in 1735. Introduction In the absence of extreme conditions such as unseasonal drought or disease, plants grow in a cumulative stepwise manner which is strongly influenced by the ambient temperature. Growing degree days take aspects of local weather into account and allow gardeners to predict (or, in greenhouses, even to control) the plants' pace toward maturity. Unless stressed by other environmental factors like moisture, the development rate from emergence to maturity for many plants depends upon the daily air temperature. Because many developmental events of plants and insects depend on the accumulation of specific quantities of heat, it is possible to predict when these events should occur during a growing season regardless of differences in temperatures from year to year. Growing degrees (GDs) is defined as the number of temperature degrees above a certain threshold base temperature, which varies among crop species. The base temperature is that temperature below which plant growth is zero. GDs are calculated each day as maximum temperature plus the minimum temperature divided by 2, minus the base temperature. GDUs are accumulated by adding each day's GDs contribution as the season progresses. GDUs can be used to: assess the suitability of a region for production of a particular crop; estimate the growth-stages of crops, weeds or even life stages of insects; predict maturity and cutting dates of forage crops; predict best timing of fertilizer or pesticide application; estimate the heat stress on crops; plan spacing of planting dates to produce separate harvest dates. Crop Document 3::: Thermophysics is the application of thermodynamics to geophysics and to planetary science more broadly. It may also be used to refer to the field of thermodynamic and transport properties. Remote sensing Earth thermophysics is a branch of geophysics that uses the naturally occurring surface temperature as a function of the cyclical variation in solar radiation to characterise planetary material properties. Thermophysical properties are characteristics that control the diurnal, seasonal, or climatic surface and subsurface temperature variations (or thermal curves) of a material. The most important thermophysical property is thermal inertia, which controls the amplitude of the thermal curve and albedo (or reflectivity), which controls the average temperature. 
This field of observations and computer modeling was first applied to Mars due to the ideal atmospheric pressure for characterising granular materials based upon temperature. The Mariner 6, Mariner 7, and Mariner 9 spacecraft carried thermal infrared radiometers, and a global map of thermal inertia was produced from modeled surface temperatures collected by the Infrared Thermal Mapper Instruments (IRTM) on board the Viking 1 and 2 Orbiters. The original thermophysical models were based upon the studies of lunar temperature variations. Further development of the models for Mars included surface-atmosphere energy transfer, atmospheric back-radiation, surface emissivity variations, CO2 frost and blocky surfaces, variability of atmospheric back-radiation, effects of a radiative-convective atmosphere, and single-point temperature observations. Document 4::: Bioclimatology is the interdisciplinary field of science that studies the interactions between the biosphere and the Earth's atmosphere on time scales of the order of seasons or longer (in contrast to biometeorology). Examples of relevant processes Climate processes largely control the distribution, size, shape and properties of living organisms on Earth. For instance, the general circulation of the atmosphere on a planetary scale broadly determines the location of large deserts or the regions subject to frequent precipitation, which, in turn, greatly determine which organisms can naturally survive in these environments. Furthermore, changes in climates, whether due to natural processes or to human interferences, may progressively modify these habitats and cause overpopulation or extinction of indigenous species. The biosphere, for its part, and in particular continental vegetation, which constitutes over 99% of the total biomass, has played a critical role in establishing and maintaining the chemical composition of the Earth's atmosphere, especially during the early evolution of the planet (See History of Earth for more details on this topic). Currently, the terrestrial vegetation exchanges some 60 billion tons of carbon with the atmosphere on an annual basis (through processes of carbon fixation and carbon respiration), thereby playing a critical role in the carbon cycle. On a global and annual basis, small imbalances between these two major fluxes, as do occur through changes in land cover and land use, contribute to the current increase in atmospheric carbon dioxide. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. In a tropical rainforest, the difference between daytime and nighttime temperatures is relatively small. In the desert, the difference between daytime and nighttime temperatures is very large. Which factor contributes to the lower variation in rainforest temperatures? A. Foliage in the rainforest reflects heat. B. Organisms in the rainforest absorb moisture. C. Solar radiation in the rainforest is less intense. D. Cloud cover in the rainforest helps to retain heat. Answer:
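The growing degree day arithmetic described in Document 2 above can be made concrete with a short sketch; this is illustrative only, and the 10 °C base temperature and the sample readings are assumed example values, not taken from the source:

def daily_growing_degrees(t_max_c, t_min_c, t_base_c=10.0):
    # Daily GDs: mean of the day's maximum and minimum temperature, minus the base temperature.
    gd = (t_max_c + t_min_c) / 2.0 - t_base_c
    # Clipping negative days to zero is a common convention (an assumption here; the snippet does not state it).
    return max(gd, 0.0)

# GDUs accumulate by summing the daily values as the season progresses.
season = [(24.0, 12.0), (27.0, 15.0), (18.0, 8.0)]  # hypothetical (max, min) pairs in °C
gdus = sum(daily_growing_degrees(hi, lo) for hi, lo in season)  # 8.0 + 11.0 + 3.0 = 22.0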
sciq-5901
multiple_choice
What is the layer above the troposphere?
[ "mesosphere", "troposphere", "stratosphere", "thermosphere" ]
C
Relavent Documents: Document 0::: Atmospheric temperature is a measure of temperature at different levels of the Earth's atmosphere. It is governed by many factors, including incoming solar radiation, humidity and altitude. When discussing surface air temperature, the annual atmospheric temperature range at any geographical location depends largely upon the type of biome, as measured by the Köppen climate classification Temperature versus altitude Temperature varies greatly at different heights relative to Earth's surface and this variation in temperature characterizes the four layers that exist in the atmosphere. These layers include the troposphere, stratosphere, mesosphere, and thermosphere. The troposphere is the lowest of the four layers, extending from the surface of the Earth to about into the atmosphere where the tropopause (the boundary between the troposphere stratosphere) is located. The width of the troposphere can vary depending on latitude, for example, the troposphere is thicker in the tropics (about ) because the tropics are generally warmer, and thinner at the poles (about ) because the poles are colder. Temperatures in the atmosphere decrease with height at an average rate of 6.5°C (11.7°F) per kilometer. Because the troposphere experiences its warmest temperatures closer to Earth's surface, there is great vertical movement of heat and water vapour, causing turbulence. This turbulence, in conjunction with the presence of water vapour, is the reason that weather occurs within the troposphere. Following the tropopause is the stratosphere. This layer extends from the tropopause to the stratopause which is located at an altitude of about . Temperatures remain constant with height from the tropopause to an altitude of , after which they start to increase with height. This happening is referred to as an inversion and It is because of this inversion that the stratosphere is not characterised as turbulent. The stratosphere receives its warmth from the sun and the ozone layer which ab Document 1::: Aeronomy is the scientific study of the upper atmosphere of the Earth and corresponding regions of the atmospheres of other planets. It is a branch of both atmospheric chemistry and atmospheric physics. Scientists specializing in aeronomy, known as aeronomers, study the motions and chemical composition and properties of the Earth's upper atmosphere and regions of the atmospheres of other planets that correspond to it, as well as the interaction between upper atmospheres and the space environment. In atmospheric regions aeronomers study, chemical dissociation and ionization are important phenomena. History The mathematician Sydney Chapman introduced the term aeronomy to describe the study of the Earth's upper atmosphere in 1946 in a letter to the editor of Nature entitled "Some Thoughts on Nomenclature." The term became official in 1954 when the International Union of Geodesy and Geophysics adopted it. "Aeronomy" later also began to refer to the study of the corresponding regions of the atmospheres of other planets. Branches Aeronomy can be divided into three main branches: terrestrial aeronomy, planetary aeronomy, and comparative aeronomy. Terrestrial aeronomy Terrestrial aeronomy focuses on the Earth's upper atmosphere, which extends from the stratopause to the atmosphere's boundary with outer space and is defined as consisting of the mesosphere, thermosphere, and exosphere and their ionized component, the ionosphere. 
Terrestrial aeronomy contrasts with meteorology, which is the scientific study of the Earth's lower atmosphere, defined as the troposphere and stratosphere. Although terrestrial aeronomy and meteorology once were completely separate fields of scientific study, cooperation between terrestrial aeronomers and meteorologists has grown as discoveries made since the early 1990s have demonstrated that the upper and lower atmospheres have an impact on one another's physics, chemistry, and biology. Terrestrial aeronomers study atmospheric tides and upper- Document 2::: In aviation, ceiling is a measurement of the height of the base of the lowest clouds (not to be confused with cloud base which has a specific definition) that cover more than half of the sky (more than 4 oktas) relative to the ground. Ceiling is not specifically reported as part of the METAR (METeorological Aviation Report) used for flight planning by pilots worldwide, but can be deduced from the lowest height with broken (BKN) or overcast (OVC) reported. A ceiling listed as "unlimited" means either that the sky is mostly free of cloud cover, or that the cloud is high enough not to impede Visual Flight Rules (VFR) operation. Definitions ICAO The height above the ground or water of the base of the lowest layer of cloud below 6000 meters (20,000 feet) covering more than half the sky. United Kingdom The vertical distance from the elevation of an aerodrome to the lowest part of any cloud visible from the aerodrome which is sufficient to obscure more than half of the sky. United States The height above the Earth's surface of the lowest layer of clouds or obscuring phenomena that is reported as broken, overcast, or obscuration, and not classified as thin or partial. See also Cloud base Document 3::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. 
Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 4::: Vertical Distribution of Ice in Arctic Clouds (VERDI) is the name of a German research project on the topic of Arctic clouds. Measurements within this project were conducted in April and May 2012 around Inuvik, Canada, organized by the University of Leipzig. The project aims at an improvement of knowledge about the effects of clouds in the Arctic climate system. The main question within VERDI is the distribution of ice crystals and liquid water droplets within the clouds. That distribution depends on various parameters, such as temperature and the cloud's life cycle. Measurements During VERDI, airborne observations were conducted inside and above low-level Arctic clouds. Their properties were characterized in detail. The observation methods included remote sensing (both active and passive) covering a large cloud extent, as well as in-situ probing of cloud droplets and ice crystals. The measurements were conducted on board of the research aircraft Polar 5 (call sign C-GAWI) of the Alfred Wegener Institute for Polar and Marine Research, which was based in Inuvik, Canada, for the measurements. The measurement flights were located in the south-eastern Beaufort Sea north of the Mackenzie Delta in the Northwest Territories. Altogether, sixteen flights were conducted between 21 April and 17 May 2012. Funding VERDI has been funded by the Alfred Wegener Institute for Polar and Marine Research, by the German Research Foundation (DFG), by Forschungszentrum Jülich (FZJ), and by the Karlsruhe Institute of Technology (KIT). In addition, the Max Planck Institute for Chemistry in Mainz, the Johannes Gutenberg University of Mainz and the Institute of Atmospheric Physics of the German Aerospace Center. Logistically, the campaign was supported by the Aurora Research Institute in Inuvik. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the layer above the troposphere? A. mesosphere B. troposphere C. stratosphere D. thermosphere Answer:
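The average lapse rate of 6.5 °C per kilometre quoted in Document 0 above gives a quick estimate of tropospheric temperature at a given height; a minimal sketch, in which the 15 °C surface temperature and 11 km tropopause height are assumed example values only:

def troposphere_temperature_c(surface_temp_c, altitude_km, lapse_rate_c_per_km=6.5):
    # Within the troposphere, temperature falls roughly linearly with height.
    return surface_temp_c - lapse_rate_c_per_km * altitude_km

# e.g. troposphere_temperature_c(15.0, 11.0) = 15 - 6.5 * 11 = -56.5 °C near the tropopause;
# above the tropopause the stratospheric inversion takes over and this linear estimate no longer applies.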
sciq-7817
multiple_choice
What theory brings together continental drift and seafloor spreading?
[ "theory of front tectonics", "theory of plate tectonics", "theory of order tectonics", "theory of water tectonics" ]
B
Relavent Documents: Document 0::: The Vine–Matthews–Morley hypothesis, also known as the Morley–Vine–Matthews hypothesis, was the first key scientific test of the seafloor spreading theory of continental drift and plate tectonics. Its key impact was that it allowed the rates of plate motions at mid-ocean ridges to be computed. It states that the Earth's oceanic crust acts as a recorder of reversals in the geomagnetic field direction as seafloor spreading takes place. History Harry Hess proposed the seafloor spreading hypothesis in 1960 (published in 1962); the term "spreading of the seafloor" was introduced by geophysicist Robert S. Dietz in 1961. According to Hess, seafloor was created at mid-oceanic ridges by the convection of the earth's mantle, pushing and spreading the older crust away from the ridge. Geophysicist Frederick John Vine and the Canadian geologist Lawrence W. Morley independently realized that if Hess's seafloor spreading theory was correct, then the rocks surrounding the mid-oceanic ridges should show symmetric patterns of magnetization reversals using newly collected magnetic surveys. Both of Morley's letters to Nature (February 1963) and Journal of Geophysical Research (April 1963) were rejected, hence Vine and his PhD adviser at Cambridge University, Drummond Hoyle Matthews, were first to publish the theory in September 1963. Some colleagues were skeptical of the hypothesis because of the numerous assumptions made—seafloor spreading, geomagnetic reversals, and remanent magnetism—all hypotheses that were still not widely accepted. The Vine–Matthews–Morley hypothesis describes the magnetic reversals of oceanic crust. Further evidence for this hypothesis came from Allan V. Cox and colleagues (1964) when they measured the remanent magnetization of lavas from land sites. Walter C. Pitman and J. R. Heirtzler offered further evidence with a remarkably symmetric magnetic anomaly profile from the Pacific-Antarctic Ridge. Marine magnetic anomalies The Vine–Matthews-Morley hypothesis Document 1::: The evolution of tectonophysics is closely linked to the history of the continental drift and plate tectonics hypotheses. The continental drift/ Airy-Heiskanen isostasy hypothesis had many flaws and scarce data. The fixist/ Pratt-Hayford isostasy, the contracting Earth and the expanding Earth concepts had many flaws as well. The idea of continents with a permanent location, the geosyncline theory, the Pratt-Hayford isostasy, the extrapolation of the age of the Earth by Lord Kelvin as a black body cooling down, the contracting Earth, the Earth as a solid and crystalline body, is one school of thought. A lithosphere creeping over the asthenosphere is a logical consequence of an Earth with internal heat by radioactivity decay, the Airy-Heiskanen isostasy, thrust faults and Niskanen's mantle viscosity determinations. Making sense of the puzzle pieces 1953, the Great Global Rift, running along the Mid-Atlantic Ridge, was discovered by Bruce Heezen (Lamont Group) (Puzzle pieces: Seismic-refraction and Sonar survey of the rifts). , , , , Their world ocean floor map was published 1977. Austrian painter Heinrich Berann worked on it. Nowadays the seafloor maps have a better resolution by the SEASAT, Geosat/ERM and ERS-1/ERM (European Remote-Sensing Satellite/Exact Repeat Mission) missions. World map of earthquake epicenters, oceanic ones mainly . 1954–1963: Alfred Rittmann was elected IAV President (IAV at that time) for three periods. 1956, S. K. Runcorn becomes a drifter. 
, Statistics by Ronald Fisher. , Jan Hospers work (magnetic poles and geographical poles coincide the last 23 Ma). Self-exciting dynamo theory of Elsasser-Bullard. S. W. Carey, plate tectonics . But he believed here in an Expanding Earth. 1958, Henry William Menard notes that most mid-ocean ridges are halfway between the two continental edges ( cited in ). 1959, analysis of Vanguard satellite orbit suggests "large-scale convection currents in the mantle" . Seafloor spreading December Document 2::: Tectonophysics, a branch of geophysics, is the study of the physical processes that underlie tectonic deformation. This includes measurement or calculation of the stress- and strain fields on Earth’s surface and the rheologies of the crust, mantle, lithosphere and asthenosphere. Overview Tectonophysics is concerned with movements in the Earth's crust and deformations over scales from meters to thousands of kilometers. These govern processes on local and regional scales and at structural boundaries, such as the destruction of continental crust (e.g. gravitational instability) and oceanic crust (e.g. subduction), convection in the Earth's mantle (availability of melts), the course of continental drift, and second-order effects of plate tectonics such as thermal contraction of the lithosphere. This involves the measurement of a hierarchy of strains in rocks and plates as well as deformation rates; the study of laboratory analogues of natural systems; and the construction of models for the history of deformation. History Tectonophysics was adopted as the name of a new section of AGU on April 19, 1940, at AGU's 21st Annual Meeting. According to the AGU website (https://tectonophysics.agu.org/agu-100/section-history/), using the words from Norman Bowen, the main goal of the tectonophysics section was to “designate this new borderline field between geophysics, physics and geology … for the solution of problems of tectonics.” Consequently, the claim below that the term was defined in 1954 by Gzolvskii is clearly incorrect. Since 1940 members of AGU had been presenting papers at AGU meetings, the contents of which defined the meaning of the field. Tectonophysics was defined as a field in 1954 when Mikhail Vladimirovich Gzovskii published three papers in the journal Izvestiya Akad. Nauk SSSR, Sireya Geofizicheskaya: "On the tasks and content of tectonophysics", "Tectonic stress fields", and "Modeling of tectonic stress fields". He defined the main goals of tectonophysica Document 3::: The depth of the seafloor on the flanks of a mid-ocean ridge is determined mainly by the age of the oceanic lithosphere; older seafloor is deeper. During seafloor spreading, lithosphere and mantle cooling, contraction, and isostatic adjustment with age cause seafloor deepening. This relationship has come to be better understood since around 1969 with significant updates in 1974 and 1977. Two main theories have been put forward to explain this observation: one where the mantle including the lithosphere is cooling; the cooling mantle model, and a second where a lithosphere plate cools above a mantle at a constant temperature; the cooling plate model. The cooling mantle model explains the age-depth observations for seafloor younger than 80 million years. The cooling plate model explains the age-depth observations best for seafloor older that 20 million years. In addition, the cooling plate model explains the almost constant depth and heat flow observed in very old seafloor and lithosphere. 
In practice it is convenient to use the solution for the cooling mantle model for an age-depth relationship younger than 20 million years. Older than this the cooling plate model fits data as well. Beyond 80 million years the plate model fits better than the mantle model. Background The first theories for seafloor spreading in the early and mid twentieth century explained the elevations of the mid-ocean ridges as upwellings above convection currents in Earth's mantle. The next idea connected seafloor spreading and continental drift in a model of plate tectonics. In 1969, the elevations of ridges was explained as thermal expansion of a lithospheric plate at the spreading center. This 'cooling plate model' was followed in 1974 by noting that elevations of ridges could be modeled by cooling of the whole upper mantle including any plate. This was followed in 1977 by a more refined plate model which explained data that showed that both the ocean depths and ocean crust heat flow approa Document 4::: Plume tectonics is a geoscientific theory that finds its roots in the mantle doming concept which was especially popular during the 1930s and initially did not accept major plate movements and continental drifting. It has survived from the 1970s until today in various forms and presentations. It has slowly evolved into a concept that recognises and accepts large-scale plate motions such as envisaged by plate tectonics, but placing them in a framework where large mantle plumes are the major driving force of the system. The initial followers of the concept during the first half of the 20th century are scientists like Beloussov and van Bemmelen, and recently the concept has gained interest especially in Japan, through new compiled work on palaeomagnetism, and is still advocated by the group of scientists elaboration upon Earth expansion. It is nowadays generally not accepted as the main theory to explain the driving forces of tectonic plate movements, although numerous modulations on the concept have been proposed. The theory focuses on the movements of mantle plumes under tectonic plates, viewing them as the major driving force of movements of (parts of) the Earth's crust. In its more modern form, conceived in the 1970s, it tries to reconcile in one single geodynamic model the horizontalistic concept of plate tectonics, and the verticalistic concepts of mantle plumes, by the gravitational movement of plates away from major domes of the Earth's crust. The existence of various supercontinents in Earth history and their break-up has been associated recently with major upwellings of the mantle. It is classified together with mantle convection as one of the mechanism that are used to explain the movements of tectonic plates. It also shows affinity with the concept of hot spots which is used in modern-day plate tectonics to generate a framework of specific mantle upwelling points that are relatively stable throughout time and are used to calibrate the plate movements usin The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What theory brings together continental drift and seafloor spreading? A. theory of front tectonics B. theory of plate tectonics C. theory of order tectonics D. theory of water tectonics Answer:
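Document 3 above describes the age-depth relationship qualitatively; for young seafloor the cooling (half-space) model is often summarised as a square-root-of-age law. The sketch below uses commonly quoted textbook constants (about 2,500 m ridge depth and 350 m per square root of million years), which are assumptions and not taken from the documents:

import math

def seafloor_depth_m(age_myr, ridge_depth_m=2500.0, subsidence_coeff=350.0):
    # Cooling-mantle (half-space) model: depth increases with the square root of lithospheric age.
    return ridge_depth_m + subsidence_coeff * math.sqrt(age_myr)

# e.g. seafloor_depth_m(50) ≈ 2500 + 350 * 7.07 ≈ 4975 m for 50-million-year-old seafloor.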
sciq-1732
multiple_choice
What debated therapy offers a potential method for replacing neurons lost to injury or disease?
[ "cell duplication", "stem cell reduction", "stem cell therapy", "cell production therapy" ]
C
Relavent Documents: Document 0::: Catherina Gwynne Becker (née Krüger) is an Alexander von Humboldt Professor at TU Dresden, and was formerly Professor of Neural Development and Regeneration at the University of Edinburgh. Early life and education Catherina Becker was born in Marburg, Germany in 1964. She was educated at the in Bremen, before going on to study at the University of Bremen where she obtained an MSci of Biology and her PhD (Dr. rer. nat.) in 1993, investigating visual system development and regeneration in frogs and salamanders under the supervision of Gerhard Roth. She then trained as post-doctorate at the Swiss Federal Institute of Technology in Zürich, the Department Dev Cell Biol funded by an EMBO long-term fellowship, at the University of California, Irvine in USA and the Centre for Molecular Neurobiology Hamburg (ZMNH), Germany where she took a position of group leader in 2000 and finished her ‚Habilitation‘ in neurobiology in 2012. Career Becker joined the University of Edinburgh in 2005 as senior Lecturer and was appointed personal chair in neural development and regeneration in 2013. She was also the Director of Postgraduate Training at the Centre for Neuroregeneration up to 2015, then centre director up to 2017. In 2021 she received an Alexander von Humboldt Professorship, joining the at the Technical University of Dresden. Research Becker's research focuses on a better understanding of the factors governing the generation of neurons and axonal pathfinding in the CNS during development and regeneration using the zebrafish model to identify fundamental mechanisms in vertebrates with clear translational implications for CNS injury and neurodegenerative diseases. The Becker group established the zebrafish as a model for spinal cord regeneration. Their research found that functional regeneration is near perfect, but anatomical repair does not fully recreate the previous network, instead, new neurons are generated and extensive rewiring occurs. They have identified neurotra Document 1::: Endogenous regeneration in the brain is the ability of cells to engage in the repair and regeneration process. While the brain has a limited capacity for regeneration, endogenous neural stem cells, as well as numerous pro-regenerative molecules, can participate in replacing and repairing damaged or diseased neurons and glial cells. Another benefit that can be achieved by using endogenous regeneration could be avoiding an immune response from the host. Neural stem cells in the adult brain During the early development of a human, neural stem cells lie in the germinal layer of the developing brain, ventricular and subventricular zones. In brain development, multipotent stem cells (those that can generate different types of cells) are present in these regions, and all of these cells differentiate into neural cell forms, such as neurons, oligodendrocytes and astrocytes. A long-held belief states that the multipotency of neural stem cells would be lost in the adult human brain. However, it is only in vitro, using neurosphere and adherent monolayer cultures, that stem cells from the adult mammalian brain have shown multipotent capacity, while the in vivo study is not convincing. Therefore, the term "neural progenitor" is used instead of "stem cell" to describe limited regeneration ability in the adult brain stem cell. Neural stem cells (NSC) reside in the subventricular zone (SVZ) of the adult human brain and the dentate gyrus of the adult mammalian hippocampus. 
Newly formed neurons from these regions participate in learning, memory, olfaction and mood modulation. It has not been definitively determined whether or not these stem cells are multipotents. NSC from the hippocampus of rodents, which can differentiate into dentate granule cells, have developed into many cell types when studied in culture. However, another in vivo study, using NSCs in the postnatal SVZ, showed that the stem cell is restricted to developing into different neuronal sub-type cells in the olfactory Document 2::: Neuroregeneration involves the regrowth or repair of nervous tissues, cells or cell products. Neuroregenerative mechanisms may include generation of new neurons, glia, axons, myelin, or synapses. Neuroregeneration differs between the peripheral nervous system (PNS) and the central nervous system (CNS) by the functional mechanisms involved, especially in the extent and speed of repair. When an axon is damaged, the distal segment undergoes Wallerian degeneration, losing its myelin sheath. The proximal segment can either die by apoptosis or undergo the chromatolytic reaction, which is an attempt at repair. In the CNS, synaptic stripping occurs as glial foot processes invade the dead synapse. Nervous system injuries affect over 90,000 people every year. Spinal cord injuries alone affect an estimated 10,000 people each year. As a result of this high incidence of neurological injuries, nerve regeneration and repair, a subfield of neural tissue engineering, is becoming a rapidly growing field dedicated to the discovery of new ways to recover nerve functionality after injury. The nervous system is divided by neurologists into two parts: the central nervous system (which consists of the brain and spinal cord) and the peripheral nervous system (which consists of cranial and spinal nerves along with their associated ganglia). While the peripheral nervous system has an intrinsic ability for repair and regeneration, the central nervous system is, for the most part, incapable of self-repair and regeneration. There is no treatment for recovering human nerve-function after injury to the central nervous system. Multiple attempts at nerve re-growth across the PNS-CNS transition have not been successful. There is simply not enough knowledge about regeneration in the central nervous system. In addition, although the peripheral nervous system has the capability for regeneration, much research still needs to be done to optimize the environment for maximum regrowth potential. Neurore Document 3::: Stem-cell therapy is the use of stem cells to treat or prevent a disease or condition. , the only established therapy using stem cells is hematopoietic stem cell transplantation. This usually takes the form of a bone-marrow transplantation, but the cells can also be derived from umbilical cord blood. Research is underway to develop various sources for stem cells as well as to apply stem-cell treatments for neurodegenerative diseases and conditions such as diabetes and heart disease. Stem-cell therapy has become controversial following developments such as the ability of scientists to isolate and culture embryonic stem cells, to create stem cells using somatic cell nuclear transfer and their use of techniques to create induced pluripotent stem cells. This controversy is often related to abortion politics and to human cloning. Additionally, efforts to market treatments based on transplant of stored umbilical cord blood have been controversial. 
Medical uses For over 90 years, hematopoietic stem cell transplantation (HSCT) has been used to treat people with conditions such as leukaemia and lymphoma; this is the only widely practiced form of stem-cell therapy. During chemotherapy, most growing cells are killed by the cytotoxic agents. These agents, however, cannot discriminate between the leukaemia or neoplastic cells, and the hematopoietic stem cells within the bone marrow. This is the side effect of conventional chemotherapy strategies that the stem-cell transplant attempts to reverse; a donor's healthy bone marrow reintroduces functional stem cells to replace the cells lost in the host's body during treatment. The transplanted cells also generate an immune response that helps to kill off the cancer cells; this process can go too far, however, leading to graft-vs-host disease, the most serious side effect of this treatment. Another stem-cell therapy, called Prochymal, was conditionally approved in Canada in 2012 for the management of acute graft-vs-host disease Document 4::: The Stem Cell Network (SCN) is a Canadian non-profit that supports stem cell and regenerative medicine research, teaches the next generation of highly qualified personnel, and delivers outreach activities across Canada. The Network has been supported by the Government of Canada since its inception in 2001. SCN has catalyzed 25 clinical trials, 21 start-up companies, incubated several international and Canadian research networks and organizations, and established the Till & McCulloch Meetings, Canada's foremost stem cell research event. The organization is based in Ottawa, Ontario. Activities Annual Scientific Conference Since 2001, SCN has hosted an annual scientific conference. This conference is open to SCN investigators and trainees, and provides a forum to share new research. The conference takes place in a different Canadian city each year. In 2012, the annual conference was re-branded as the Till & McCulloch Meetings. The establishment of the Meetings ensured that the country's stem cell and regenerative medicine research community would continue to have a venue for collaboration and the sharing of important research. The Till & McCulloch Meetings are Canada's largest stem cell and regenerative medicine conference. Research Funding Programs Training The SCN training program includes studentships, fellowships, research grants and workshops. Since 2001, SCN has offered training opportunities to more than 5,000 trainees. Organization Member institutions SCN and its membership engage in collaborative funding and research activities. Current member institutions include: Partners The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What debated therapy offers a potential method for replacing neurons lost to injury or disease? A. cell duplication B. stem cell reduction C. stem cell therapy D. cell production therapy Answer:
sciq-9849
multiple_choice
What is a byproduct from the combustion of fossil fuels like coal and gasoline?
[ "carbon monoxide", "aluminum oxide", "alcohol", "nitrogen dioxide" ]
D
Relavent Documents: Document 0::: The Fischer–Tropsch process (FT) is a collection of chemical reactions that converts a mixture of carbon monoxide and hydrogen, known as syngas, into liquid hydrocarbons. These reactions occur in the presence of metal catalysts, typically at temperatures of and pressures of one to several tens of atmospheres. The Fischer–Tropsch process is an important reaction in both coal liquefaction and gas to liquids technology for producing liquid hydrocarbons. In the usual implementation, carbon monoxide and hydrogen, the feedstocks for FT, are produced from coal, natural gas, or biomass in a process known as gasification. The process then converts these gases into synthetic lubrication oil and synthetic fuel. This process has received intermittent attention as a source of low-sulfur diesel fuel and to address the supply or cost of petroleum-derived hydrocarbons. Fischer-Tropsch process is discussed as a step of producing carbon-neutral liquid hydrocarbon fuels from CO2 and hydrogen. The process was first developed by Franz Fischer and Hans Tropsch at the Kaiser Wilhelm Institute for Coal Research in Mülheim an der Ruhr, Germany, in 1925. Reaction mechanism The Fischer–Tropsch process involves a series of chemical reactions that produce a variety of hydrocarbons, ideally having the formula (CnH2n+2). The more useful reactions produce alkanes as follows: (2n + 1) H2 + n CO → CnH2n+2 + n H2O where n is typically 10–20. The formation of methane (n = 1) is unwanted. Most of the alkanes produced tend to be straight-chain, suitable as diesel fuel. In addition to alkane formation, competing reactions give small amounts of alkenes, as well as alcohols and other oxygenated hydrocarbons. The reaction is a highly exothermic reaction due to a standard reaction enthalpy (ΔH) of −165 kJ/mol CO combined. Fischer–Tropsch intermediates and elemental reactions Converting a mixture of H2 and CO into aliphatic products is a multi-step reaction with several intermediate compounds. The Document 1::: Biodesulfurization is the process of removing sulfur from crude oil through the use of microorganisms or their enzymes. Background Crude oil contains sulfur in its composition, with the latter being the most abundant element after carbon and hydrogen. Depending on its source, the amount of sulfur present in crude oil can range from 0.05 to 10%. Accordingly, the oil can be classified as sweet or sour if the sulfur concentration is below or above 0.5%, respectively. The combustion of crude oil releases sulfur oxides (SOx) to the atmosphere, which are harmful to public health and contribute to serious environmental effects such as air pollution and acid rains. In addition, the sulfur content in crude oil is a major problem for refineries, as it promotes the corrosion of the equipment and the poisoning of the noble metal catalysts. The levels of sulfur in any oil field are too high for the fossil fuels derived from it (such as gasoline, diesel, or jet fuel ) to be used in combustion engines without pre-treatment to remove organosulfur compounds. The reduction of the concentration of sulfur in crude oil becomes necessary to mitigate one of the leading sources of the harmful health and environmental effects caused by its combustion. In this sense, the European union has taken steps to decrease the sulfur content in diesel below 10 ppm, while the US has made efforts to restrict the sulfur content in diesel and gasoline to a maximum of 15 ppm. 
The reduction of sulfur compounds in oil fuels can be achieved by a process named desulfurization. Methods used for desulfurization include, among others, hydrodesulfurization, oxidative desulfurization, extractive desulfurization, and extraction by ionic liquids. Despite their efficiency at reducing sulfur content, the conventional desulfurization methods are still accountable for a significant amount of the CO2 emissions associated with the crude oil refining process, releasing up to 9000 metric tons per year. Furthermore, the Document 2::: Butane () or n-butane is an alkane with the formula C4H10. Butane is a highly flammable, colorless, easily liquefied gas that quickly vaporizes at room temperature and pressure. The name butane comes from the root but- (from butyric acid, named after the Greek word for butter) and the suffix -ane. It was discovered in crude petroleum in 1864 by Edmund Ronalds, who was the first to describe its properties, and commercialized by Walter O. Snelling in early 1910s. Butane is one of a group of liquefied petroleum gases (LP gases). The others include propane, propylene, butadiene, butylene, isobutylene, and mixtures thereof. Butane burns more cleanly than both gasoline and coal. History The first synthesis of butane was accidentally achieved by British chemist Edward Frankland in 1849 from ethyl iodide and zinc, but he had not realized that the ethyl radical dimerized and misidentified the substance. The proper discoverer of the butane called it "hydride of butyl", but already in the 1860s more names were used: "butyl hydride", "hydride of tetryl" and "tetryl hydride", "diethyl" or "ethyl ethylide" and others. August Wilhelm von Hofmann in his 1866 systemic nomenclature proposed the name "quartane", and the modern name was introduced to English from German around 1874. Butane did not have much practical use until the 1910s, when W. Snelling identified butane and propane as components in gasoline and found that, if they were cooled, they could be stored in a volume-reduced liquified state in pressurized containers. Density The density of butane is highly dependent on temperature and pressure in the reservoir. For example, the density of liquid propane is 571.8±1 kg/m3 (for pressures up to 2MPa and temperature 27±0.2 °C), while the density of liquid butane is 625.5±0.7 kg/m3 (for pressures up to 2MPa and temperature -13±0.2 °C). Isomers Rotation about the central C−C bond produces two different conformations (trans and gauche) for n-butane. Reactions When oxyg Document 3::: The Mega Borg oil spill occurred in the Gulf of Mexico on June 8, 1990, roughly 50 miles off the coast of Texas, when the oil tanker Mega Borg caught on fire and exploded. The cleanup was one of the first practical uses of bioremediation. Initial explosion and cause At 11:30 PM on the evening of Friday June 8, 1990, an explosion in the cargo room of the Norwegian oil tanker the Mega Borg “ruptured the bulkhead between the pump room and the engine room”, causing the ship to catch fire and begin to leak oil. The 853-foot-long, 15-year-old vessel was about 50 miles off the coast of Galveston, Texas when the explosion occurred. The weather at the time was calm and the tanker had easily passed Coast Guard safety inspections in April earlier that year. 
While the direct cause of the engine room explosion remains unknown, the initial blast occurred during a lightering process in which the Mega Borg was transferring oil onto a smaller Italian tanker, the Fraqmura, in order to then transport the oil to Houston. This transfer was necessary, as the Mega Borg was too large to dock at the Texas port. Three million gallons of the total 38 million gallons of light Angolan Palanca crude oil on board the tanker were able to be transferred to the Fraqmura before the blast. Two days after the initial blast, there were five successive explosions in a ten-minute window. These explosions greatly increased the rate of the spill from the tanker into the water. By the end of that day (June 11) the tanker stern had dropped 58 feet and had stabilized five feet above the water line. This was either due to shifting cargo or the tanker taking on water, which would be an indication of the vessel’s imminent sinking. The light crude oil spilled in the Mega Borg incident was brown and evaporated much quicker than the heavy crude oil in spills such as the Exxon Valdez. This means that the oil is less likely to heavily coat nearby beaches, flora and fauna, however the tanker was carrying more oil Document 4::: Activated carbon, also called activated charcoal, is a form of carbon commonly used to filter contaminants from water and air, among many other uses. It is processed (activated) to have small, low-volume pores that increase the surface area available for adsorption (which is not the same as absorption) or chemical reactions. Activation is analogous to making popcorn from dried corn kernels: popcorn is light, fluffy, and its kernels have a high surface-area-to-volume ratio. Activated is sometimes replaced by active. Due to its high degree of microporosity, one gram of activated carbon has a surface area in excess of as determined by gas adsorption. Charcoal, before activation, has a specific surface area in the range of . An activation level sufficient for useful application may be obtained solely from high surface area. Further chemical treatment often enhances adsorption properties. Activated carbon is usually derived from waste products such as coconut husks; waste from paper mills has been studied as a source. These bulk sources are converted into charcoal before being 'activated'. When derived from coal it is referred to as activated coal. Activated coke is derived from coke. Uses Activated carbon is used in methane and hydrogen storage, air purification, capacitive deionization, supercapacitive swing adsorption, solvent recovery, decaffeination, gold purification, metal extraction, water purification, medicine, sewage treatment, air filters in respirators, filters in compressed air, teeth whitening, production of hydrogen chloride, edible electronics, and many other applications. Industrial One major industrial application involves use of activated carbon in metal finishing for purification of electroplating solutions. For example, it is the main purification technique for removing organic impurities from bright nickel plating solutions. A variety of organic chemicals are added to plating solutions for improving their deposit qualities and for enhancing The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is a byproduct from the combustion of fossil fuels like coal and gasoline? A. carbon monoxide B. aluminum oxide C. alcohol D. nitrogen dioxide Answer:
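As a worked instance of the Fischer–Tropsch stoichiometry quoted in Document 0 above, taking n = 12 (a diesel-range alkane) purely as an example: (2×12 + 1) H2 + 12 CO → C12H26 + 12 H2O, i.e. 25 H2 + 12 CO → C12H26 + 12 H2O.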
sciq-9039
multiple_choice
E. coli need what kind of acids to survive?
[ "bacterial", "boric", "hydrochloric", "amino" ]
D
Relavent Documents: Document 0::: MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States. Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students, the remainder require a subscription. the service is suspended with the message to: "Please check back with us in 2017". External links MicrobeLibrary Microbiology Document 1::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 2::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. 
Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New Document 3::: Biochemical engineering, also known as bioprocess engineering, is a field of study with roots stemming from chemical engineering and biological engineering. It mainly deals with the design, construction, and advancement of unit processes that involve biological organisms (such as fermentation) or organic molecules (often enzymes) and has various applications in areas of interest such as biofuels, food, pharmaceuticals, biotechnology, and water treatment processes. The role of a biochemical engineer is to take findings developed by biologists and chemists in a laboratory and translate that to a large-scale manufacturing process. History For hundreds of years, humans have made use of the chemical reactions of biological organisms in order to create goods. In the mid-1800s, Louis Pasteur was one of the first people to look into the role of these organisms when he researched fermentation. His work also contributed to the use of pasteurization, which is still used to this day. By the early 1900s, the use of microorganisms had expanded, and was used to make industrial products. Up to this point, biochemical engineering hadn't developed as a field yet. It wasn't until 1928 when Alexander Fleming discovered penicillin that the field of biochemical engineering was established. After this discovery, samples were gathered from around the world in order to continue research into the characteristics of microbes from places such as soils, gardens, forests, rivers, and streams. Today, biochemical engineers can be found working in a variety of industries, from food to pharmaceuticals. This is due to the increasing need for efficiency and production which requires knowledge of how biological systems and chemical reactions interact with each other and how they can be used to meet these needs. Education Biochemical engineering is not a major offered by most universities and is instead an area of interest under the chemical engineering major in most cases. The following universiti Document 4::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. 
They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. E. coli need what kind of acids to survive? A. bacterial B. boric C. hydrochloric D. amino Answer:
scienceQA-81
multiple_choice
What do these two changes have in common?
a banana getting ripe on the counter
newly poured concrete becoming hard
[ "Both are caused by heating.", "Both are chemical changes.", "Both are caused by cooling.", "Both are only physical changes." ]
B
Step 1: Think about each change.
A banana getting ripe on the counter is a chemical change. As a banana ripens, the type of matter in it changes. The peel changes color and the inside becomes softer and sweeter.
Concrete hardening is a chemical change. The chemicals in the concrete react with each other to form a different type of matter. The new matter is hard and strong.
Step 2: Look at each answer choice.
Both are only physical changes.
Both changes are chemical changes. They are not physical changes.
Both are chemical changes.
Both changes are chemical changes. The type of matter before and after each change is different.
Both are caused by heating.
Neither change is caused by heating.
Both are caused by cooling.
Neither change is caused by cooling.
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds. Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate. A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density. An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge. Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. 
Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change. Examples Heating and cooling Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation. Magnetism Ferro-magnetic materials can become magnetic. The process is reve Document 2::: Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria. Introduction Traditional exam script marking began in Cambridge 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.) Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental. The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel Document 3::: Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work. Description Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. 
These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments. The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics. Specialized branches include engineering optimization and engineering statistics. Engineering mathematics in tertiary educ Document 4::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do these two changes have in common? a banana getting ripe on the counter newly poured concrete becoming hard A. Both are caused by heating. B. Both are chemical changes. C. Both are caused by cooling. D. Both are only physical changes. Answer:
sciq-2965
multiple_choice
In the forward reaction, hydrogen and iodine combine to form what?
[ "ionic hydrogen", "iodinic hydrogen", "hydrogen ionicide", "hydrogen iodide" ]
D
Relavent Documents: Document 0::: Gas phase ion chemistry is a field of science encompassed within both chemistry and physics. It is the science that studies ions and molecules in the gas phase, most often enabled by some form of mass spectrometry. By far the most important applications for this science is in studying the thermodynamics and kinetics of reactions. For example, one application is in studying the thermodynamics of the solvation of ions. Ions with small solvation spheres of 1, 2, 3... solvent molecules can be studied in the gas phase and then extrapolated to bulk solution. Theory Transition state theory Transition state theory is the theory of the rates of elementary reactions which assumes a special type of chemical equilibrium (quasi-equilibrium) between reactants and activated complexes. RRKM theory RRKM theory is used to compute simple estimates of the unimolecular ion decomposition reaction rates from a few characteristics of the potential energy surface. Gas phase ion formation The process of converting an atom or molecule into an ion by adding or removing charged particles such as electrons or other ions can occur in the gas phase. These processes are an important component of gas phase ion chemistry. Associative ionization Associative ionization is a gas phase reaction in which two atoms or molecules interact to form a single product ion. where species A with excess internal energy (indicated by the asterisk) interacts with B to form the ion AB+. One or both of the interacting species may have excess internal energy. Charge-exchange ionization Charge-exchange ionization (also called charge-transfer ionization) is a gas phase reaction between an ion and a neutral species in which the charge of the ion is transferred to the neutral. Chemical ionization In chemical ionization, ions are produced through the reaction of ions of a reagent gas with other species. Some common reagent gases include: methane, ammonia, and isobutane. Chemi-ionization Chemi-ionization can Document 1::: Hydrogen–deuterium exchange (also called H–D or H/D exchange) is a chemical reaction in which a covalently bonded hydrogen atom is replaced by a deuterium atom, or vice versa. It can be applied most easily to exchangeable protons and deuterons, where such a transformation occurs in the presence of a suitable deuterium source, without any catalyst. The use of acid, base or metal catalysts, coupled with conditions of increased temperature and pressure, can facilitate the exchange of non-exchangeable hydrogen atoms, so long as the substrate is robust to the conditions and reagents employed. This often results in perdeuteration: hydrogen-deuterium exchange of all non-exchangeable hydrogen atoms in a molecule. An example of exchangeable protons which are commonly examined in this way are the protons of the amides in the backbone of a protein. The method gives information about the solvent accessibility of various parts of the molecule, and thus the tertiary structure of the protein. The theoretical framework for understanding hydrogen exchange in proteins was first described by Kaj Ulrik Linderstrøm-Lang and he was the first to apply H/D exchange to study proteins. Exchange reaction In protic solution exchangeable protons such as those in hydroxyl or amine group exchange protons with the solvent. If D2O is solvent, deuterons will be incorporated at these positions. The exchange reaction can be followed using a variety of methods (see Detection). 
Since this exchange is an equilibrium reaction, the molar amount of deuterium should be high compared to the exchangeable protons of the substrate. For instance, deuterium is added to a protein in H2O by diluting the H2O solution with D2O (e.g. tenfold). Usually exchange is performed at physiological pH (7.0–8.0) where proteins are in their most native ensemble of conformational states. The H/D exchange reaction can also be catalysed, by acid, base or metal catalysts such as platinum. For the backbone amide hydrogen atoms of p Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 3::: In chemistry, ethenium, protonated ethylene or ethyl cation is a positive ion with the formula . It can be viewed as a molecule of ethylene () with one added proton (), or a molecule of ethane () minus one hydride ion (). It is a carbocation; more specifically, a nonclassical carbocation. Preparation Ethenium has been observed in rarefied gases subjected to radiation. Another preparation method is to react certain proton donors such as , , , and with ethane at ambient temperature and pressures below 1 mmHg. (Other donors such as and form ethanium preferably to ethenium.) At room temperature and in a rarefied methane atmosphere, ethanium slowly dissociates to ethenium and . The reaction is much faster at 90 °C. Stability and reactions Contrary to some earlier reports, ethenium was found to be largely unreactive towards neutral methane at ambient temperature and low pressure (on the order of 1 mmHg), even though the reaction yielding sec- and is believed to be exothermic. 
Structure The structure of ethenium's ground state was in dispute for many years, but it was eventually agreed to be a non-classical structure, with the two carbon atoms and one of the hydrogen atoms forming a three-center two-electron bond. Calculations have shown that higher homologues, like the propyl and n-butyl cations also have bridged structures. Generally speaking, bridging appears to be a common means by which 1° alkyl carbocations achieve additional stabilization. Consequently, true 1° carbocations (with a classical structure) may be rare or nonexistent. Document 4::: O2•– + H+ + H2O2 → O2 + HO• + H2O    (step 3: propagation) Finally, the chain is terminated when the hydroxyl radical is scavenged by a ferrous ion: Fe2+ + HO• + H+ → Fe3+ + H2O        (step 4: termination) George showed in 1947 that, in water, step 3 cannot compete with the spontaneous disproportionation of superoxide The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. In the forward reaction, hydrogen and iodine combine to form what? A. ionic hydrogen B. iodinic hydrogen C. hydrogen ionicide D. hydrogen iodide Answer:
sciq-7572
multiple_choice
What are required when the damage from trauma or infection cannot be closed with sutures or staples?
[ "debris grafts", "bone grafts", "skin grafts", "tree grafts" ]
C
Relavent Documents: Document 0::: Scar free healing is the process by which significant injuries can heal without permanent damage to the tissue the injury has affected. In most healing, scars form due to the fibrosis and wound contraction, however in scar free healing, tissue is completely regenerated. During the 1990s, published research on the subject increased; it is a relatively recent term in the literature. Scar free healing occurs in foetal life but the ability progressively diminishes into adulthood. In other animals such as amphibians, however, tissue regeneration occurs, for example as skin regeneration in the adult axolotl. Scarring versus scar free healing Scarring takes place in response to damaged or missing tissue following injury due to biological processes or wounding: it is a process that occurs in order to replace the lost tissue. The process of scarring is complex, it involves the inflammatory response and remodelling amongst other cell activities. Many growth factors and cytokines are also involved in the process, as well as extracellular matrix interactions. Scarring during healing can create both physical and psychological problems, and is a significant clinical burden. Collagen, for instance, is abnormally organised in scar tissue; collagen in scars is arranged in parallel bundles of collagen fibers, whilst healthy scar free tissue has a "basket weave" structure (Figure 1). The difference in collagen arrangement along with a lack of difference in the dermal tissue when healing has taken place with or without scarring is indicative of regenerative failure of normal skin. Severe scarring resulting from these collagen deposits is known as hypertrophic scarring and is of great concern worldwide with an incidence ranging from 32–72%. Scar free healing in nature Unlike the limited regeneration seen in adult humans, many animal groups possess an ability to completely regenerate damaged tissue. Full limb regeneration is seen both in invertebrates (e.g. starfish and flatworms Document 1::: Postoperative wounds are those wounds acquired during surgical procedures. Postoperative wound healing occurs after surgery and normally follows distinct bodily reactions: the inflammatory response, the proliferation of cells and tissues that initiate healing, and the final remodeling. Postoperative wounds are different from other wounds in that they are anticipated and treatment is usually standardized depending on the type of surgery performed. Since the wounds are 'predicted' actions can be taken beforehand and after surgery that can reduce complications and promote healing. Healing sequence The body responds to postoperative wounds in the same manner as it does to tissue damage acquired in other circumstances. The inflammatory response is designed to create homeostasis. This first step is called the inflammatory stage. The next stage and wound healing is the infiltration of leukocytes and release of cytokines into the tissue. The inflammatory response and the infiltration of leukocytes occur simultaneously. The final stage of postoperative wound healing is called remodeling. Remodeling restores the structure of the tissue and that tissues ability to regain its function. Diagnosis Surgical wounds can begin to open between three and five days after surgery. The wound usually appears red and can be accompanied by drainage. Clinicians delay re-opening the wound unless it is necessary due to the potential of other complications. 
If the surgical wound worsens, or if a rupture of the digestive system is suspected the decision may be to investigate the source of the drainage or infection. Complications Wound dehiscence The rates of a surgical wound opening after surgery has remained constant. When a wound opens after surgery, the hospital stay becomes longer and the medical care becomes more intensive if a surgical wound opens after surgery. Infection Infection will complicate healing of surgical wounds and is commonly observed. Most infections are present wit Document 2::: Wound healing refers to a living organism's replacement of destroyed or damaged tissue by newly produced tissue. In undamaged skin, the epidermis (surface, epithelial layer) and dermis (deeper, connective layer) form a protective barrier against the external environment. When the barrier is broken, a regulated sequence of biochemical events is set into motion to repair the damage. This process is divided into predictable phases: blood clotting (hemostasis), inflammation, tissue growth (cell proliferation), and tissue remodeling (maturation and cell differentiation). Blood clotting may be considered to be part of the inflammation stage instead of a separate stage. The wound-healing process is not only complex but fragile, and it is susceptible to interruption or failure leading to the formation of non-healing chronic wounds. Factors that contribute to non-healing chronic wounds are diabetes, venous or arterial disease, infection, and metabolic deficiencies of old age. Wound care encourages and speeds wound healing via cleaning and protection from reinjury or infection. Depending on each patient's needs, it can range from the simplest first aid to entire nursing specialties such as wound, ostomy, and continence nursing and burn center care. Stages Hemostasis (blood clotting): Within the first few minutes of injury, platelets in the blood begin to stick to the injured site. They change into an amorphous shape, more suitable for clotting, and they release chemical signals to promote clotting. This results in the activation of fibrin, which forms a mesh and acts as "glue" to bind platelets to each other. This makes a clot that serves to plug the break in the blood vessel, slowing/preventing further bleeding. Inflammation: During this phase, damaged and dead cells are cleared out, along with bacteria and other pathogens or debris. This happens through the process of phagocytosis, where white blood cells engulf debris and destroy it. Platelet-derived growth factors Document 3::: Injury is physiological damage to the living tissue of any organism, whether in humans, in other animals, or in plants. Injuries can be caused in many ways, such as mechanically with penetration by sharp objects such as teeth or with blunt objects, by heat or cold, or by venoms and biotoxins. Injury prompts an inflammatory response in many taxa of animals; this prompts wound healing. In both plants and animals, substances are often released to help to occlude the wound, limiting loss of fluids and the entry of pathogens such as bacteria. Many organisms secrete antimicrobial chemicals which limit wound infection; in addition, animals have a variety of immune responses for the same purpose. Both plants and animals have regrowth mechanisms which may result in complete or partial healing over the injury. 
Taxonomic range Animals Injury in animals is sometimes defined as mechanical damage to anatomical structure, but it has a wider connotation of physical damage with any cause, including drowning, burns, and poisoning. Such damage may result from attempted predation, territorial fights, falls, and abiotic factors. Injury prompts an inflammatory response in animals of many different phyla; this prompts coagulation of the blood or body fluid, followed by wound healing, which may be rapid, as in the cnidaria. Arthropods are able to repair injuries to the cuticle that forms their exoskeleton to some extent. Animals in several phyla, including annelids, arthropods, cnidaria, molluscs, nematodes, and vertebrates are able to produce antimicrobial peptides to fight off infection following an injury. Humans Injury in humans has been studied extensively for its importance in medicine. Much of medical practice including emergency medicine and pain management is dedicated to the treatment of injuries. The World Health Organization has developed a classification of injuries in humans by categories including mechanism, objects/substances producing injury, place of occurrence, Document 4::: With physical trauma or disease suffered by an organism, healing involves the repairing of damaged tissue(s), organs and the biological system as a whole and resumption of (normal) functioning. Medicine includes the process by which the cells in the body regenerate and repair to reduce the size of a damaged or necrotic area and replace it with new living tissue. The replacement can happen in two ways: by regeneration in which the necrotic cells are replaced by new cells that form "like" tissue as was originally there; or by repair in which injured tissue is replaced with scar tissue. Most organs will heal using a mixture of both mechanisms. Within surgery, healing is more often referred to as recovery, and postoperative recovery has historically been viewed simply as restitution of function and readiness for discharge. More recently, it has been described as an energy‐requiring process to decrease physical symptoms, reach a level of emotional well‐being, regain functions, and re‐establish activities Healing is also referred to in the context of the grieving process. In psychiatry and psychology, healing is the process by which neuroses and psychoses are resolved to the degree that the client is able to lead a normal or fulfilling existence without being overwhelmed by psychopathological phenomena. This process may involve psychotherapy, pharmaceutical treatment or alternative approaches such as traditional spiritual healing. Regeneration In order for an injury to be healed by regeneration, the cell type that was destroyed must be able to replicate. Cells also need a collagen framework along which to grow. Alongside most cells there is either a basement membrane or a collagenous network made by fibroblasts that will guide the cells' growth. Since ischaemia and most toxins do not destroy collagen, it will continue to exist even when the cells around it are dead. Example Acute tubular necrosis (ATN) in the kidney is a case in which cells heal completely by regen The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are required when the damage from trauma or infection cannot be closed with sutures or staples? A. debris grafts B. bone grafts C. skin grafts D. tree grafts Answer:
sciq-348
multiple_choice
Water is recycled constantly through which system?
[ "the habitat", "the troposphere", "the hydropshere", "the ecosystem" ]
D
Relavent Documents: Document 0::: Earth system science (ESS) is the application of systems science to the Earth. In particular, it considers interactions and 'feedbacks', through material and energy fluxes, between the Earth's sub-systems' cycles, processes and "spheres"—atmosphere, hydrosphere, cryosphere, geosphere, pedosphere, lithosphere, biosphere, and even the magnetosphere—as well as the impact of human societies on these components. At its broadest scale, Earth system science brings together researchers across both the natural and social sciences, from fields including ecology, economics, geography, geology, glaciology, meteorology, oceanography, climatology, paleontology, sociology, and space science. Like the broader subject of systems science, Earth system science assumes a holistic view of the dynamic interaction between the Earth's spheres and their many constituent subsystems fluxes and processes, the resulting spatial organization and time evolution of these systems, and their variability, stability and instability. Subsets of Earth System science include systems geology and systems ecology, and many aspects of Earth System science are fundamental to the subjects of physical geography and climate science. Definition The Science Education Resource Center, Carleton College, offers the following description: "Earth System science embraces chemistry, physics, biology, mathematics and applied sciences in transcending disciplinary boundaries to treat the Earth as an integrated system. It seeks a deeper understanding of the physical, chemical, biological and human interactions that determine the past, current and future states of the Earth. Earth System science provides a physical basis for understanding the world in which we live and upon which humankind seeks to achieve sustainability". Earth System science has articulated four overarching, definitive and critically important features of the Earth System, which include: Variability: Many of the Earth System's natural 'modes' and variab Document 1::: The International Space Station Environmental Control and Life Support System (ECLSS) is a life support system that provides or controls atmospheric pressure, fire detection and suppression, oxygen levels, waste management and water supply. The highest priority for the ECLSS is the ISS atmosphere, but the system also collects, processes, and stores both waste and water produced and used by the crew—a process that recycles fluid from the sink, shower, toilet, and condensation from the air. The Elektron system aboard Zvezda and a similar system in Destiny generate oxygen aboard the station. The crew has a backup option in the form of bottled oxygen and Solid Fuel Oxygen Generation (SFOG) canisters. Carbon dioxide is removed from the air by the Vozdukh system in Zvezda, one Carbon Dioxide Removal Assembly (CDRA) located in the U.S. Lab module, and one CDRA in the U.S. Node 3 module. Other by-products of human metabolism, such as methane from flatulence and ammonia from sweat, are removed by activated charcoal filters or by the Trace Contaminant Control System (TCCS). Water recovery systems The ISS has two water recovery systems. Zvezda contains a water recovery system that processes water vapor from the atmosphere that could be used for drinking in an emergency but is normally fed to the Elektron system to produce oxygen. 
The American segment has a Water Recovery System installed during STS-126 that can process water vapour collected from the atmosphere and urine into water that is intended for drinking. The Water Recovery System was installed initially in Destiny on a temporary basis in November 2008 and moved into Tranquility (Node 3) in February 2010. The Water Recovery System consists of a Urine Processor Assembly and a Water Processor Assembly, housed in two of the three ECLSS racks. The Urine Processor Assembly uses a low pressure vacuum distillation process that uses a centrifuge to compensate for the lack of gravity and thus aid in separating liquids and g Document 2::: Water-use efficiency (WUE) refers to the ratio of water used in plant metabolism to water lost by the plant through transpiration. Two types of water-use efficiency are referred to most frequently: photosynthetic water-use efficiency (also called instantaneous water-use efficiency), which is defined as the ratio of the rate of carbon assimilation (photosynthesis) to the rate of transpiration, and water-use efficiency of productivity (also called integrated water-use efficiency), which is typically defined as the ratio of biomass produced to the rate of transpiration. Increases in water-use efficiency are commonly cited as a response mechanism of plants to moderate to severe soil water deficits and have been the focus of many programs that seek to increase crop tolerance to drought. However, there is some question as to the benefit of increased water-use efficiency of plants in agricultural systems, as the processes of increased yield production and decreased water loss due to transpiration (that is, the main driver of increases in water-use efficiency) are fundamentally opposed. If there existed a situation where water deficit induced lower transpirational rates without simultaneously decreasing photosynthetic rates and biomass production, then water-use efficiency would be both greatly improved and the desired trait in crop production. Document 3::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. 
An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 4::: Irrigation informatics is a newly emerging academic field that is a cross-disciplinary science using informatics to study the information flows and data management related to irrigation. The field is one of many new informatics sub-specialities that uses the science of information, the practice of information processing, and the engineering of information systems to advance a biophysical science or engineering field. Background Agricultural productivity increases are eagerly sought by governments and industry, spurred by the realisation that world food production must double in the 21st century to feed growing populations and that as irrigation makes up 36% of global food production, but that new land for irrigation growth is very limited, irrigation efficiency must increase. Since irrigation science is a mature and stable field, irrigation researchers are looking to cross-disciplinary science to bring about production gains and informatics is one such science along with others such as social science. Much of the driver for work in the area of irrigation informatics is the perceived success of other informatics fields such as health informatics. Current research Irrigation informatics is very much a part of the wider research into irrigation wherever information technology or data systems are used, however the term informatics is not always used to describe research involving computer systems and data management so that information science or information technology may alternatively be used. This leads to a great number of irrigation informatics articles not using the term irrigation informatics. There are currently no formal publications (journals) that focus on irrigation informatics with the publication most likely to present articles on the topic being Computers and electronics in Agriculture or one of the many irrigation science journals such as Irrigation Science. Recent work in the general area of irrigation informatics has mentioned the exact phrase "Ir The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Water is recycled constantly through which system? A. the habitat B. the troposphere C. the hydrosphere D. the ecosystem Answer:
sciq-11541
multiple_choice
What type of compounds can form crystals?
[ "integral compounds", "ionic compounds", "molecular compounds", "magnetic compounds" ]
B
Relavent Documents: Document 0::: Crystallization is the process by which solid forms, where the atoms or molecules are highly organized into a structure known as a crystal. Some ways by which crystals form are precipitating from a solution, freezing, or more rarely deposition directly from a gas. Attributes of the resulting crystal depend largely on factors such as temperature, air pressure, and in the case of liquid crystals, time of fluid evaporation. Crystallization occurs in two major steps. The first is nucleation, the appearance of a crystalline phase from either a supercooled liquid or a supersaturated solvent. The second step is known as crystal growth, which is the increase in the size of particles and leads to a crystal state. An important feature of this step is that loose particles form layers at the crystal's surface and lodge themselves into open inconsistencies such as pores, cracks, etc. The majority of minerals and organic molecules crystallize easily, and the resulting crystals are generally of good quality, i.e. without visible defects. However, larger biochemical particles, like proteins, are often difficult to crystallize. The ease with which molecules will crystallize strongly depends on the intensity of either atomic forces (in the case of mineral substances), intermolecular forces (organic and biochemical substances) or intramolecular forces (biochemical substances). Crystallization is also a chemical solid–liquid separation technique, in which mass transfer of a solute from the liquid solution to a pure solid crystalline phase occurs. In chemical engineering, crystallization occurs in a crystallizer. Crystallization is therefore related to precipitation, although the result is not amorphous or disordered, but a crystal. Process The crystallization process consists of two major events, nucleation and crystal growth which are driven by thermodynamic properties as well as chemical properties. Nucleation is the step where the solute molecules or atoms dispersed in the so Document 1::: In crystallography, the diamond cubic crystal structure is a repeating pattern of 8 atoms that certain materials may adopt as they solidify. While the first known example was diamond, other elements in group 14 also adopt this structure, including α-tin, the semiconductors silicon and germanium, and silicon–germanium alloys in any proportion. There are also crystals, such as the high-temperature form of cristobalite, which have a similar structure, with one kind of atom (such as silicon in cristobalite) at the positions of carbon atoms in diamond but with another kind of atom (such as oxygen) halfway between those (see :Category:Minerals in space group 227). Although often called the diamond lattice, this structure is not a lattice in the technical sense of this word used in mathematics. Crystallographic structure Diamond's cubic structure is in the Fdm space group (space group 227), which follows the face-centered cubic Bravais lattice. The lattice describes the repeat pattern; for diamond cubic crystals this lattice is "decorated" with a motif of two tetrahedrally bonded atoms in each primitive cell, separated by of the width of the unit cell in each dimension. The diamond lattice can be viewed as a pair of intersecting face-centered cubic lattices, with each separated by of the width of the unit cell in each dimension. 
Many compound semiconductors such as gallium arsenide, β-silicon carbide, and indium antimonide adopt the analogous zincblende structure, where each atom has nearest neighbors of an unlike element. Zincblende's space group is F3m, but many of its structural properties are quite similar to the diamond structure. The atomic packing factor of the diamond cubic structure (the proportion of space that would be filled by spheres that are centered on the vertices of the structure and are as large as possible without overlapping) is significantly smaller (indicating a less dense structure) than the packing factors for the face-centered and body-cent Document 2::: In crystallography, the term polysome is used to describe overall mineral structures which have structurally and compositionally different framework structures. A general example is amphiboles, in which cutting along the {010} plane yields alternating layers of pyroxene and trioctahedral mica. Document 3::: A crystal cluster is a group of crystals which are formed in an open space environment and exhibit euhedral crystal form determined by their internal crystal structure. A cluster of small crystals coating the walls of a cavity are called druse. See also Document 4::: In chemistry, water(s) of crystallization or water(s) of hydration are water molecules that are present inside crystals. Water is often incorporated in the formation of crystals from aqueous solutions. In some contexts, water of crystallization is the total mass of water in a substance at a given temperature and is mostly present in a definite (stoichiometric) ratio. Classically, "water of crystallization" refers to water that is found in the crystalline framework of a metal complex or a salt, which is not directly bonded to the metal cation. Upon crystallization from water, or water-containing solvents, many compounds incorporate water molecules in their crystalline frameworks. Water of crystallization can generally be removed by heating a sample but the crystalline properties are often lost. Compared to inorganic salts, proteins crystallize with large amounts of water in the crystal lattice. A water content of 50% is not uncommon for proteins. Applications Knowledge of hydration is essential for calculating the masses for many compounds. The reactivity of many salt-like solids is sensitive to the presence of water. The hydration and dehydration of salts is central to the use of phase-change materials for energy storage. Position in the crystal structure A salt with associated water of crystallization is known as a hydrate. The structure of hydrates can be quite elaborate, because of the existence of hydrogen bonds that define polymeric structures. Historically, the structures of many hydrates were unknown, and the dot in the formula of a hydrate was employed to specify the composition without indicating how the water is bound. Per IUPAC's recommendations, the middle dot is not surrounded by spaces when indicating a chemical adduct. Examples: – copper(II) sulfate pentahydrate – cobalt(II) chloride hexahydrate – tin(II) (or stannous) chloride dihydrate For many salts, the exact bonding of the water is unimportant because the water molecules are made labi The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of compounds can form crystals? A. integral compounds B. ionic compounds C. molecular compounds D. magnetic compounds Answer:
sciq-2486
multiple_choice
Water molds are commonly found in moist soil and where else?
[ "standing water", "crust water", "methane water", "surface water" ]
D
Relavent Documents: Document 0::: The University of Michigan Biological Station (UMBS) is a research and teaching facility operated by the University of Michigan. It is located on the south shore of Douglas Lake in Cheboygan County, Michigan. The station consists of 10,000 acres (40 km2) of land near Pellston, Michigan in the northern Lower Peninsula of Michigan and 3,200 acres (13 km2) on Sugar Island in the St. Mary's River near Sault Ste. Marie, in the Upper Peninsula. It is one of only 28 Biosphere Reserves in the United States. Overview Founded in 1909, it has grown to include approximately 150 buildings, including classrooms, student cabins, dormitories, a dining hall, and research facilities. Undergraduate and graduate courses are available in the spring and summer terms. It has a full-time staff of 15. In the 2000s, UMBS is increasingly focusing on the measurement of climate change. Its field researchers are gauging the impact of global warming and increased levels of atmospheric carbon dioxide on the ecosystem of the upper Great Lakes region, and are using field data to improve the computer models used to forecast further change. Several archaeological digs have been conducted at the station as well. UMBS field researchers sometimes call the station "bug camp" amongst themselves. This is believed to be due to the number of mosquitoes and other insects present. It is also known as "The Bio-Station". The UMBS is also home to Michigan's most endangered species and one of the most endangered species in the world: the Hungerford's Crawling Water Beetle. The species lives in only five locations in the world, two of which are in Emmet County. One of these, a two and a half mile stretch downstream from the Douglas Road crossing of the East Branch of the Maple River supports the only stable population of the Hungerford's Crawling Water Beetle, with roughly 1000 specimens. This area, though technically not part of the UMBS is largely within and along the boundary of the University of Michigan Document 1::: In hydrology, bound water, is an extremely thin layer of water surrounding mineral surfaces. Water molecules have a strong electrical polarity, meaning that there is a very strong positive charge on one side of the molecule and a strong negative charge on the other. This causes the water molecules to bond to each other and to other charged surfaces, such as soil minerals. Clay in particular has a high ability to bond with water molecules. The strong attraction between these surfaces causes an extremely thin water film (a few molecules thick) to form on the mineral surface. These water molecules are much less mobile than the rest of the water in the soil, and have significant effects on soil dielectric permittivity and freezing-thawing. In molecular biology and food science, bound water refers to the amount of water in body tissues which are bound to macromolecules or organelles. In food science this form of water is practically unavailable for microbiological activities so it would not cause quality decreases or pathogen increases. See also Adsorption Capillary action Effective porosity Surface tension Document 2::: Evapotranspiration (ET) is the combined processes which move water from the Earth's surface into the atmosphere. It covers both water evaporation (movement of water to the air directly from soil, canopies, and water bodies) and transpiration (evaporation that occurs through the stomata, or openings, in plant leaves). 
Evapotranspiration is an important part of the local water cycle and climate, and measurement of it plays a key role in agricultural irrigation and water resource management. Definition of evapotranspiration Evapotranspiration is a combination of evaporation and transpiration, measured in order to better understand crop water requirements, irrigation scheduling, and watershed management. The two key components of evapotranspiration are: Evaporation: the movement of water directly to the air from sources such as the soil and water bodies. It can be affected by factors including heat, humidity, solar radiation and wind speed. Transpiration: the movement of water from root systems, through a plant, and exit into the air as water vapor. This exit occurs through stomata in the plant. Rate of transpiration can be influenced by factors including plant type, soil type, weather conditions and water content, and also cultivation practices. Evapotranspiration is typically measured in millimeters of water (i.e. volume of water moved per unit area of the Earth's surface) in a set unit of time. Globally, it is estimated that on average between three-fifths and three-quarters of land precipitation is returned to the atmosphere via evapotranspiration. Evapotranspiration does not, in general, account for other mechanisms which are involved in returning water to the atmosphere, though some of these, such as snow and ice sublimation in regions of high elevation or high latitude, can make a large contribution to atmospheric moisture even under standard conditions. Factors that impact evapotranspiration levels Primary factors Because evaporation and transpiration Document 3::: Wet Processing Engineering is one of the major streams in Textile Engineering or Textile manufacturing which refers to the engineering of textile chemical processes and associated applied science. The other three streams in textile engineering are yarn engineering, fabric engineering, and apparel engineering. The processes of this stream are involved or carried out in an aqueous stage. Hence, it is called a wet process which usually covers pre-treatment, dyeing, printing, and finishing. The wet process is usually done in the manufactured assembly of interlacing fibers, filaments and yarns, having a substantial surface (planar) area in relation to its thickness, and adequate mechanical strength to give it a cohesive structure. In other words, the wet process is done on manufactured fiber, yarn and fabric. All of these stages require an aqueous medium which is created by water. A massive amount of water is required in these processes per day. It is estimated that, on an average, almost 50–100 liters of water is used to process only 1 kilogram of textile goods, depending on the process engineering and applications. Water can be of various qualities and attributes. Not all water can be used in the textile processes; it must have some certain properties, quality, color and attributes of being used. This is the reason why water is a prime concern in wet processing engineering. Water Water consumption and discharge of wastewater are the two major concerns. The textile industry uses a large amount of water in its varied processes especially in wet operations such as pre-treatment, dyeing, and printing. Water is required as a solvent of various dyes and chemicals and it is used in washing or rinsing baths in different steps. 
Water consumption depends upon the application methods, processes, dyestuffs, equipment/machines and technology which may vary mill to mill and material composition. Longer processing sequences, processing of extra dark colors and reprocessing lead Document 4::: Groundwater remediation is the process that is used to treat polluted groundwater by removing the pollutants or converting them into harmless products. Groundwater is water present below the ground surface that saturates the pore space in the subsurface. Globally, between 25 per cent and 40 per cent of the world's drinking water is drawn from boreholes and dug wells. Groundwater is also used by farmers to irrigate crops and by industries to produce everyday goods. Most groundwater is clean, but groundwater can become polluted, or contaminated as a result of human activities or as a result of natural conditions. The many and diverse activities of humans produce innumerable waste materials and by-products. Historically, the disposal of such waste have not been subject to many regulatory controls. Consequently, waste materials have often been disposed of or stored on land surfaces where they percolate into the underlying groundwater. As a result, the contaminated groundwater is unsuitable for use. Current practices can still impact groundwater, such as the over application of fertilizer or pesticides, spills from industrial operations, infiltration from urban runoff, and leaking from landfills. Using contaminated groundwater causes hazards to public health through poisoning or the spread of disease, and the practice of groundwater remediation has been developed to address these issues. Contaminants found in groundwater cover a broad range of physical, inorganic chemical, organic chemical, bacteriological, and radioactive parameters. Pollutants and contaminants can be removed from groundwater by applying various techniques, thereby bringing the water to a standard that is commensurate with various intended uses. Techniques Ground water remediation techniques span biological, chemical, and physical treatment technologies. Most ground water treatment techniques utilize a combination of technologies. Some of the biological treatment techniques include bioaugmentation, The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Water molds are commonly found in moist soil and where else? A. standing water B. crust water C. methane water D. surface water Answer:
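Illustrative aside on the evapotranspiration passage in the prompt above: at catchment scale, evapotranspiration is rarely measured directly and is often estimated from a simple water balance. A minimal sketch, assuming a closed catchment with negligible deep seepage (the symbols P, Q and \Delta S are editorial shorthand, not taken from the passage):

\[
ET \;\approx\; P \;-\; Q \;-\; \Delta S
\]

where P is precipitation, Q is streamflow leaving the catchment, and \Delta S is the change in stored water (soil moisture, groundwater, snowpack) over the accounting period, all expressed in millimetres of water as in the passage.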
sciq-8992
multiple_choice
The base in an antacid reacts to do what to excess stomach acid?
[ "expel it", "repel it", "oxidize it", "neutralize it" ]
D
Relavent Documents: Document 0::: Digestion is the breakdown of large insoluble food compounds into small water-soluble components so that they can be absorbed into the blood plasma. In certain organisms, these smaller substances are absorbed through the small intestine into the blood stream. Digestion is a form of catabolism that is often divided into two processes based on how food is broken down: mechanical and chemical digestion. The term mechanical digestion refers to the physical breakdown of large pieces of food into smaller pieces which can subsequently be accessed by digestive enzymes. Mechanical digestion takes place in the mouth through mastication and in the small intestine through segmentation contractions. In chemical digestion, enzymes break down food into the small compounds that the body can use. In the human digestive system, food enters the mouth and mechanical digestion of the food starts by the action of mastication (chewing), a form of mechanical digestion, and the wetting contact of saliva. Saliva, a liquid secreted by the salivary glands, contains salivary amylase, an enzyme which starts the digestion of starch in the food; the saliva also contains mucus, which lubricates the food, and hydrogen carbonate, which provides the ideal conditions of pH (alkaline) for amylase to work, and electrolytes (Na+, K+, Cl−, HCO−3). About 30% of starch is hydrolyzed into disaccharide in the oral cavity (mouth). After undergoing mastication and starch digestion, the food will be in the form of a small, round slurry mass called a bolus. It will then travel down the esophagus and into the stomach by the action of peristalsis. Gastric juice in the stomach starts protein digestion. Gastric juice mainly contains hydrochloric acid and pepsin. In infants and toddlers, gastric juice also contains rennin to digest milk proteins. As the first two chemicals may damage the stomach wall, mucus and bicarbonates are secreted by the stomach. They provide a slimy layer that acts as a shield against the damag Document 1::: Stomachic is a historic term for a medicine that serves to tone the stomach, improving its function and increase appetite. While many herbal remedies claim stomachic effects, modern pharmacology does not have an equivalent term for this type of action. Herbs with putative stomachic effects include: Agrimony Aloe Anise Avens (Geum urbanum) Barberry Bitterwood (Picrasmaa excelsa) Cannabis Cayenne Centaurium Cleome Colombo (herb) (Frasera carolinensis) Dandelion Elecampane Ginseng Goldenseal Grewia asiatica (Phalsa or Falsa) Hops Holy thistle Juniper berry Mint Mugwort Oregano Peach bark Rhubarb White mustard seeds Rose hips Rue Sweet flag (Acorus calamus) Wormwood (Artemisia absinthium) The purported stomachic mechanism of action of these substances is to stimulate the appetite by increasing the gastric secretions of the stomach; however, the actual therapeutic value of some of these compounds is dubious. Some other important agents used are: Bitters: used to stimulate the taste buds, thus producing reflex secretion of gastric juices. Quassia, Aristolochia, gentian, and chirata are commonly used. Alcohol: increases gastric secretion by direct action and also by the reflex stimulation of taste buds. Miscellaneous compounds: including insulin which increases the gastric secretion by producing hypoglycemia, and histamine, which produces direct stimulation of gastric glands. Document 2::: S cells are cells which release secretin, found in the jejunum and duodenum. 
They are stimulated by a drop in pH to 4 or below in the small intestine's lumen. The released secretin will increase the secretion of bicarbonate (HCO3−) into the lumen, via the pancreas. This is primarily accomplished by an increase in cyclic AMP that activates CFTR to release chloride anions into the lumen. The luminal Cl− is then involved in a bicarbonate transporter protein exchange, in which the chloride is reabsorbed by the cell and HCO3− is secreted into the lumen. S cells are also one of the main producers of cyclosamatin. Human cells Digestive system Document 3::: This is a list of articles that describe particular biomolecules or types of biomolecules. A For substances with an A- or α- prefix such as α-amylase, please see the parent page (in this case Amylase). A23187 (Calcimycin, Calcium Ionophore) Abamectine Abietic acid Acetic acid Acetylcholine Actin Actinomycin D Adenine Adenosmeme Adenosine diphosphate (ADP) Adenosine monophosphate (AMP) Adenosine triphosphate (ATP) Adenylate cyclase Adiponectin Adonitol Adrenaline, epinephrine Adrenocorticotropic hormone (ACTH) Aequorin Aflatoxin Agar Alamethicin Alanine Albumins Aldosterone Aleurone Alpha-amanitin Alpha-MSH (Melaninocyte stimulating hormone) Allantoin Allethrin α-Amanatin, see Alpha-amanitin Amino acid Amylase (also see α-amylase) Anabolic steroid Anandamide (ANA) Androgen Anethole Angiotensinogen Anisomycin Antidiuretic hormone (ADH) Anti-Müllerian hormone (AMH) Arabinose Arginine Argonaute Ascomycin Ascorbic acid (vitamin C) Asparagine Aspartic acid Asymmetric dimethylarginine ATP synthase Atrial-natriuretic peptide (ANP) Auxin Avidin Azadirachtin A – C35H44O16 B Bacteriocin Beauvericin beta-Hydroxy beta-methylbutyric acid beta-Hydroxybutyric acid Bicuculline Bilirubin Biopolymer Biotin (Vitamin H) Brefeldin A Brassinolide Brucine Butyric acid C Document 4::: The Joan Mott Prize Lecture is a prize lecture awarded annually by The Physiological Society in honour of Joan Mott. Laureates Laureates of the award have included: - Intestinal absorption of sugars and peptides: from textbook to surprises See also Physiological Society Annual Review Prize Lecture The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The base in an antacid reacts to do what to excess stomach acid? A. expel it B. repel it C. oxidize it D. neutralize it Answer:
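Illustrative aside on the antacid item above (sciq-8992): a minimal worked neutralization, assuming magnesium hydroxide as the antacid base (the item does not name a specific base, so this choice is hypothetical):

\[
\mathrm{Mg(OH)_2\,(s)} + 2\,\mathrm{HCl\,(aq)} \;\longrightarrow\; \mathrm{MgCl_2\,(aq)} + 2\,\mathrm{H_2O\,(l)}
\]

The base consumes excess hydrochloric acid to give a salt and water, raising the stomach pH toward neutral, which is why the keyed answer is "neutralize it".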
sciq-2162
multiple_choice
What do tadpoles change into?
[ "adult frogs", "toads", "mud puppies", "reptiles" ]
A
Relavent Documents: Document 0::: The common frog or grass frog (Rana temporaria), also known as the European common frog, European common brown frog, European grass frog, European Holarctic true frog, European pond frog or European brown frog, is a semi-aquatic amphibian of the family Ranidae, found throughout much of Europe as far north as Scandinavia and as far east as the Urals, except for most of the Iberian Peninsula, southern Italy, and the southern Balkans. The farthest west it can be found is Ireland. It is also found in Asia, and eastward to Japan. The nominative, and most common, subspecies Rana temporaria temporaria is a largely terrestrial frog native to Europe. It is distributed throughout northern Europe and can be found in Ireland, the Isle of Lewis and as far east as Japan. Common frogs metamorphose through three distinct developmental life stages — aquatic larva, terrestrial juvenile, and adult. They have corpulent bodies with a rounded snout, webbed feet and long hind legs adapted for swimming in water and hopping on land. Common frogs are often confused with the common toad (Bufo bufo), but frogs can easily be distinguished as they have longer legs, hop, and have a moist skin, whereas toads crawl and have a dry 'warty' skin. The spawn of the two species also differs, in that frog spawn is laid in clumps and toad spawn is laid in long strings. There are 3 subspecies of the common frog, R. t. temporaria, R. t. honnorati and R. t. palvipalmata. R. t. temporaria is the most common subspecies of this frog. Description The adult common frog has a body length of . In addition, its back and flanks vary in colour from olive green to grey-brown, brown, olive brown, grey, yellowish and rufous. However, it can lighten and darken its skin to match its surroundings. Some individuals have more unusual colouration—both black and red individuals have been found in Scotland, and albino frogs have been found with yellow skin and red eyes. During the mating season the male common frog tends to tu Document 1::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. 
Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 3::: Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women. The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development. Current status of girls and women in STEM education Overall trends in STEM education Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. 
This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle. Learning achievement in STEM education Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and Document 4::: Direct development is a concept in biology. It refers to forms of growth to adulthood that do not involve metamorphosis. An animal undergoes direct development if the immature organism resembles a small adult rather than having a distinct larval form. A frog that hatches out of its egg as a small frog undergoes direct development. A frog that hatches out of its egg as a tadpole does not. Direct development is the opposite of complete metamorphosis. An animal undergoes complete metamorphosis if it becomes a non-moving thing, for example a pupa in a cocoon, between its larval and adult stages. Examples Most frogs in the genus Callulina hatch out of their eggs as froglets. Springtails and mayflies, called ametabolous insects, undergo direct development. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do tadpoles change into? A. adult frogs B. toads C. mud puppies D. reptiles Answer:
sciq-10337
multiple_choice
What type of symmetry does an octopus have?
[ "bilateral", "internal", "quadrilateral", "essential" ]
A
Relavent Documents: Document 0::: Symmetry in biology refers to the symmetry observed in organisms, including plants, animals, fungi, and bacteria. External symmetry can be easily seen by just looking at an organism. For example, the face of a human being has a plane of symmetry down its centre, or a pine cone displays a clear symmetrical spiral pattern. Internal features can also show symmetry, for example the tubes in the human body (responsible for transporting gases, nutrients, and waste products) which are cylindrical and have several planes of symmetry. Biological symmetry can be thought of as a balanced distribution of duplicate body parts or shapes within the body of an organism. Importantly, unlike in mathematics, symmetry in biology is always approximate. For example, plant leaves – while considered symmetrical – rarely match up exactly when folded in half. Symmetry is one class of patterns in nature whereby there is near-repetition of the pattern element, either by reflection or rotation. While sponges and placozoans represent two groups of animals which do not show any symmetry (i.e. are asymmetrical), the body plans of most multicellular organisms exhibit, and are defined by, some form of symmetry. There are only a few types of symmetry which are possible in body plans. These are radial (cylindrical), bilateral, biradial and spherical symmetry. While the classification of viruses as an "organism" remains controversial, viruses also contain icosahedral symmetry. The importance of symmetry is illustrated by the fact that groups of animals have traditionally been defined by this feature in taxonomic groupings. The Radiata, animals with radial symmetry, formed one of the four branches of Georges Cuvier's classification of the animal kingdom. Meanwhile, Bilateria is a taxonomic grouping still used today to represent organisms with embryonic bilateral symmetry. Radial symmetry Organisms with radial symmetry show a repeating pattern around a central axis such that they can be separated in Document 1::: Polydactyly in stem-tetrapods should here be understood as having more than five digits to the finger or foot, a condition that was the natural state of affairs in the earliest stegocephalians during the evolution of terrestriality. The polydactyly in these largely aquatic animals is not to be confused with polydactyly in the medical sense, i.e. it was not an anomaly in the sense it was not a congenital condition of having more than the typical number of digits for a given taxon. Rather, it appears to be a result of the early evolution from a limb with a fin rather than digits. "Living tetrapods, such as the frogs, turtles, birds and mammals, are a subgroup of the tetrapod lineage. The lineage also includes finned and limbed tetrapods that are more closely related to living tetrapods than to living lungfishes." Tetrapods evolved from animals with fins such as found in lobe-finned fishes. From this condition a new pattern of limb formation evolved, where the development axis of the limb rotated to sprout secondary axes along the lower margin, giving rise to a variable number of very stout skeletal supports for a paddle-like foot. The condition is thought to have arisen from the loss of the fin ray-forming proteins actinodin 1 and actinodin 2 or modification of the expression of HOXD13. It is still unknown why exactly this happens. 
"SHH is produced by the mesenchymal cells of the zone of polarizing activity (ZPA) found at the posterior margin of the limbs of all vertebrates with paired appendages, including the most primitive chondrichthyian fishes. Its expression is driven by a well-conserved limb-specific enhancer called the ZRS (zone of polarizing region activity regulatory sequence) that is located approximately 1 Mb upstream of the coding sequence of Shh." Devonian taxa were polydactylous. Acanthostega had eight digits on both the hindlimbs and forelimbs. Ichthyostega, which was both more derived and more specialized, had seven digits on the hindlimb, though th Document 2::: Symmetry breaking in biology is the process by which uniformity is broken, or the number of points to view invariance are reduced, to generate a more structured and improbable state. Symmetry breaking is the event where symmetry along a particular axis is lost to establish a polarity. Polarity is a measure for a biological system to distinguish poles along an axis. This measure is important because it is the first step to building complexity. For example, during organismal development, one of the first steps for the embryo is to distinguish its dorsal-ventral axis. The symmetry-breaking event that occurs here will determine which end of this axis will be the ventral side, and which end will be the dorsal side. Once this distinction is made, then all the structures that are located along this axis can develop at the proper location. As an example, during human development, the embryo needs to establish where is ‘back’ and where is ‘front’ before complex structures, such as the spine and lungs, can develop in the right location (where the lungs are placed ‘in front’ of the spine). This relationship between symmetry breaking and complexity was articulated by P.W. Anderson. He speculated that increasing levels of broken symmetry in many-body systems correlates with increasing complexity and functional specialization. In a biological perspective, the more complex an organism is, the higher number of symmetry-breaking events can be found. The importance of symmetry breaking in biology is also reflected in the fact that it's found at all scales. Symmetry breaking can be found at the macromolecular level, at the subcellular level and even at the tissues and organ level. It's also interesting to note that most asymmetry on a higher scale is a reflection of symmetry breaking on a lower scale. Cells first need to establish a polarity through a symmetry-breaking event before tissues and organs themselves can be polar. For example, one model proposes that left-right bo Document 3::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. 
Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New Document 4::: Asymmetry is the absence of, or a violation of, symmetry (the property of an object being invariant to a transformation, such as reflection). Symmetry is an important property of both physical and abstract systems and it may be displayed in precise terms or in more aesthetic terms. The absence of or violation of symmetry that are either expected or desired can have important consequences for a system. In organisms Due to how cells divide in organisms, asymmetry in organisms is fairly usual in at least one dimension, with biological symmetry also being common in at least one dimension. Louis Pasteur proposed that biological molecules are asymmetric because the cosmic [i.e. physical] forces that preside over their formation are themselves asymmetric. While at his time, and even now, the symmetry of physical processes are highlighted, it is known that there are fundamental physical asymmetries, starting with time. Asymmetry in biology Asymmetry is an important and widespread trait, having evolved numerous times in many organisms and at many levels of organisation (ranging from individual cells, through organs, to entire body-shapes). Benefits of asymmetry sometimes have to do with improved spatial arrangements, such as the left human lung being smaller, and having one fewer lobes than the right lung to make room for the asymmetrical heart. In other examples, division of function between the right and left half may have been beneficial and has driven the asymmetry to become stronger. Such an explanation is usually given for mammal hand or paw preference (handedness), an asymmetry in skill development in mammals. Training the neural pathways in a skill with one hand (or paw) may take less effort than doing the same with both hands. Nature also provides several examples of handedness in traits that are usually symmetric. The following are examples of animals with obvious left-right asymmetries: Most snails, because of torsion during development, show remarkable as The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of symmetry does an octopus have? A. bilateral B. internal C. quadrilateral D. essential Answer:
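Illustrative aside on the symmetry passages in the prompt above: bilateral symmetry, the keyed answer for the octopus item, can be stated formally as invariance of the body plan under reflection through the sagittal plane. A minimal sketch, idealizing the sagittal plane as x = 0 and writing B(x, y, z) for the body-plan shape function (both the coordinate choice and the symbol B are editorial, and, as Document 0 notes, biological symmetry is only approximate):

\[
B(-x,\,y,\,z) \;=\; B(x,\,y,\,z)
\]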
sciq-9614
multiple_choice
What process is magnesium important to?
[ "absorption", "photosynthetic", "dna replication", "carbon cycle" ]
B
Relavent Documents: Document 0::: Magnesium is an essential element in biological systems. Magnesium occurs typically as the Mg2+ ion. It is an essential mineral nutrient (i.e., element) for life and is present in every cell type in every organism. For example, adenosine triphosphate (ATP), the main source of energy in cells, must bind to a magnesium ion in order to be biologically active. What is called ATP is often actually Mg-ATP. As such, magnesium plays a role in the stability of all polyphosphate compounds in the cells, including those associated with the synthesis of DNA and RNA. Over 300 enzymes require the presence of magnesium ions for their catalytic action, including all enzymes utilizing or synthesizing ATP, or those that use other nucleotides to synthesize DNA and RNA. In plants, magnesium is necessary for synthesis of chlorophyll and photosynthesis. Function A balance of magnesium is vital to the well-being of all organisms. Magnesium is a relatively abundant ion in Earth's crust and mantle and is highly bioavailable in the hydrosphere. This availability, in combination with a useful and very unusual chemistry, may have led to its utilization in evolution as an ion for signaling, enzyme activation, and catalysis. However, the unusual nature of ionic magnesium has also led to a major challenge in the use of the ion in biological systems. Biological membranes are impermeable to magnesium (and other ions), so transport proteins must facilitate the flow of magnesium, both into and out of cells and intracellular compartments. Human health Inadequate magnesium intake frequently causes muscle spasms, and has been associated with cardiovascular disease, diabetes, high blood pressure, anxiety disorders, migraines, osteoporosis, and cerebral infarction. Acute deficiency (see hypomagnesemia) is rare, and is more common as a drug side-effect (such as chronic alcohol or diuretic use) than from low food intake per se, but it can occur in people fed intravenously for extended periods of time. Document 1::: Manganese is an essential biological element in all organisms. It is used in many enzymes and proteins. It is essential in plants. Biochemistry The classes of enzymes that have manganese cofactors include oxidoreductases, transferases, hydrolases, lyases, isomerases and ligases. Other enzymes containing manganese are arginase and Mn-containing superoxide dismutase (Mn-SOD). Also the enzyme class of reverse transcriptases of many retroviruses (though not lentiviruses such as HIV) contains manganese. Manganese-containing polypeptides are the diphtheria toxin, lectins and integrins. Biological role in humans Manganese is an essential human dietary element. It is present as a coenzyme in several biological processes, which include macronutrient metabolism, bone formation, and free radical defense systems. It is a critical component in dozens of proteins and enzymes. The human body contains about 12 mg of manganese, mostly in the bones. The soft tissue remainder is concentrated in the liver and kidneys. In the human brain, the manganese is bound to manganese metalloproteins, most notably glutamine synthetase in astrocytes. Nutrition Dietary recommendations The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for minerals in 2001. For manganese there was not sufficient information to set EARs and RDAs, so needs are described as estimates for Adequate Intakes (AIs). 
As for safety, the IOM sets Tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of manganese the adult UL is set at 11 mg/day. Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes (DRIs). Manganese deficiency is rare. The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL defined the s Document 2::: The manganese cycle is the biogeochemical cycle of manganese through the atmosphere, hydrosphere, biosphere and lithosphere. There are bacteria that oxidise manganese to insoluble oxides, and others that reduce it to Mn2+ in order to use it. Manganese is a heavy metal that comprises about 0.1% of the Earth's crust and a necessary element for biological processes. It is cycled through the Earth in similar ways to iron, but with distinct redox pathways. Human activities have impacted the fluxes of manganese among the different spheres of the Earth. Global manganese cycle Manganese is a necessary element for biological functions such as photosynthesis, and some manganese oxidizing bacteria utilize this element in anoxic environments. Movement of manganese (Mn) among the global "spheres" (described below) is mediated by both physical and biological processes. Manganese in the lithosphere enters the hydrosphere from erosion and dissolution of bedrock in rivers, in solution it then makes its way into the ocean. Once in the ocean, Mn can form minerals and sink to the ocean floor where the solid phase is buried. The global manganese cycle is being altered by anthropogenic influences, such as mining and mineral processing for industrial use, as well as through the burning of fossil fuels. Lithosphere Manganese is the tenth most abundant metal in the Earth's crust, making up approximately 0.1% of the total composition, or about 0.019 mol kg−1, which is found mostly in the oceanic crust. Crust Manganese (Mn) commonly precipitates in igneous rocks in the form of early-stage crystalline minerals, which, once exposed to water and/or oxygen, are highly soluble and easily oxidized to form Mn oxides on the surfaces of rocks. Dendritic crystals rich in Mn form when microbes reprecipitate the Mn from the rocks on which they develop onto the surface after utilizing the Mn for their metabolism. For certain cyanobacteria found on desert varnish samples, for example, it has been f Document 3::: A Master of Bioscience Enterprise (abbreviated MBE or MBioEnt) is a specialised degree taught at The University of Auckland, New Zealand, Karolinska Institute, Sweden and The University of Cambridge, United Kingdom. The MBE is an interdisciplinary programme incorporating multiple faculties and includes significant industry involvement. The degree is primarily focused on the commercialisation of biotechnology. Both universities have developed the MBE programme to provide specialist business and legal skills relevant to employment in the bio-economy. The context in which both programmes were developed are significantly different. These differences are reflected in internship placements, thesis topics and postgraduate employment opportunities. University of Auckland Inaugurated in 2006, the MBE programme was developed in partnership between the School of Biological Sciences (SBS), the Business School and the Law School. 
Program Structure The prerequisite for the first year (the Postgraduate Diploma) is a Bachelor of Science with a major or specialisation in Biological Sciences, Bioinformatics, Biomedical Science, Food Science, Medicinal Chemistry, Pharmacology or Physiology; a Bachelor of Engineering in Biomedical Engineering; a Bachelor of Pharmacy; or a Bachelor of Technology in Biotechnology. The Postgraduate Diploma of Bioscience Enterprise is required for entry into the Masters year. Associate degrees are also available. Academic Component There is an academic component in both the Post Graduate Diploma and Masters year. Postgraduate Diploma Year The Postgraduate Diploma year has five core papers required for the Postgraduate Diploma in Bioscience Enterprise. Students are also required to take three electives, which are generally science-based papers. SCIENT 701 (15 points) Accounting and Finance for Scientists SCIENT 702 (15 points) Marketing for Scientific and Technical Personnel SCIENT 703 (15 points) Frontiers in Biotechnology SCIENT 704 (15 points) Document 4::: In plants and animals, mineral absorption, also called mineral uptake is the way in which minerals enter the cellular material, typically following the same pathway as water. In plants, the entrance portal for mineral uptake is usually through the roots. Some mineral ions diffuse in-between the cells. In contrast to water, some minerals are actively taken up by plant cells. Mineral nutrient concentration in roots may be 10,000 times more than in surrounding soil. During transport throughout a plant, minerals can exit xylem and enter cells that require them. Mineral ions cross plasma membranes by a chemiosmotic mechanism. Plants absorb minerals in ionic form: nitrate (NO3−), phosphate (HPO4−) and potassium ions (K+); all have difficulty crossing a charged plasma membrane. It has long been known plants expend energy to actively take up and concentrate mineral ions. Proton pump hydrolyzes adenosine triphosphate (ATP) to transport H+ ions out of cell; this sets up an electrochemical gradient that causes positive ions to flow into cells. Negative ions are carried across the plasma membrane in conjunction with H+ ions as H+ ions diffuse down their concentration gradient. In animals, minerals found in low small amounts are microminerals while the seven elements that are required in large quantity are known as macrominerals; these are Ca, P, Mg, Na, K, Cl, and S. In most cases, minerals that enter the blood pass through the epithelial cells which line the gastrointestinal mucosa of the small intestine. Minerals can diffuse through the pores of the tight junction in paracellular absorption if there is an electrochemical gradient. Through the process of solvent drag, minerals can also enter with water when solubilized by dipole-ion interactions. Furthermore, the absorption of trace elements can be enhanced by the presence of amino acids that are covalently bonded to the mineral. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What process is magnesium important to? A. absorption B. photosynthetic C. dna replication D. carbon cycle Answer:
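Illustrative aside on the magnesium item above (sciq-9614): as Document 0 of the prompt notes, magnesium is required for chlorophyll synthesis and photosynthesis; chlorophyll carries an Mg2+ ion at the centre of its ring. The textbook net equation for oxygenic photosynthesis, given here as a simplification that omits the light and dark reaction steps, is:

\[
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \;\xrightarrow{\ h\nu,\ \text{chlorophyll}\ }\; \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
\]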
scienceQA-1523
multiple_choice
What do these two changes have in common? using polish to remove tarnish from a silver spoon; a penny tarnishing
[ "Both are chemical changes.", "Both are caused by heating.", "Both are caused by cooling.", "Both are only physical changes." ]
A
Step 1: Think about each change. A tarnished silver spoon is one that has become less shiny over time. Polishing the spoon makes it look shiny again. The polish changes the tarnish into a different type of matter that can be easily wiped away. So, using polish to remove tarnish from silver is a chemical change. Metal turning less shiny over time is called tarnishing. A penny tarnishing is a chemical change. When air touches the penny, the surface of the penny changes into a different type of matter. This matter makes the penny dull. Step 2: Look at each answer choice. "Both are only physical changes": both changes are chemical changes; they are not physical changes. "Both are chemical changes": both changes are chemical changes; the type of matter before and after each change is different. "Both are caused by heating": neither change is caused by heating. "Both are caused by cooling": neither change is caused by cooling.
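Illustrative aside supporting the explanation above: representative tarnishing reactions, assuming the usual sulfide pathway for silver and surface oxidation for the copper of a penny (the explanation does not specify the reacting gases, so these pathways are given as typical examples rather than taken from the record):

\[
4\,\mathrm{Ag} + 2\,\mathrm{H_2S} + \mathrm{O_2} \;\longrightarrow\; 2\,\mathrm{Ag_2S} + 2\,\mathrm{H_2O}
\]
\[
2\,\mathrm{Cu} + \mathrm{O_2} \;\longrightarrow\; 2\,\mathrm{CuO} \qquad (\text{cuprous oxide, } \mathrm{Cu_2O}\text{, also forms})
\]

In both cases a new substance is produced (Ag2S on the spoon, a copper oxide on the penny), which is the defining mark of a chemical change.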
Relavent Documents: Document 0::: Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds. Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate. A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density. An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge. Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change. Examples Heating and cooling Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation. Magnetism Ferro-magnetic materials can become magnetic. The process is reve Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. 
An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria. Introduction Traditional exam script marking began in Cambridge 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.) Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental. The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel Document 3::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. 
A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 4::: Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered in both undergraduate as well postgraduate with specializations. Academic courses Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism. Example universities with CSE majors and departments APJ Abdul Kalam Technological University American International University-B The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do these two changes have in common? using polish to remove tarnish from a silver spoon a penny tarnishing A. Both are chemical changes. B. Both are caused by heating. 
C. Both are caused by cooling. D. Both are only physical changes. Answer:
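Illustrative aside on the conceptual-question example embedded in the prompt above (adiabatic expansion of an ideal gas): a short worked note, assuming the gas does work against a nonzero external pressure (for a free expansion into vacuum the temperature of an ideal gas is unchanged). With no heat exchange the first law gives

\[
\delta Q = 0 \;\Rightarrow\; dU = \delta W = -P_{\text{ext}}\,dV, \qquad dU = n\,C_V\,dT \ \text{for an ideal gas},
\]

so an expansion (dV > 0) makes dT < 0 and the temperature decreases; for the reversible case this integrates to T\,V^{\gamma-1} = \text{const}.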
sciq-6976
multiple_choice
What element is responsible for one-half of the osmotic pressure gradient that exists between the interior of cells and their surrounding environment?
[ "sodium", "potassium", "nitrogen", "calcium" ]
A
Relavent Documents: Document 0::: Osmotic shock or osmotic stress is physiologic dysfunction caused by a sudden change in the solute concentration around a cell, which causes a rapid change in the movement of water across its cell membrane. Under hypertonic conditions - conditions of high concentrations of either salts, substrates or any solute in the supernatant - water is drawn out of the cells through osmosis. This also inhibits the transport of substrates and cofactors into the cell thus “shocking” the cell. Alternatively, under hypotonic conditions - when concentrations of solutes are low - water enters the cell in large amounts, causing it to swell and either burst or undergo apoptosis. All organisms have mechanisms to respond to osmotic shock, with sensors and signal transduction networks providing information to the cell about the osmolarity of its surroundings; these signals activate responses to deal with extreme conditions. Cells that have a cell wall tend to be more resistant to osmotic shock because their cell wall enables them to maintain their shape. Although single-celled organisms are more vulnerable to osmotic shock, since they are directly exposed to their environment, cells in large animals such as mammals still suffer these stresses under some conditions. Current research also suggests that osmotic stress in cells and tissues may significantly contribute to many human diseases. In eukaryotes, calcium acts as one of the primary regulators of osmotic stress. Intracellular calcium levels rise during hypo-osmotic and hyper-osmotic stresses. Recovery and tolerance mechanisms For hyper-osmotic stress Calcium plays a large role in the recovery and tolerance for both hyper and hypo-osmotic stress situations. Under hyper-osmotic stress conditions, increased levels of intracellular calcium are exhibited. This may play a crucial role in the activation of second messenger pathways. One example of a calcium activated second messenger molecule is MAP Kinase Hog-1. It is activated under Document 1::: Osmoregulation is the active regulation of the osmotic pressure of an organism's body fluids, detected by osmoreceptors, to maintain the homeostasis of the organism's water content; that is, it maintains the fluid balance and the concentration of electrolytes (salts in solution which in this case is represented by body fluid) to keep the body fluids from becoming too diluted or concentrated. Osmotic pressure is a measure of the tendency of water to move into one solution from another by osmosis. The higher the osmotic pressure of a solution, the more water tends to move into it. Pressure must be exerted on the hypertonic side of a selectively permeable membrane to prevent diffusion of water by osmosis from the side containing pure water. Although there may be hourly and daily variations in osmotic balance, an animal is generally in an osmotic steady state over the long term. Organisms in aquatic and terrestrial environments must maintain the right concentration of solutes and amount of water in their body fluids; this involves excretion (getting rid of metabolic nitrogen wastes and other substances such as hormones that would be toxic if allowed to accumulate in the blood) through organs such as the skin and the kidneys. Regulators and conformers Two major types of osmoregulation are osmoconformers and osmoregulators. Osmoconformers match their body osmolarity to their environment actively or passively. 
Most marine invertebrates are osmoconformers, although their ionic composition may be different from that of seawater. In a strictly osmoregulating animal, the amounts of internal salt and water are held relatively constant in the face of environmental changes. It requires that intake and outflow of water and salts be equal over an extended period of time. Organisms that maintain an internal osmolarity different from the medium in which they are immersed have been termed osmoregulators. They tightly regulate their body osmolarity, maintaining constant internal c Document 2::: The Society of General Physiologists (SGP) is a scientific organization whose purpose is to promote and disseminate knowledge in the field of general physiology, and otherwise to advance understanding and interest in the subject of general physiology. The Society’s main office is located at the Marine Biological Laboratory in Woods Hole, MA, where the society was founded in 1946. Past Presidents of the Society include Richard W. Aldrich, Richard W. Tsien, Clay Armstrong, and Andrew Szent-Gyorgi. The society's archives is held at the National Library of Medicine in Bethesda, Maryland. Membership The Society's international membership is made up of nearly 600 career physiologists who work in academia, government, and industry. Membership in the Society is open to any individual actively interested in the field of general physiology and who has made significant contributions to knowledge in that field. The Society has become known for promoting research in many subfields of cellular and molecular physiology, but especially in the fields of membrane transport and ion channels, cell membrane structure, regulation, and dynamics, and cellular contractility and molecular motors. Activities The major activity of the Society is its annual symposium, which is held at the Marine Biological Laboratory in Woods Hole, MA. Society of General Physiologists symposia cover the forefront of physiological research and are small enough to maximize discussion and interaction among both young and established investigators. Abstracts of the annual meeting are published in The Journal of General Physiology. The 2015 symposium (September 16–20) topic is "Macromolecular Local Signaling Complexes." Detailed information regarding the scientific agenda and registration is provided at the symposium website: https://web.archive.org/web/20150801070408/http://www.sgpweb.org/symposium2015.html Recent past symposium topics include: 2014 Sensory Transduction 2013 The Enigmatic Chloride Ion: Tra Document 3::: The blood circulatory system is a system of organs that includes the heart, blood vessels, and blood which is circulated throughout the entire body of a human or other vertebrate. It includes the cardiovascular system, or vascular system, that consists of the heart and blood vessels (from Greek kardia meaning heart, and from Latin vascula meaning vessels). The circulatory system has two divisions, a systemic circulation or circuit, and a pulmonary circulation or circuit. Some sources use the terms cardiovascular system and vascular system interchangeably with the circulatory system. The network of blood vessels are the great vessels of the heart including large elastic arteries, and large veins; other arteries, smaller arterioles, capillaries that join with venules (small veins), and other veins. The circulatory system is closed in vertebrates, which means that the blood never leaves the network of blood vessels. 
Some invertebrates such as arthropods have an open circulatory system. Diploblasts such as sponges, and comb jellies lack a circulatory system. Blood is a fluid consisting of plasma, red blood cells, white blood cells, and platelets; it is circulated around the body carrying oxygen and nutrients to the tissues and collecting and disposing of waste materials. Circulated nutrients include proteins and minerals and other components include hemoglobin, hormones, and gases such as oxygen and carbon dioxide. These substances provide nourishment, help the immune system to fight diseases, and help maintain homeostasis by stabilizing temperature and natural pH. In vertebrates, the lymphatic system is complementary to the circulatory system. The lymphatic system carries excess plasma (filtered from the circulatory system capillaries as interstitial fluid between cells) away from the body tissues via accessory routes that return excess fluid back to blood circulation as lymph. The lymphatic system is a subsystem that is essential for the functioning of the bloo Document 4::: Oncotic pressure, or colloid osmotic-pressure, is a type of osmotic pressure induced by the plasma proteins, notably albumin, in a blood vessel's plasma (or any other body fluid such as blood and lymph) that causes a pull on fluid back into the capillary. Participating colloids displace water molecules, thus creating a relative water molecule deficit with water molecules moving back into the circulatory system within the lower venous pressure end of capillaries. It has the opposing effect of both hydrostatic blood pressure pushing water and small molecules out of the blood into the interstitial spaces within the arterial end of capillaries and interstitial colloidal osmotic pressure. These interacting factors determine the partition balancing of extracellular water between the blood plasma and outside the blood stream. Oncotic pressure strongly affects the physiological function of the circulatory system. It is suspected to have a major effect on the pressure across the glomerular filter. However, this concept has been strongly criticised and attention has been shifted to the impact of the intravascular glycocalyx layer as the major player. Etymology The word 'Oncotic' by definition is termed as 'pertaining to swelling', indicating the effect of oncotic imbalance on the swelling of tissues. The word itself is derived from onco- and -ic; 'onco-' meaning 'pertaining to mass or tumors' and '-ic', which forms an adjective. Description Throughout the body, dissolved compounds have an osmotic pressure. Because large plasma proteins cannot easily cross through the capillary walls, their effect on the osmotic pressure of the capillary interiors will, to some extent, balance out the tendency for fluid to leak out of the capillaries. In other words, the oncotic pressure tends to pull fluid into the capillaries. In conditions where plasma proteins are reduced, e.g. from being lost in the urine (proteinuria), there will be a reduction in oncotic pressure and an increase The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What element is responsible for one-half of the osmotic pressure gradient that exists between the interior of cells and their surrounding environment? A. sodium B. potassium C. nitrogen D. calcium Answer:
sciq-7009
multiple_choice
What are the only organisms that can perform photosynthesis?
[ "heterotrophs", "sponges", "monocots", "autotrophs" ]
D
Relavent Documents: Document 0::: Cyanobacteria (), also called Cyanobacteriota or Cyanophyta, are a phylum of gram-negative bacteria that obtain energy via photosynthesis. The name cyanobacteria refers to their color (), which similarly forms the basis of cyanobacteria's common name, blue-green algae, although they are not usually scientifically classified as algae. They appear to have originated in a freshwater or terrestrial environment. Sericytochromatia, the proposed name of the paraphyletic and most basal group, is the ancestor of both the non-photosynthetic group Melainabacteria and the photosynthetic cyanobacteria, also called Oxyphotobacteria. Cyanobacteria use photosynthetic pigments, such as carotenoids, phycobilins, and various forms of chlorophyll, which absorb energy from light. Unlike heterotrophic prokaryotes, cyanobacteria have internal membranes. These are flattened sacs called thylakoids where photosynthesis is performed. Phototrophic eukaryotes such as green plants perform photosynthesis in plastids that are thought to have their ancestry in cyanobacteria, acquired long ago via a process called endosymbiosis. These endosymbiotic cyanobacteria in eukaryotes then evolved and differentiated into specialized organelles such as chloroplasts, chromoplasts, etioplasts, and leucoplasts, collectively known as plastids. Cyanobacteria are the first organisms known to have produced oxygen. By producing and releasing oxygen as a byproduct of photosynthesis, cyanobacteria are thought to have converted the early oxygen-poor, reducing atmosphere into an oxidizing one, causing the Great Oxidation Event and the "rusting of the Earth", which dramatically changed the composition of life forms on Earth. The cyanobacteria Synechocystis and Cyanothece are important model organisms with potential applications in biotechnology for bioethanol production, food colorings, as a source of human and animal food, dietary supplements and raw materials. Cyanobacteria produce a range of toxins known as cyanotox Document 1::: Photosymbiosis is a type of symbiosis where one of the organisms is capable of photosynthesis. Examples Examples of photosymbiotic relationships include those in lichens, plankton, and many marine organisms including coral, giant clams, and jellyfish. Significance Photosymbiosis is important in the development, maintenance, and evolution of terrestrial and aquatic ecosystems, for example through supporting soil formation, soil stabilization, and coral reef growth and maintenance. Photosymbiotic relationships where microalgae live within a heterotrophic host organism, is believed to have led to eukaryotes acquiring photosynthesis and the evolution of plants. Document 2::: The evolution of photosynthesis refers to the origin and subsequent evolution of photosynthesis, the process by which light energy is used to assemble sugars from carbon dioxide and a hydrogen and electron source such as water. The process of photosynthesis was discovered by Jan Ingenhousz, a Dutch-born British physician and scientist, first publishing about it in 1779. The first photosynthetic organisms probably evolved early in the evolutionary history of life and most likely used reducing agents such as hydrogen rather than water. There are three major metabolic pathways by which photosynthesis is carried out: C3 photosynthesis, C4 photosynthesis, and CAM photosynthesis. C3 photosynthesis is the oldest and most common form. 
A C3 plant uses the Calvin cycle for the initial steps that incorporate into organic material. A C4 plant prefaces the Calvin cycle with reactions that incorporate into four-carbon compounds. A CAM plant uses crassulacean acid metabolism, an adaptation for photosynthesis in arid conditions. C4 and CAM plants have special adaptations that save water. Origin Available evidence from geobiological studies of Archean (>2500 Ma) sedimentary rocks indicates that life existed 3500 Ma. Fossils of what are thought to be filamentous photosynthetic organisms have been dated at 3.4 billion years old, consistent with recent studies of photosynthesis. Early photosynthetic systems, such as those from green and purple sulfur and green and purple nonsulfur bacteria, are thought to have been anoxygenic, using various molecules as electron donors. Green and purple sulfur bacteria are thought to have used hydrogen and hydrogen sulfide as electron and hydrogen donors. Green nonsulfur bacteria used various amino and other organic acids. Purple nonsulfur bacteria used a variety of nonspecific organic and inorganic molecules. It is suggested that photosynthesis likely originated at low-wavelength geothermal light from acidic hydrothermal vents, Zn-tetrapyrroles w Document 3::: In ecology, primary production is the synthesis of organic compounds from atmospheric or aqueous carbon dioxide. It principally occurs through the process of photosynthesis, which uses light as its source of energy, but it also occurs through chemosynthesis, which uses the oxidation or reduction of inorganic chemical compounds as its source of energy. Almost all life on Earth relies directly or indirectly on primary production. The organisms responsible for primary production are known as primary producers or autotrophs, and form the base of the food chain. In terrestrial ecoregions, these are mainly plants, while in aquatic ecoregions algae predominate in this role. Ecologists distinguish primary production as either net or gross, the former accounting for losses to processes such as cellular respiration, the latter not. Overview Primary production is the production of chemical energy in organic compounds by living organisms. The main source of this energy is sunlight but a minute fraction of primary production is driven by lithotrophic organisms using the chemical energy of inorganic molecules.Regardless of its source, this energy is used to synthesize complex organic molecules from simpler inorganic compounds such as carbon dioxide () and water (H2O). The following two equations are simplified representations of photosynthesis (top) and (one form of) chemosynthesis (bottom): + H2O + light → CH2O + O2 + O2 + 4 H2S → CH2O + 4 S + 3 H2O In both cases, the end point is a polymer of reduced carbohydrate, (CH2O)n, typically molecules such as glucose or other sugars. These relatively simple molecules may be then used to further synthesise more complicated molecules, including proteins, complex carbohydrates, lipids, and nucleic acids, or be respired to perform work. 
Consumption of primary producers by heterotrophic organisms, such as animals, then transfers these organic molecules (and the energy stored within them) up the food web, fueling all of the Earth' Document 4::: In biochemistry, chemosynthesis is the biological conversion of one or more carbon-containing molecules (usually carbon dioxide or methane) and nutrients into organic matter using the oxidation of inorganic compounds (e.g., hydrogen gas, hydrogen sulfide) or ferrous ions as a source of energy, rather than sunlight, as in photosynthesis. Chemoautotrophs, organisms that obtain carbon from carbon dioxide through chemosynthesis, are phylogenetically diverse. Groups that include conspicuous or biogeochemically important taxa include the sulfur-oxidizing Gammaproteobacteria, the Campylobacterota, the Aquificota, the methanogenic archaea, and the neutrophilic iron-oxidizing bacteria. Many microorganisms in dark regions of the oceans use chemosynthesis to produce biomass from single-carbon molecules. Two categories can be distinguished. In the rare sites where hydrogen molecules (H2) are available, the energy available from the reaction between CO2 and H2 (leading to production of methane, CH4) can be large enough to drive the production of biomass. Alternatively, in most oceanic environments, energy for chemosynthesis derives from reactions in which substances such as hydrogen sulfide or ammonia are oxidized. This may occur with or without the presence of oxygen. Many chemosynthetic microorganisms are consumed by other organisms in the ocean, and symbiotic associations between chemosynthesizers and respiring heterotrophs are quite common. Large populations of animals can be supported by chemosynthetic secondary production at hydrothermal vents, methane clathrates, cold seeps, whale falls, and isolated cave water. It has been hypothesized that anaerobic chemosynthesis may support life below the surface of Mars, Jupiter's moon Europa, and other planets. Chemosynthesis may have also been the first type of metabolism that evolved on Earth, leading the way for cellular respiration and photosynthesis to develop later. Hydrogen sulfide chemosynthesis process Giant tube worms The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are the only organisms that can perform photosynthesis? A. heterotrophs B. sponges C. monocots D. autotrophs Answer:
sciq-2992
multiple_choice
The ammonium ion is what type of acid?
[ "normal -lowry", "kaon - lowry", "oxidized - lowry", "brønsted-lowry" ]
D
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. 
Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with Document 2::: Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices". This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions. Topic outline The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area: The course is based on and tests six skills, called scientific practices which include: In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions. Exam Students are allowed to use a four-function, scientific, or graphing calculator. The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score. Score distribution Commonly used textbooks Biology, AP Edition by Sylvia Mader (2012, hardcover ) Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, ) Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson ) See also Glossary of biology A.P Bio (TV Show) Document 3::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. 
Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a Document 4::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The ammonium ion is what type of acid? A. normal -lowry B. kaon - lowry C. oxidized - lowry D. brønsted-lowry Answer:
sciq-10721
multiple_choice
Transferring what from sodium to chlorine decreases the radius of sodium by about 50%?
[ "neutron", "proton", "electron", "quark" ]
C
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. 
Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with Document 2::: Advanced Placement (AP) Physics B was a physics course administered by the College Board as part of its Advanced Placement program. It was equivalent to a year-long introductory university course covering Newtonian mechanics, electromagnetism, fluid mechanics, thermal physics, waves, optics, and modern physics. The course was algebra-based and heavily computational; in 2015, it was replaced by the more concept-focused AP Physics 1 and AP Physics 2. Exam The exam consisted of a 70 MCQ section, followed by a 6-7 FRQ section. Each section was 90 minutes and was worth 50% of the final score. The MCQ section banned calculators, while the FRQ allowed calculators and a list of common formulas. Overall, the exam was configured to approximately cover a set percentage of each of the five target categories: Purpose According to the College Board web site, the Physics B course provided "a foundation in physics for students in the life sciences, a pre medical career path, and some applied sciences, as well as other fields not directly related to science." Discontinuation Starting in the 2014–2015 school year, AP Physics B was no longer offered, and AP Physics 1 and AP Physics 2 took its place. Like AP Physics B, both are algebra-based, and both are designed to be taught as year-long courses. Grade distribution The grade distributions for the Physics B scores from 2010 until its discontinuation in 2014 are as follows: Document 3::: There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework. AP Physics 1 and 2 AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge. AP Physics 1 AP Physics 1 covers Newtonian mechanics, including: Unit 1: Kinematics Unit 2: Dynamics Unit 3: Circular Motion and Gravitation Unit 4: Energy Unit 5: Momentum Unit 6: Simple Harmonic Motion Unit 7: Torque and Rotational Motion Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2. 
AP Physics 2 AP Physics 2 covers the following topics: Unit 1: Fluids Unit 2: Thermodynamics Unit 3: Electric Force, Field, and Potential Unit 4: Electric Circuits Unit 5: Magnetism and Electromagnetic Induction Unit 6: Geometric and Physical Optics Unit 7: Quantum, Atomic, and Nuclear Physics AP Physics C From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single Document 4::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Transferring what from sodium to chlorine decreases the radius of sodium by about 50%? A. neutron B. proton C. electron D. quark Answer:
sciq-8742
multiple_choice
What does the urethra do?
[ "makes sperm and evacuates bowels", "stores sperm and urine", "filters urine and makes sperm", "carries urine and sperm out of the body" ]
D
Relavent Documents: Document 0::: Urination is the release of urine from the bladder through the urethra to the outside of the body. It is the urinary system's form of excretion. It is also known medically as micturition, voiding, uresis, or, rarely, emiction, and known colloquially by various names including peeing, weeing, pissing, and euphemistically going (for a) number one. In healthy humans and other animals, the process of urination is under voluntary control. In infants, some elderly individuals, and those with neurological injury, urination may occur as a reflex. It is normal for adult humans to urinate up to seven times during the day. In some animals, in addition to expelling waste material, urination can mark territory or express submissiveness. Physiologically, urination involves coordination between the central, autonomic, and somatic nervous systems. Brain centres that regulate urination include the pontine micturition center, periaqueductal gray, and the cerebral cortex. In placental mammals, urine is drained through the urinary meatus, a urethral opening in the male penis or female vulval vestibule. Anatomy and physiology Anatomy of the bladder and outlet The main organs involved in urination are the urinary bladder and the urethra. The smooth muscle of the bladder, known as the detrusor, is innervated by sympathetic nervous system fibers from the lumbar spinal cord and parasympathetic fibers from the sacral spinal cord. Fibers in the pelvic nerves constitute the main afferent limb of the voiding reflex; the parasympathetic fibers to the bladder that constitute the excitatory efferent limb also travel in these nerves. Part of the urethra is surrounded by the male or female external urethral sphincter, which is innervated by the somatic pudendal nerve originating in the cord, in an area termed Onuf's nucleus. Smooth muscle bundles pass on either side of the urethra, and these fibers are sometimes called the internal urethral sphincter, although they do not encircle the urethra. Document 1::: The external sphincter muscle of female urethra is a muscle which controls urination in females. The muscle fibers arise on either side from the margin of the inferior ramus of the pubis. They are directed across the pubic arch in front of the urethra, and pass around it to blend with the muscular fibers of the opposite side, between the urethra and vagina. The term "urethrovaginal sphincter" ("sphincter urethrovaginalis") is sometimes used to describe the component adjacent to the vagina. The "compressor urethrae" is also considered a distinct, adjacent muscle by some sources, Function The muscle helps maintain continence of urine along with the internal urethral sphincter which is under control of the autonomic nervous system. The external sphincter muscle prevents urine leakage as the muscle is tonically contracted via somatic fibers that originate in Onuf's nucleus and pass through sacral spinal nerves S2-S4 then the pudendal nerve to synapse on the muscle. Voiding urine begins with voluntary relaxation of the external urethral sphincter. This is facilitated by inhibition of the somatic neurons in Onuf's nucleus via signals arising in the pontine micturition center and traveling through the descending reticulospinal tracts. 
See also External sphincter muscle of male urethra Internal urethral sphincter Levator ani Document 2::: The external sphincter muscle of male urethra, also sphincter urethrae membranaceae, sphincter urethrae externus, surrounds the whole length of the membranous urethra, and is enclosed in the fascia of the urogenital diaphragm. Its external fibers arise from the junction of the inferior pubic ramus and ischium to the extent of 1.25 to 2 cm., and from the neighboring fascia. They arch across the front of the urethra and bulbourethral glands, pass around the urethra, and behind it unite with the muscle of the opposite side, by means of a tendinous raphe. Its innermost fibers form a continuous circular investment for the membranous urethra. Function The muscle helps maintain continence of urine along with the internal urethral sphincter which is under control of the autonomic nervous system. The external sphincter muscle prevents urine leakage as the muscle is tonically contracted via somatic fibers that originate in Onuf's nucleus and pass through sacral spinal nerves S2-S4 then the pudendal nerve to synapse on the muscle. Voiding urine begins with voluntary relaxation of the external urethral sphincter. This is facilitated by inhibition of the somatic neurons in Onuf's nucleus via signals arising in the pontine micturition center and traveling through the descending reticulospinal tracts. During ejaculation, the external sphincter opens and the internal sphincter closes. Additional images See also Levator ani External sphincter muscle of female urethra Internal urethral sphincter Prostatic urethra Document 3::: The urethral sphincters are two muscles used to control the exit of urine in the urinary bladder through the urethra. The two muscles are either the male or female external urethral sphincter and the internal urethral sphincter. When either of these muscles contracts, the urethra is sealed shut. The external urethral sphincter originates at the ischiopubic ramus and inserts into the intermeshing muscle fibers from the other side. It is controlled by the deep perineal branch of the pudendal nerve. Activity in the nerve fibers constricts the urethra. The internal sphincter muscle of urethra: located at the bladder's inferior end and the urethra's proximal end at the junction of the urethra with the urinary bladder. The internal sphincter is a continuation of the detrusor muscle and is made of smooth muscle, therefore it is under involuntary or autonomic control. This is the primary muscle for prohibiting the release of urine. The female or male external sphincter muscle of urethra (sphincter urethrae): located in the deep perineal pouch, at the bladder's distal inferior end in females, and inferior to the prostate (at the level of the membranous urethra) in males. It is a secondary sphincter to control the flow of urine through the urethra. Unlike the internal sphincter muscle, the external sphincter is made of skeletal muscle, therefore it is under voluntary control of the somatic nervous system. Function and sex differences In males and females, both internal and external urethral sphincters function to prevent the release of urine. The internal urethral sphincter controls involuntary urine flow from the bladder to the urethra, whereas the external urethral sphincter controls voluntary urine flow from the bladder to the urethra. Any damage to these muscles can lead to urinary incontinence. 
In males, the internal urethral sphincter has the additional function of preventing the flow of semen into the male bladder during ejaculation. Females do have a more el Document 4::: The prostate () is both an accessory gland of the male reproductive system and a muscle-driven mechanical switch between urination and ejaculation. It is found in all male mammals. It differs between species anatomically, chemically, and physiologically. Anatomically, the prostate is found below the bladder, with the urethra passing through it. It is described in gross anatomy as consisting of lobes and in microanatomy by zone. It is surrounded by an elastic, fibromuscular capsule and contains glandular tissue, as well as connective tissue. The prostate glands produce and contain fluid that forms part of semen, the substance emitted during ejaculation as part of the male sexual response. This prostatic fluid is slightly alkaline, milky or white in appearance. The alkalinity of semen helps neutralize the acidity of the vaginal tract, prolonging the lifespan of sperm. The prostatic fluid is expelled in the first part of ejaculate, together with most of the sperm, because of the action of smooth muscle tissue within the prostate. In comparison with the few spermatozoa expelled together with mainly seminal vesicular fluid, those in prostatic fluid have better motility, longer survival, and better protection of genetic material. Disorders of the prostate include enlargement, inflammation, infection, and cancer. The word prostate comes from Ancient Greek προστάτης, prostátēs, meaning "one who stands before", "protector", "guardian", with the term originally used to describe the seminal vesicles. Structure The prostate is a gland of the male reproductive system. In adults, it is about the size of a walnut, and has an average weight of about 11 grams, usually ranging between 7 and 16 grams. The prostate is located in the pelvis. It sits below the urinary bladder and surrounds the urethra. The part of the urethra passing through it is called the prostatic urethra, which joins with the two ejaculatory ducts. The prostate is covered in a surface called the prostatic capsule The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What does the urethra do? A. makes sperm and evacuates bowels B. stores sperm and urine C. filters urine and makes sperm D. carries urine and sperm out of the body Answer:
sciq-8332
multiple_choice
What term describes the application of knowledge to real-world problems and is practiced by engineers?
[ "science", "invention", "mechanisms", "technology" ]
D
Relavent Documents: Document 0::: Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work. Description Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments. The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics. Specialized branches include engineering optimization and engineering statistics. Engineering mathematics in tertiary educ Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. 
An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 3::: The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields. Description The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. 
The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions. The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.” Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers. Current efforts The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo Document 4::: The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work. History It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council. Function Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres. STEM ambassadors To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. these come from a wide selection of the STEM industries and include TV personalities like Rob Bell. Funding STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments. See also The WISE Campaign Engineering and Physical Sciences Research Council National Centre for Excellence in Teaching Mathematics Association for Science Education Glossary of areas of mathematics Glossary of astronomy Glossary of biology Glossary of chemistry Glossary of engineering Glossary of physics The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What term describes the application of knowledge to real-world problems and is practiced by engineers? A. science B. invention C. mechanisms D. technology Answer:
scienceQA-6016
multiple_choice
What do these two changes have in common?
burning food on a stove
an iceberg melting slowly
[ "Both are chemical changes.", "Both are only physical changes.", "Both are caused by heating.", "Both are caused by cooling." ]
C
Step 1: Think about each change.
Burning food on a stove is a chemical change. When the food burns, the type of matter in it changes. The food turns black and gives off smoke.
An iceberg melting is a change of state. So, it is a physical change. An iceberg is made of frozen water. As it melts, the water changes from a solid to a liquid. But a different type of matter is not formed.
Step 2: Look at each answer choice.
Both are only physical changes.
An iceberg melting is a physical change. But burning food on a stove is not.
Both are chemical changes.
Burning food on a stove is a chemical change. But an iceberg melting is not.
Both are caused by heating.
Both changes are caused by heating.
Both are caused by cooling.
Neither change is caused by cooling.
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds. Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate. A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density. An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge. Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. 
Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change. Examples Heating and cooling Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation. Magnetism Ferro-magnetic materials can become magnetic. The process is reve Document 2::: Thermofluids is a branch of science and engineering encompassing four intersecting fields: Heat transfer Thermodynamics Fluid mechanics Combustion The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids". Heat transfer Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer. Sections include : Energy transfer by heat, work and mass Laws of thermodynamics Entropy Refrigeration Techniques Properties and nature of pure substances Applications Engineering : Predicting and analysing the performance of machines Thermodynamics Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems. Fluid mechanics Fluid Mechanics the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance. Sections include: Flu Document 3::: Analysis (: analyses) is the process of breaking a complex topic or substance into smaller parts in order to gain a better understanding of it. The technique has been applied in the study of mathematics and logic since before Aristotle (384–322 B.C.), though analysis as a formal concept is a relatively recent development. The word comes from the Ancient Greek (analysis, "a breaking-up" or "an untying;" from ana- "up, throughout" and lysis "a loosening"). From it also comes the word's plural, analyses. As a formal concept, the method has variously been ascribed to Alhazen, René Descartes (Discourse on the Method), and Galileo Galilei. It has also been ascribed to Isaac Newton, in the form of a practical method of physical discovery (which he did not name). The converse of analysis is synthesis: putting the pieces back together again in a new or different whole. 
Applications Science The field of chemistry uses analysis in three ways: to identify the components of a particular chemical compound (qualitative analysis), to identify the proportions of components in a mixture (quantitative analysis), and to break down chemical processes and examine chemical reactions between elements of matter. For an example of its use, analysis of the concentration of elements is important in managing a nuclear reactor, so nuclear scientists will analyze neutron activation to develop discrete measurements within vast samples. A matrix can have a considerable effect on the way a chemical analysis is conducted and the quality of its results. Analysis can be done manually or with a device. Types of Analysis: A) Qualitative Analysis: It is concerned with which components are in a given sample or compound. Example: Precipitation reaction B) Quantitative Analysis: It is to determine the quantity of individual component present in a given sample or compound. Example: To find concentration by uv-spectrophotometer. Isotopes Chemists can use isotope analysis to assist analysts with i Document 4::: Heat transfer is a discipline of thermal engineering that concerns the generation, use, conversion, and exchange of thermal energy (heat) between physical systems. Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes. Engineers also consider the transfer of mass of differing chemical species (mass transfer in the form of advection), either cold or hot, to achieve heat transfer. While these mechanisms have distinct characteristics, they often occur simultaneously in the same system. Heat conduction, also called diffusion, is the direct microscopic exchanges of kinetic energy of particles (such as molecules) or quasiparticles (such as lattice waves) through the boundary between two systems. When an object is at a different temperature from another body or its surroundings, heat flows so that the body and the surroundings reach the same temperature, at which point they are in thermal equilibrium. Such spontaneous heat transfer always occurs from a region of high temperature to another region of lower temperature, as described in the second law of thermodynamics. Heat convection occurs when the bulk flow of a fluid (gas or liquid) carries its heat through the fluid. All convective processes also move heat partly by diffusion, as well. The flow of fluid may be forced by external processes, or sometimes (in gravitational fields) by buoyancy forces caused when thermal energy expands the fluid (for example in a fire plume), thus influencing its own transfer. The latter process is often called "natural convection". The former process is often called "forced convection." In this case, the fluid is forced to flow by use of a pump, fan, or other mechanical means. Thermal radiation occurs through a vacuum or any transparent medium (solid or fluid or gas). It is the transfer of energy by means of photons or electromagnetic waves governed by the same laws. Overview Heat The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do these two changes have in common? burning food on a stove an iceberg melting slowly A. Both are chemical changes. B. Both are only physical changes. C. Both are caused by heating. D. Both are caused by cooling. Answer:
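As a rough illustration of the distinction drawn in the explanation above, taking glucose as a simplified stand-in for the carbohydrates in food, burning rearranges atoms into new substances while melting leaves the substance itself unchanged and only alters its state:

\[
\mathrm{C_6H_{12}O_6 + 6\,O_2 \;\longrightarrow\; 6\,CO_2 + 6\,H_2O} \qquad \text{(chemical change: new substances form)}
\]
\[
\mathrm{H_2O(s) \;\longrightarrow\; H_2O(l)} \qquad \text{(physical change: same substance, different state)}
\]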
ai2_arc-987
multiple_choice
Which of these has the greatest capacity for storing thermal energy from the Sun?
[ "air", "land", "oceans", "plants" ]
C
Relavent Documents: Document 0::: Thermal energy storage (TES) is achieved with widely different technologies. Depending on the specific technology, it allows excess thermal energy to be stored and used hours, days, months later, at scales ranging from the individual process, building, multiuser-building, district, town, or region. Usage examples are the balancing of energy demand between daytime and nighttime, storing summer heat for winter heating, or winter cold for summer air conditioning (Seasonal thermal energy storage). Storage media include water or ice-slush tanks, masses of native earth or bedrock accessed with heat exchangers by means of boreholes, deep aquifers contained between impermeable strata; shallow, lined pits filled with gravel and water and insulated at the top, as well as eutectic solutions and phase-change materials. Other sources of thermal energy for storage include heat or cold produced with heat pumps from off-peak, lower cost electric power, a practice called peak shaving; heat from combined heat and power (CHP) power plants; heat produced by renewable electrical energy that exceeds grid demand and waste heat from industrial processes. Heat storage, both seasonal and short term, is considered an important means for cheaply balancing high shares of variable renewable electricity production and integration of electricity and heating sectors in energy systems almost or completely fed by renewable energy. Categories The different kinds of thermal energy storage can be divided into three separate categories: sensible heat, latent heat, and thermo-chemical heat storage. Each of these has different advantages and disadvantages that determine their applications. Sensible heat storage Sensible heat storage (SHS) is the most straightforward method. It simply means the temperature of some medium is either increased or decreased. This type of storage is the most commercially available out of the three; other techniques are less developed. The materials are generally inexpens Document 1::: The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work. History It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council. Function Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres. STEM ambassadors To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. these come from a wide selection of the STEM industries and include TV personalities like Rob Bell. Funding STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments. 
See also The WISE Campaign Engineering and Physical Sciences Research Council National Centre for Excellence in Teaching Mathematics Association for Science Education Glossary of areas of mathematics Glossary of astronomy Glossary of biology Glossary of chemistry Glossary of engineering Glossary of physics Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 3::: Waste heat is heat that is produced by a machine, or other process that uses energy, as a byproduct of doing work. All such processes give off some waste heat as a fundamental result of the laws of thermodynamics. Waste heat has lower utility (or in thermodynamics lexicon a lower exergy or higher entropy) than the original energy source. Sources of waste heat include all manner of human activities, natural systems, and all organisms, for example, incandescent light bulbs get hot, a refrigerator warms the room air, a building gets hot during peak hours, an internal combustion engine generates high-temperature exhaust gases, and electronic components get warm when in operation. Instead of being "wasted" by release into the ambient environment, sometimes waste heat (or cold) can be used by another process (such as using hot engine coolant to heat a vehicle), or a portion of heat that would otherwise be wasted can be reused in the same process if make-up heat is added to the system (as with heat recovery ventilation in a building). Thermal energy storage, which includes technologies both for short- and long-term retention of heat or cold, can create or improve the utility of waste heat (or cold). 
One example is waste heat from air conditioning machinery stored in a buffer tank to aid in night time heating. Another is seasonal thermal energy storage (STES) at a foundry in Sweden. The heat is stored in the bedrock surrounding a cluster of heat exchanger equipped boreholes, and is used for space heating in an adjacent factory as needed, even months later. An example of using STES to use natural waste heat is the Drake Landing Solar Community in Alberta, Canada, which, by using a cluster of boreholes in bedrock for interseasonal heat storage, obtains 97 percent of its year-round heat from solar thermal collectors on the garage roofs. Another STES application is storing winter cold underground, for summer air conditioning. On a biological scale, all organisms reject w Document 4::: Tech City College (Formerly STEM Academy) is a free school sixth form located in the Islington area of the London Borough of Islington, England. It originally opened in September 2013, as STEM Academy Tech City and specialised in Science, Technology, Engineering and Maths (STEM) and the Creative Application of Maths and Science. In September 2015, STEM Academy joined the Aspirations Academy Trust was renamed Tech City College. Tech City College offers A-levels and BTECs as programmes of study for students. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which of these has the greatest capacity for storing thermal energy from the Sun? A. air B. land C. oceans D. plants Answer:
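The sensible-heat storage passage in Document 0 of this record can be made quantitative with the usual relation between stored heat, mass, specific heat capacity, and temperature change; the specific heats quoted here are approximate textbook values rather than figures from the source. Liquid water combines an unusually high specific heat with, in the oceans, an enormous mass, which is why the oceans hold far more of the Sun's thermal energy than air, land, or plants:

\[
Q = m\,c\,\Delta T, \qquad c_{\text{water}} \approx 4.18\ \mathrm{kJ\,kg^{-1}\,K^{-1}}, \quad c_{\text{air}} \approx 1.0\ \mathrm{kJ\,kg^{-1}\,K^{-1}}, \quad c_{\text{dry soil/rock}} \approx 0.8\ \mathrm{kJ\,kg^{-1}\,K^{-1}}
\]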
sciq-9279
multiple_choice
Calcium hydroxide and calcium carbonate are effective in neutralizing the effects of what on lakes?
[ "oil rain", "dioxide rain", "ozone rain", "acid rain" ]
D
Relavent Documents: Document 0::: Lake 226 is one lake in Canada's Experimental Lakes Area (ELA) in Ontario. The ELA is a freshwater and fisheries research facility that operated these experiments alongside Fisheries and Oceans Canada and Environment Canada. In 1968 this area in northwest Ontario was set aside for limnological research, aiming to study the watershed of the 58 small lakes in this area. The ELA projects began as a response to the claim that carbon was the limiting agent causing eutrophication of lakes rather than phosphorus, and that monitoring phosphorus in the water would be a waste of money. This claim was made by soap and detergent companies, as these products do not biodegrade and can cause buildup of phosphates in water supplies that lead to eutrophication. The theory that carbon was the limiting agent was quickly debunked by the ELA Lake 227 experiment that began in 1969, which found that carbon could be drawn from the atmosphere to remain proportional to the input of phosphorus in the water. Experimental Lake 226 was then created to test phosphorus' impact on eutrophication by itself. Lake ecosystem Geography The ELA lakes were far from human activities, therefore allowing the study of environmental conditions without human interaction. Lake 226 was specifically studied over a four-year period, from 1973–1977 to test eutrophication. Lake 226 itself is a 16.2 ha double basin lake located on highly metamorphosed granite known as Precambrian granite. The depth of the lake was measured in 1994 to be 14.7 m for the northeast basin and 11.6 m for the southeast basin. Lake 226 had a total lake volume of 9.6 × 105 m3, prior to the lake being additionally studied for drawdown alongside other ELA lakes. Due to this relatively small fetch of Lake 226, wind action is minimized, preventing resuspension of epilimnetic sediments. Eutrophication experiment To test the effects of fertilization on water quality and algae blooms, Lake 226 was split in half with a curtain. This curtain divi Document 1::: Lake 227 is one of 58 lakes located in the Experimental Lakes Area (ELA) in the Kenora District of Ontario, Canada. Lake 227 is one of only 5 lakes in the Experimental Lakes Area currently involved in long-term research projects, and is of particular note for its importance in long term lake eutrophication studies. The relative absence human activity and pollution makes Lake 227 ideal for limnological research, and the nature of the ELA makes it one of the only places in the world accessible for full lake experiments. At its deepest, Lake 227 is 10 meters deep, and the area of the lake is approximately 5 hectares. Funding and governmental permissions for access to Lake 227 have been unstable in recent years, as control of the ELA was handed off by the Canadian government to the International Institute for Sustainable Development (IISD). Ecology Lake 227 is a freshwater lake. The ELA region is home to a variety of native fish, many of which are planktivorous. Fathead minnows, Fine-scale Dace, and Pearl Dace are all examples of fish that can be found in the lake. The presence of planktivorous fish reduces the relative abundance of larger zooplankton species in the lake, as species like the fathead minnow primarily feed on them. The fish populations in Lake 227 were removed in the 1990s, this resulted in a noticeable increase in the Chaoborus and daphnia populations, in the absence of predation. 
The removal of fish from the lake negates the top-down effect that repressed larger species of zooplankton and aquatic larvae. Research The research in lake 227 is mainly focused on the effects of manipulated nutrients on the interrelated independent variables of microorganism activity and eutrophication. Lake 227 was home to the longest running experiment ever to take place in the ELA. Lake eutrophication and nutrient factors Lake 227 has been used as a real life model for the study of the connection between nutrient input and lake eutrophication. The results of these Document 2::: Nutrient cycling in the Columbia River Basin involves the transport of nutrients through the system, as well as transformations from among dissolved, solid, and gaseous phases, depending on the element. The elements that constitute important nutrient cycles include macronutrients such as nitrogen (as ammonium, nitrite, and nitrate), silicate, phosphorus, and micronutrients, which are found in trace amounts, such as iron. Their cycling within a system is controlled by many biological, chemical, and physical processes. The Columbia River Basin is the largest freshwater system of the Pacific Northwest, and due to its complexity, size, and modification by humans, nutrient cycling within the system is affected by many different components. Both natural and anthropogenic processes are involved in the cycling of nutrients. Natural processes in the system include estuarine mixing of fresh and ocean waters, and climate variability patterns such as the Pacific Decadal Oscillation and the El Nino Southern Oscillation (both climatic cycles that affect the amount of regional snowpack and river discharge). Natural sources of nutrients in the Columbia River include weathering, leaf litter, salmon carcasses, runoff from its tributaries, and ocean estuary exchange. Major anthropogenic impacts to nutrients in the basin are due to fertilizers from agriculture, sewage systems, logging, and the construction of dams. Nutrients dynamics vary in the river basin from the headwaters to the main river and dams, to finally reaching the Columbia River estuary and ocean. Upstream in the headwaters, salmon runs are the main source of nutrients. Dams along the river impact nutrient cycling by increasing residence time of nutrients, and reducing the transport of silicate to the estuary, which directly impacts diatoms, a type of phytoplankton. The dams are also a barrier to salmon migration, and can increase the amount of methane locally produced. The Columbia River estuary exports high rates of n Document 3::: Assimilative capacity is the ability for pollutants to be absorbed by an environment without detrimental effects to the environment or those who use of it. Natural absorption into an environment is achieved through dilution, dispersion and removal through chemical or biological processes. The term assimilative capacity has been used interchangeably with environmental capacity, receiving capacity and absorptive capacity. It is used as a measurement perimeter in hydrology, meteorology and pedology for a variety of environments examples consist of: lakes, rivers, oceans, cities and soils. Assimilative capacity is a subjective measurement that is quantified by governments and institutions such as Environmental Protection Agency (EPA) of environments into guidelines. Using assimilative capacity as a guideline can help the allocation of resources while reducing the impact on organisms in an environment. 
This concept is paired with carrying capacity in order to facilitate sustainable development of city regions. Assimilative capacity has been critiqued as to its effectiveness due to ambiguity in its definition that can confuses readers and false assumptions that a small amount of pollutants has no harmful effect on an environment. Hydrosphere Assimilative capacity in hydrology is defined as the maximum amount of contaminating pollutants that a body of water can naturally absorb without exceeding the water quality guidelines and criteria. This determines the concentration of pollutants that can cause detrimental effects on aquatic life and humans that use it. Self-purification and dilution are the main factors effecting the total amount of assimilative capacity a body of water has. Estimations of breaches of assimilative capacity focus on the health of aquatic organisms in order to predict an excess of pollutants in a body of water. Dilution is the main way that bodies of water reduce the concentration of contaminants to levels under their assimilative capacity. This mea Document 4::: In ecology, base-richness is the level of chemical bases in water or soil, such as calcium or magnesium ions. Many organisms prefer base-rich environments. Chemical bases are alkalis, hence base-rich environments are either neutral or alkaline. Because acid-rich environments have few bases, they are dominated by environmental acids (usually organic acids). However, the relationship between base-richness and acidity is not a rigid one – changes in the levels of acids (such as dissolved carbon dioxide) may significantly change acidity without affecting base-richness. Base-rich terrestrial environments are characteristic of areas where underlying rocks (below soil) are limestone. Seawater is also base-rich, so maritime and marine environments are themselves base-rich. Base-poor environments are characteristic of areas where underlying rocks (below soil) are sandstone or granite, or where the water is derived directly from rainfall (ombrotrophic). Examples of base-rich environments Calcareous grassland Fen Limestone pavement Maquis shrubland Yew woodland Examples of base-poor environments Bog Heath (habitat) Poor fen Moorland Pine woodland Tundra See also Soil Calcicole Calcifuge Ecology Soil chemistry The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Calcium hydroxide and calcium carbonate are effective in neutralizing the effects of what on lakes? A. oil rain B. dioxide rain C. ozone rain D. acid rain Answer:
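A sketch of the chemistry behind the question above, taking sulfuric acid as a representative strong acid in acid rain, shows how both liming agents consume acid and raise the pH of a treated lake:

\[
\mathrm{Ca(OH)_2 + H_2SO_4 \;\longrightarrow\; CaSO_4 + 2\,H_2O}
\]
\[
\mathrm{CaCO_3 + H_2SO_4 \;\longrightarrow\; CaSO_4 + H_2O + CO_2}
\]

In a dilute lake the products remain largely dissolved as calcium and sulfate ions; the molecular equations are written out only to make the neutralization explicit.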
sciq-960
multiple_choice
What kind of gland produces an oily substance that waterproofs the hair and skin?
[ "mucous gland", "nail gland", "sebaceous gland", "secretion gland" ]
C
Relavent Documents: Document 0::: Serous glands secrete serous fluid. They contain serous acini, a grouping of serous cells that secrete serous fluid, isotonic with blood plasma, that contains enzymes such as alpha-amylase. Serous glands are most common in the parotid gland and lacrimal gland but are also present in the submandibular gland and, to a far lesser extent, the sublingual gland. Document 1::: A sebaceous gland, or oil gland, is a microscopic exocrine gland in the skin that opens into a hair follicle to secrete an oily or waxy matter, called sebum, which lubricates the hair and skin of mammals. In humans, sebaceous glands occur in the greatest number on the face and scalp, but also on all parts of the skin except the palms of the hands and soles of the feet. In the eyelids, meibomian glands, also called tarsal glands, are a type of sebaceous gland that secrete a special type of sebum into tears. Surrounding the female nipple, areolar glands are specialized sebaceous glands for lubricating the nipple. Fordyce spots are benign, visible, sebaceous glands found usually on the lips, gums and inner cheeks, and genitals. Structure Location Sebaceous glands are found throughout all areas of the skin, except the palms of the hands and soles of the feet. There are two types of sebaceous glands, those connected to hair follicles and those that exist independently. Sebaceous glands are found in hair-covered areas, where they are connected to hair follicles. One or more glands may surround each hair follicle, and the glands themselves are surrounded by arrector pili muscles, forming a pilosebaceous unit. The glands have an acinar structure (like a many-lobed berry), in which multiple glands branch off a central duct. The glands deposit sebum on the hairs and bring it to the skin surface along the hair shaft. The structure, consisting of hair, hair follicle, arrector pili muscles, and sebaceous gland, is an epidermal invagination known as a pilosebaceous unit. Sebaceous glands are also found in hairless areas (glabrous skin) of the eyelids, nose, penis, labia minora, the inner mucosal membrane of the cheek, and nipples. Some sebaceous glands have unique names. Sebaceous glands on the lip and mucosa of the cheek, and on the genitalia, are known as Fordyce spots, and glands on the eyelids are known as meibomian glands. Sebaceous glands of the breast are also known as Document 2::: The Harderian gland is a gland found within the eye's orbit that occurs in tetrapods (reptiles, amphibians, birds and mammals) that possess a nictitating membrane. The gland can be compound tubular or compound tubuloalveolar, and the fluid it secretes (mucous, serous or lipid) varies between different groups of animals. In some animals, it acts as an accessory to the lacrimal gland, secreting fluid that eases movement of the nictitating membrane. Research has proposed that the gland has several other functions, including that of a photoprotective organ, a location of immune response, a source of thermoregulatory lipids, a source of pheromones, and a site of osmoregulation. In mammals, the gland secretes an oily substance used to preen the fur. The presence or absence of this gland is one of the cues used by palaeontologists to determine when fur evolved in the ancestors of mammals. The Harderian gland was first described in 1694 by Swiss anatomist Johann Jacob Harder (1656–1711). 
He documented his findings in a paper titled Glandula nova lachrymalis una cum ductu excretorio in cervis et damis, ("A new lachrymal gland with an excretory duct in red and fallow deer", English translation). Document 3::: In mammals, trichocytes are the specialized epithelial cells from which the highly mechanically resilient tissues hair and nails are formed. They can be identified by the fact that they express "hard", "trichocyte" or "hair" keratin proteins. These are modified keratins containing large amounts of the amino acid cysteine, which facilitates chemical cross-linking of these proteins to form the tough material from which hair and nail is composed. These cells give rise to non-hair non-keratinized IRSC (inner root sheath cell) as well. See also List of human cell types derived from the germ layers List of distinct cell types in the adult human body Document 4::: Uterine glands or endometrial glands are tubular glands, lined by a simple columnar epithelium, found in the functional layer of the endometrium that lines the uterus. Their appearance varies during the menstrual cycle. During the proliferative phase, uterine glands appear long due to estrogen secretion by the ovaries. During the secretory phase, the uterine glands become very coiled with wide lumens and produce a glycogen-rich secretion known as histotroph or uterine milk. This change corresponds with an increase in blood flow to spiral arteries due to increased progesterone secretion from the corpus luteum. During the pre-menstrual phase, progesterone secretion decreases as the corpus luteum degenerates, which results in decreased blood flow to the spiral arteries. The functional layer of the uterus containing the glands becomes necrotic, and eventually sloughs off during the menstrual phase of the cycle. They are of small size in the unimpregnated uterus, but shortly after impregnation become enlarged and elongated, presenting a contorted or waved appearance. Function Hormones produced in early pregnancy stimulate the uterine glands to secrete a number of substances to give nutrition and protection to the embryo and fetus, and the fetal membranes. These secretions are known as histiotroph, alternatively histotroph, and also as uterine milk. Important uterine milk proteins are glycodelin-A, and osteopontin. Some secretory components from the uterine glands are taken up by the secondary yolk sac lining the exocoelomic cavity during pregnancy, and may thereby assist in providing fetal nutrition. Additional images The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What kind of gland produces an oily substance that waterproofs the hair and skin? A. mucous gland B. nail gland C. sebaceous gland D. secretion gland Answer:
sciq-6822
multiple_choice
The chemistry of each element is determined by its number of what?
[ "nuclei and neutrons", "protons and electrons", "protons and neutrons", "electrons and neutrons" ]
B
Relavent Documents: Document 0::: The periodic table is an arrangement of the chemical elements, structured by their atomic number, electron configuration and recurring chemical properties. In the basic form, elements are presented in order of increasing atomic number, in the reading sequence. Then, rows and columns are created by starting new rows and inserting blank cells, so that rows (periods) and columns (groups) show elements with recurring properties (called periodicity). For example, all elements in group (column) 18 are noble gases that are largely—though not completely—unreactive. The history of the periodic table reflects over two centuries of growth in the understanding of the chemical and physical properties of the elements, with major contributions made by Antoine-Laurent de Lavoisier, Johann Wolfgang Döbereiner, John Newlands, Julius Lothar Meyer, Dmitri Mendeleev, Glenn T. Seaborg, and others. Early history Nine chemical elements – carbon, sulfur, iron, copper, silver, tin, gold, mercury, and lead, have been known since before antiquity, as they are found in their native form and are relatively simple to mine with primitive tools. Around 330 BCE, the Greek philosopher Aristotle proposed that everything is made up of a mixture of one or more roots, an idea originally suggested by the Sicilian philosopher Empedocles. The four roots, which the Athenian philosopher Plato called elements, were earth, water, air and fire. Similar ideas about these four elements existed in other ancient traditions, such as Indian philosophy. A few extra elements were known in the age of alchemy: zinc, arsenic, antimony, and bismuth. Platinum was also known to pre-Columbian South Americans, but knowledge of it did not reach Europe until the 16th century. First categorizations The history of the periodic table is also a history of the discovery of the chemical elements. The first person in recorded history to discover a new element was Hennig Brand, a bankrupt German merchant. Brand tried to discover Document 1::: Isotopes are distinct nuclear species (or nuclides, as technical term) of the same chemical element. They have the same atomic number (number of protons in their nuclei) and position in the periodic table (and hence belong to the same chemical element), but differ in nucleon numbers (mass numbers) due to different numbers of neutrons in their nuclei. While all isotopes of a given element have almost the same chemical properties, they have different atomic masses and physical properties. The term isotope is formed from the Greek roots isos (ἴσος "equal") and topos (τόπος "place"), meaning "the same place"; thus, the meaning behind the name is that different isotopes of a single element occupy the same position on the periodic table. It was coined by Scottish doctor and writer Margaret Todd in 1913 in a suggestion to the British chemist Frederick Soddy. The number of protons within the atom's nucleus is called its atomic number and is equal to the number of electrons in the neutral (non-ionized) atom. Each atomic number identifies a specific element, but not the isotope; an atom of a given element may have a wide range in its number of neutrons. The number of nucleons (both protons and neutrons) in the nucleus is the atom's mass number, and each isotope of a given element has a different mass number. For example, carbon-12, carbon-13, and carbon-14 are three isotopes of the element carbon with mass numbers 12, 13, and 14, respectively. 
The atomic number of carbon is 6, which means that every carbon atom has 6 protons so that the neutron numbers of these isotopes are 6, 7, and 8 respectively. Isotope vs. nuclide A nuclide is a species of an atom with a specific number of protons and neutrons in the nucleus, for example, carbon-13 with 6 protons and 7 neutrons. The nuclide concept (referring to individual nuclear species) emphasizes nuclear properties over chemical properties, whereas the isotope concept (grouping all atoms of each element) emphasizes chemical over Document 2::: In chemistry and physics, the iron group refers to elements that are in some way related to iron; mostly in period (row) 4 of the periodic table. The term has different meanings in different contexts. In chemistry, the term is largely obsolete, but it often means iron, cobalt, and nickel, also called the iron triad; or, sometimes, other elements that resemble iron in some chemical aspects. In astrophysics and nuclear physics, the term is still quite common, and it typically means those three plus chromium and manganese—five elements that are exceptionally abundant, both on Earth and elsewhere in the universe, compared to their neighbors in the periodic table. Titanium and vanadium are also produced in Type Ia supernovae. General chemistry In chemistry, "iron group" used to refer to iron and the next two elements in the periodic table, namely cobalt and nickel. These three comprised the "iron triad". They are the top elements of groups 8, 9, and 10 of the periodic table; or the top row of "group VIII" in the old (pre-1990) IUPAC system, or of "group VIIIB" in the CAS system. These three metals (and the three of the platinum group, immediately below them) were set aside from the other elements because they have obvious similarities in their chemistry, but are not obviously related to any of the other groups. The iron group and its alloys exhibit ferromagnetism. The similarities in chemistry were noted as one of Döbereiner's triads and by Adolph Strecker in 1859. Indeed, Newlands' "octaves" (1865) were harshly criticized for separating iron from cobalt and nickel. Mendeleev stressed that groups of "chemically analogous elements" could have similar atomic weights as well as atomic weights which increase by equal increments, both in his original 1869 paper and his 1889 Faraday Lecture. Analytical chemistry In the traditional methods of qualitative inorganic analysis, the iron group consists of those cations which have soluble chlorides; and are not precipitated Document 3::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. 
Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 4::: In nuclear chemistry, the actinide concept (also known as actinide hypothesis) proposed that the actinides form a second inner transition series homologous to the lanthanides. Its origins stem from observation of lanthanide-like properties in transuranic elements in contrast to the distinct complex chemistry of previously known actinides. Glenn Theodore Seaborg, one of the researchers who synthesized transuranic elements, proposed the actinide concept in 1944 as an explanation for observed deviations and a hypothesis to guide future experiments. It was accepted shortly thereafter, resulting in the placement of a new actinide series comprising elements 89 (actinium) to 103 (lawrencium) below the lanthanides in Dmitri Mendeleev's periodic table of the elements. Origin In the late 1930s, the first four actinides (actinium, thorium, protactinium, and uranium) were known. They were believed to form a fourth series of transition metals, characterized by the filling of 6d orbitals, in which thorium, protactinium, and uranium were respective homologs of hafnium, tantalum, and tungsten. This view was widely accepted as chemical investigations of these elements revealed various high oxidation states and characteristics that closely resembled the 5d transition metals. Nevertheless, research into quantum theory by Niels Bohr and subsequent publications proposed that these elements should constitute a 5f series analogous to the lanthanides, with calculations that the first 5f electron should appear in the range from atomic number 90 (thorium) to 99 (einsteinium). Inconsistencies between theoretical models and known chemical properties thus made it difficult to place these elements in the periodic table. The first appearance of the actinide concept may have been in a 32-column periodic table constructed by Alfred Werner in 1905. Upon determining the arrangement of the lanthanides in the periodic table, he placed thorium as a heavier homolog of cerium, and left spaces for hypot The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The chemistry of each element is determined by its number of what? A. nuclei and neutrons B. protons and electrons C. protons and neutrons D. electrons and neutrons Answer:
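The isotope passage in Document 1 of this record reduces to a small piece of arithmetic that also supports the answer above: the mass number A is the sum of the atomic number Z (the proton count, matched by the electron count in a neutral atom) and the neutron number N, so the carbon isotopes differ only in N while sharing the Z that fixes their chemistry:

\[
A = Z + N: \qquad {}^{12}\mathrm{C}:\ 12 = 6 + 6, \qquad {}^{13}\mathrm{C}:\ 13 = 6 + 7, \qquad {}^{14}\mathrm{C}:\ 14 = 6 + 8
\]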
sciq-7805
multiple_choice
What is the apparatus used for carrying out an electrolysis reaction called?
[ "electrolytic cell", "reversible cell", "fluorescent cell", "biochemical cell" ]
A
Relavent Documents: Document 0::: Bioelectrochemistry is a branch of electrochemistry and biophysical chemistry concerned with electrophysiological topics like cell electron-proton transport, cell membrane potentials and electrode reactions of redox enzymes. History The beginnings of bioelectrochemistry, as well as those of electrochemistry, are closely related to physiology through the works of Luigi Galvani and then Alessandro Volta. The first modern work in this field is considered that of the German physiologist Julius Bernstein (1902) concerning the source of biopotentials due to different ion concentration through the cell's membrane. The domain of bioelectrochemistry has grown considerably over the past century, maintaining the close connections to various medical and biological and engineering disciplines like electrophysiology, biomedical engineering, and enzyme kinetics. The achievements in this field have been awarded several Nobel prizes for Physiology or Medicine. Among prominent electrochemists who have contributed to this field one could mention John Bockris. See also Biomedical engineering Bioelectronics Bioelectrochemical reactor Biomagnetism Enzymatic biofuel cell Protein Film Voltammetry Saltatory conduction Notes External links Johann Wilhelm Ritter contribution to the field Electrochemistry Document 1::: In analytical chemistry, a rotating ring-disk electrode (RRDE) is a double working electrode used in hydrodynamic voltammetry, very similar to a rotating disk electrode (RDE). The electrode rotates during experiments inducing a flux of analyte to the electrode. This system used in electrochemical studies when investigating reaction mechanisms related to redox chemistry and other chemical phenomena. Structure The difference between a rotating ring-disk electrode and a rotating disk electrode is the addition of a second working electrode in the form of a ring around the central disk of the first working electrode. To operate such an electrode, it is necessary to use a potentiostat, such as a bipotentiostat, capable of controlling a four-electrode system. The two electrodes are separated by a non-conductive barrier and connected to the potentiostat through different leads. This rotating hydrodynamic electrode motif can be extended to rotating double-ring electrodes, rotating double-ring-disk electrodes, and even more esoteric constructions, as suited to the experiment. Function The RRDE takes advantage of the laminar flow created during rotation. As the system is rotated, the solution in contact with the electrode is driven to its side, similar to the situation of a rotating disk electrode. As the solution flows to the side, it crosses the ring electrode and flows back into the bulk solution. If the flow in the solution is laminar, the solution is brought in contact with the disk and with the ring quickly afterward, in a very controlled manner. The resulting currents depend on the potential, area, and spacing of the electrodes, as well as the rotation speed and the substrate. This design makes a variety of experiments possible, for example a complex could be oxidized at the disk and then reduced back to the starting material at the ring. It is easy to predict what the ring/disk current ratios is if this process is entirely controlled by the flow of solu Document 2::: A potentiostat is the electronic hardware required to control a three electrode cell and run most electroanalytical experiments. 
A Bipotentiostat and polypotentiostat are potentiostats capable of controlling two working electrodes and more than two working electrodes, respectively. The system functions by maintaining the potential of the working electrode at a constant level with respect to the reference electrode by adjusting the current at an auxiliary electrode. The heart of the different potentiostatic electronic circuits is an operational amplifier (op amp). It consists of an electric circuit which is usually described in terms of simple op amps. Primary use This equipment is fundamental to modern electrochemical studies using three electrode systems for investigations of reaction mechanisms related to redox chemistry and other chemical phenomena. The dimensions of the resulting data depend on the experiment. In voltammetry, electric current in amps is plotted against electric potential in voltage. In a bulk electrolysis total coulombs passed (total electric charge) is plotted against time in seconds even though the experiment measures electric current (amperes) over time. This is done to show that the experiment is approaching an expected number of coulombs. Most early potentiostats could function independently, providing data output through a physical data trace. Modern potentiostats are designed to interface with a personal computer and operate through a dedicated software package. The automated software allows the user rapidly to shift between experiments and experimental conditions. The computer allows data to be stored and analyzed more effectively, rapidly, and accurately than the earlier standalone devices. Basic relationships A potentiostat is a control and measuring device. It comprises an electric circuit which controls the potential across the cell by sensing changes in its resistance, varying accordingly the current supplied Document 3::: The Biopac Student Lab is a proprietary teaching device and method introduced in 1995 as a digital replacement for aging chart recorders and oscilloscopes that were widely used in undergraduate teaching laboratories prior to that time. It is manufactured by BIOPAC Systems, Inc., of Goleta, California. The advent of low cost personal computers meant that older analog technologies could be replaced with powerful and less expensive computerized alternatives. Students in undergraduate teaching labs use the BSL system to record data from their own bodies, animals or tissue preparations. The BSL system integrates hardware, software and curriculum materials including over sixty experiments that students use to study the cardiovascular system, muscles, pulmonary function, autonomic nervous system, and the brain. History of physiology and electricity One of the more complicated concepts for students to grasp is the fact that electricity is flowing throughout a living body at all times and that it is possible to use the signals to measure the performance and health of individual parts of the body. The Biopac Student Lab System helps to explain the concept and allows students to understand physiology. Physiology and electricity share a common history, with some of the pioneering work in each field being done in the late 18th century by Count Alessandro Giuseppe Antonio Anastasio Volta and Luigi Galvani. Count Volta invented the battery and had a unit of electrical measurement named in his honor (the Volt). 
These early researchers studied "animal electricity" and were among the first to realize that applying an electrical signal to an isolated animal muscle caused it to twitch. The Biopac Student Lab uses procedures similar to Count Volta’s to demonstrate how muscles can be electrically stimulated. Concept The BSL system includes data acquisition hardware with built-in universal amplifiers to record and condition electrical signals from the heart, muscle, nerve, brain, eye, Document 4::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the apparatus used for carrying out an electrolysis reaction called? A. electrolytic cell B. reversible cell C. fluorescent cell D. biochemical cell Answer:
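The bulk-electrolysis discussion in Document 2 of this record (total coulombs passed over time) connects to the electrolytic cell named in the answer through Faraday's first law of electrolysis; the copper figures below are illustrative approximations, not values taken from the source:

\[
m = \frac{Q\,M}{z\,F}, \qquad F \approx 96485\ \mathrm{C\,mol^{-1}}
\]

For example, for the reduction of \(\mathrm{Cu^{2+}}\) (z = 2, M ≈ 63.5 g/mol), passing Q = 1930 C would deposit roughly (1930 × 63.5)/(2 × 96485) ≈ 0.64 g of copper at the cathode of an electrolytic cell.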
sciq-6274
multiple_choice
Work is done only if a force is exerted in the direction of what?
[ "wind", "gravity", "north", "motion" ]
D
Relavent Documents: Document 0::: In physics, work is the energy transferred to or from an object via the application of force along a displacement. In its simplest form, for a constant force aligned with the direction of motion, the work equals the product of the force strength and the distance traveled. A force is said to do positive work if when applied it has a component in the direction of the displacement of the point of application. A force does negative work if it has a component opposite to the direction of the displacement at the point of application of the force. For example, when a ball is held above the ground and then dropped, the work done by the gravitational force on the ball as it falls is positive, and is equal to the weight of the ball (a force) multiplied by the distance to the ground (a displacement). If the ball is thrown upwards, the work done by the gravitational force is negative, and is equal to the weight multiplied by the displacement in the upwards direction. Both force and displacement are vectors. The work done is given by the dot product of the two vectors. When the force is constant and the angle between the force and the displacement is also constant, then the work done is given by: Work is a scalar quantity, so it has only magnitude and no direction. Work transfers energy from one place to another, or one form to another. The SI unit of work is the joule (J), the same unit as for energy. History The ancient Greek understanding of physics was limited to the statics of simple machines (the balance of forces), and did not include dynamics or the concept of work. During the Renaissance the dynamics of the Mechanical Powers, as the simple machines were called, began to be studied from the standpoint of how far they could lift a load, in addition to the force they could apply, leading eventually to the new concept of mechanical work. The complete dynamic theory of simple machines was worked out by Italian scientist Galileo Galilei in 1600 in Le Meccaniche (On Me Document 1::: The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered. The 1995 version has 30 five-way multiple choice questions. Example question (question 4): Gender differences The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher. Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. 
They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 3::: Advanced Placement (AP) Physics C: Mechanics (also known as AP Mechanics) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a one-semester calculus-based university course in mechanics. The content of Physics C: Mechanics overlaps with that of AP Physics 1, but Physics 1 is algebra-based, while Physics C is calculus-based. Physics C: Mechanics may be combined with its electricity and magnetism counterpart to form a year-long course that prepares for both exams. Course content Intended to be equivalent to an introductory college course in mechanics for physics or engineering majors, the course modules are: Kinematics Newton's laws of motion Work, energy and power Systems of particles and linear momentum Circular motion and rotation Oscillations and gravitation. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a Calculus I class. This course is often compared to AP Physics 1: Algebra Based for its similar course material involving kinematics, work, motion, forces, rotation, and oscillations. However, AP Physics 1: Algebra Based lacks concepts found in Calculus I, like derivatives or integrals. This course may be combined with AP Physics C: Electricity and Magnetism to make a unified Physics C course that prepares for both exams. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. 
Registration The AP examination for AP Physics C: Mechanics is separate from the AP examination for AP Physics C: Electricity and Magnetism. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday aftern Document 4::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Work is done only if a force is exerted in the direction of what? A. wind B. gravity C. north D. motion Answer:
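A quick worked example of the dot-product definition of work summarized in the retrieved documents above (numbers chosen arbitrarily for illustration): for a constant force F acting over a straight-line displacement d, W = F · d = |F| |d| cos θ. A 10 N force applied parallel to a 2 m displacement (θ = 0°) therefore does W = 10 × 2 × cos 0° = 20 J of work, while the same force applied perpendicular to the motion (θ = 90°) does no work, which is why only the force component along the direction of motion contributes.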
sciq-4584
multiple_choice
Cell walls, plastids, and a large central vacuole distinguish plant cells from what?
[ "phloem cells", "parenchyma cells", "eukaryotic cells", "animal cells" ]
D
Relavent Documents: Document 0::: Plant stem cells Plant stem cells are innately undifferentiated cells located in the meristems of plants. Plant stem cells serve as the origin of plant vitality, as they maintain themselves while providing a steady supply of precursor cells to form differentiated tissues and organs in plants. Two distinct areas of stem cells are recognised: the apical meristem and the lateral meristem. Plant stem cells are characterized by two distinctive properties, which are: the ability to create all differentiated cell types and the ability to self-renew such that the number of stem cells is maintained. Plant stem cells never undergo aging process but immortally give rise to new specialized and unspecialized cells, and they have the potential to grow into any organ, tissue, or cell in the body. Thus they are totipotent cells equipped with regenerative powers that facilitate plant growth and production of new organs throughout lifetime. Unlike animals, plants are immobile. As plants cannot escape from danger by taking motion, they need a special mechanism to withstand various and sometimes unforeseen environmental stress. Here, what empowers them to withstand harsh external influence and preserve life is stem cells. In fact, plants comprise the oldest and the largest living organisms on earth, including Bristlecone Pines in California, U.S. (4,842 years old), and the Giant Sequoia in mountainous regions of California, U.S. (87 meters in height and 2,000 tons in weight). This is possible because they have a modular body plan that enables them to survive substantial damage by initiating continuous and repetitive formation of new structures and organs such as leaves and flowers. Plant stem cells are also characterized by their location in specialized structures called meristematic tissues, which are located in root apical meristem (RAM), shoot apical meristem (SAM), and vascular system ((pro)cambium or vascular meristem.) Research and development Traditionally, plant stem ce Document 1::: The ground tissue of plants includes all tissues that are neither dermal nor vascular. It can be divided into three types based on the nature of the cell walls. This tissue system is present between the dermal tissue and forms the main bulk of the plant body. Parenchyma cells have thin primary walls and usually remain alive after they become mature. Parenchyma forms the "filler" tissue in the soft parts of plants, and is usually present in cortex, pericycle, pith, and medullary rays in primary stem and root. Collenchyma cells have thin primary walls with some areas of secondary thickening. Collenchyma provides extra mechanical and structural support, particularly in regions of new growth. Sclerenchyma cells have thick lignified secondary walls and often die when mature. Sclerenchyma provides the main structural support to a plant. Parenchyma Parenchyma is a versatile ground tissue that generally constitutes the "filler" tissue in soft parts of plants. It forms, among other things, the cortex (outer region) and pith (central region) of stems, the cortex of roots, the mesophyll of leaves, the pulp of fruits, and the endosperm of seeds. Parenchyma cells are often living cells and may remain meristematic, meaning that they are capable of cell division if stimulated. They have thin and flexible cellulose cell walls and are generally polyhedral when close-packed, but can be roughly spherical when isolated from their neighbors. Parenchyma cells are generally large. 
They have large central vacuoles, which allow the cells to store and regulate ions, waste products, and water. Tissue specialised for food storage is commonly formed of parenchyma cells. Parenchyma cells have a variety of functions: In leaves, they form two layers of mesophyll cells immediately beneath the epidermis of the leaf, that are responsible for photosynthesis and the exchange of gases. These layers are called the palisade parenchyma and spongy mesophyll. Palisade parenchyma cells can be either cu Document 2::: In biology, tissue is a historically derived biological organizational level between cells and a complete organ. A tissue is therefore often thought of as an assembly of similar cells and their extracellular matrix from the same embryonic origin that together carry out a specific function. Organs are then formed by the functional grouping together of multiple tissues. Biological organisms follow this hierarchy: Cells < Tissue < Organ < Organ System < Organism The English word "tissue" derives from the French word "tissu", the past participle of the verb tisser, "to weave". The study of tissues is known as histology or, in connection with disease, as histopathology. Xavier Bichat is considered as the "Father of Histology". Plant histology is studied in both plant anatomy and physiology. The classical tools for studying tissues are the paraffin block in which tissue is embedded and then sectioned, the histological stain, and the optical microscope. Developments in electron microscopy, immunofluorescence, and the use of frozen tissue-sections have enhanced the detail that can be observed in tissues. With these tools, the classical appearances of tissues can be examined in health and disease, enabling considerable refinement of medical diagnosis and prognosis. Plant tissue In plant anatomy, tissues are categorized broadly into three tissue systems: the epidermis, the ground tissue, and the vascular tissue. Epidermis – Cells forming the outer surface of the leaves and of the young plant body. Vascular tissue – The primary components of vascular tissue are the xylem and phloem. These transport fluids and nutrients internally. Ground tissue – Ground tissue is less differentiated than other tissues. Ground tissue manufactures nutrients by photosynthesis and stores reserve nutrients. Plant tissues can also be divided differently into two types: Meristematic tissues Permanent tissues. Meristematic tissue Meristematic tissue consists of actively dividing cell Document 3::: A stem is one of two main structural axes of a vascular plant, the other being the root. It supports leaves, flowers and fruits, transports water and dissolved substances between the roots and the shoots in the xylem and phloem, photosynthesis takes place here, stores nutrients, and produces new living tissue. The stem can also be called halm or haulm or culms. The stem is normally divided into nodes and internodes: The nodes are the points of attachment for leaves and can hold one or more leaves. There are sometimes axillary buds between the stem and leaf which can grow into branches (with leaves, conifer cones, or flowers). Adventitious roots may also be produced from the nodes. Vines may produce tendrils from nodes. The internodes distance one node from another. The term "shoots" is often confused with "stems"; "shoots" generally refers to new fresh plant growth, including both stems and other structures like leaves or flowers. 
In most plants, stems are located above the soil surface, but some plants have underground stems. Stems have several main functions: Support for and the elevation of leaves, flowers, and fruits. The stems keep the leaves in the light and provide a place for the plant to keep its flowers and fruits. Transport of fluids between the roots and the shoots in the xylem and phloem. Storage of nutrients. Production of new living tissue. The normal lifespan of plant cells is one to three years. Stems have cells called meristems that annually generate new living tissue. Photosynthesis. Stems have two pipe-like tissues called xylem and phloem. The xylem tissue arises from the cell facing inside and transports water by the action of transpiration pull, capillary action, and root pressure. The phloem tissue arises from the cell facing outside and consists of sieve tubes and their companion cells. The function of phloem tissue is to distribute food from photosynthetic tissue to other tissues. The two tissues are separated by cambium, a tis Document 4::: The meristem is a type of tissue found in plants. It consists of undifferentiated cells (meristematic cells) capable of cell division. Cells in the meristem can develop into all the other tissues and organs that occur in plants. These cells continue to divide until a time when they get differentiated and then lose the ability to divide. Differentiated plant cells generally cannot divide or produce cells of a different type. Meristematic cells are undifferentiated or incompletely differentiated. They are totipotent and capable of continued cell division. Division of meristematic cells provides new cells for expansion and differentiation of tissues and the initiation of new organs, providing the basic structure of the plant body. The cells are small, with small vacuoles or none, and protoplasm filling the cell completely. The plastids (chloroplasts or chromoplasts), are undifferentiated, but are present in rudimentary form (proplastids). Meristematic cells are packed closely together without intercellular spaces. The cell wall is a very thin primary cell wall. The term meristem was first used in 1858 by Carl Wilhelm von Nägeli (1817–1891) in his book Beiträge zur Wissenschaftlichen Botanik ("Contributions to Scientific Botany"). It is derived from the Greek word merizein (μερίζειν), meaning to divide, in recognition of its inherent function. There are three types of meristematic tissues: apical (at the tips), intercalary or basal (in the middle), and lateral (at the sides). At the meristem summit, there is a small group of slowly dividing cells, which is commonly called the central zone. Cells of this zone have a stem cell function and are essential for meristem maintenance. The proliferation and growth rates at the meristem summit usually differ considerably from those at the periphery. Apical meristems Apical meristems are the completely undifferentiated (indeterminate) meristems in a plant. These differentiate into three kinds of primary meristems. The primary The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Cell walls, plastids, and a large central vacuole distinguish plant cells from what? A. phloem cells B. parenchyma cells C. eukaryotic cells D. animal cells Answer:
sciq-6199
multiple_choice
Sulfur can combine with oxygen to produce what?
[ "sulfur bioxide", "sulfur trioxide", "sulfur dioxide", "sulfur oxide" ]
B
Relavent Documents: Document 0::: Sulfur monoxide is an inorganic compound with formula . It is only found as a dilute gas phase. When concentrated or condensed, it converts to S2O2 (disulfur dioxide). It has been detected in space but is rarely encountered intact otherwise. Structure and bonding The SO molecule has a triplet ground state similar to O2 and S2, that is, each molecule has two unpaired electrons. The S−O bond length of 148.1 pm is similar to that found in lower sulfur oxides (e.g. S8O, S−O = 148 pm) but is longer than the S−O bond in gaseous S2O (146 pm), SO2 (143.1 pm) and SO3 (142 pm). The molecule is excited with near infrared radiation to the singlet state (with no unpaired electrons). The singlet state is believed to be more reactive than the ground triplet state, in the same way that singlet oxygen is more reactive than triplet oxygen. Production and reactions Production of SO as a reagent in organic syntheses has centred on using compounds that "extrude" SO. Examples include the decomposition of the relatively simple molecule ethylene episulfoxide: as well as more complex examples, such as a trisulfide oxide, C10H6S3O. C2H4SO → C2H4 + SO The SO molecule is thermodynamically unstable, converting initially to S2O2. SO inserts into alkenes, alkynes and dienes producing thiiranes, molecules with three-membered rings containing sulfur. Generation under extreme conditions In the laboratory, sulfur monoxide can be produced by treating sulfur dioxide with sulfur vapor in a glow discharge. It has been detected in single-bubble sonoluminescence of concentrated sulfuric acid containing some dissolved noble gas. Benner and Stedman developed a chemiluminescence detector for sulfur via the reaction between sulfur monoxide and ozone: SO + O3 → SO2* + O2 SO2* → SO2 + hν Occurrence Ligand for transition metals As a ligand SO can bond in a number different ways: a terminal ligand, with a bent M−O−S arrangement, for example with titanium oxyfluoride a terminal ligand, with a bent M−S Document 1::: Sulfur dioxide (IUPAC-recommended spelling) or sulphur dioxide (traditional Commonwealth English) is the chemical compound with the formula . It is a toxic gas responsible for the odor of burnt matches. It is released naturally by volcanic activity and is produced as a by-product of copper extraction and the burning of sulfur-bearing fossil fuels. Structure and bonding SO2 is a bent molecule with C2v symmetry point group. A valence bond theory approach considering just s and p orbitals would describe the bonding in terms of resonance between two resonance structures. The sulfur–oxygen bond has a bond order of 1.5. There is support for this simple approach that does not invoke d orbital participation. In terms of electron-counting formalism, the sulfur atom has an oxidation state of +4 and a formal charge of +1. Occurrence Sulfur dioxide is found on Earth and exists in very small concentrations in the atmosphere at about 15 ppb. On other planets, sulfur dioxide can be found in various concentrations, the most significant being the atmosphere of Venus, where it is the third-most abundant atmospheric gas at 150 ppm. There, it reacts with water to form clouds of sulfuric acid, and is a key component of the planet's global atmospheric sulfur cycle and contributes to global warming. It has been implicated as a key agent in the warming of early Mars, with estimates of concentrations in the lower atmosphere as high as 100 ppm, though it only exists in trace amounts. 
On both Venus and Mars, as on Earth, its primary source is thought to be volcanic. The atmosphere of Io, a natural satellite of Jupiter, is 90% sulfur dioxide and trace amounts are thought to also exist in the atmosphere of Jupiter. The James Webb Space Telescope has observed the presence of sulfur dioxide on the exoplanet WASP-39b, where it is formed through photochemistry in the planet's atmosphere. As an ice, it is thought to exist in abundance on the Galilean moons—as subliming ice or frost on the tra Document 2::: The element sulfur exists as many allotropes. In number of allotropes, sulfur is second only to carbon. In addition to the allotropes, each allotrope often exists in polymorphs (different crystal structures of the same covalently bonded S molecules) delineated by Greek prefixes (α, β, etc.). Furthermore, because elemental sulfur has been an item of commerce for centuries, its various forms are given traditional names. Early workers identified some forms that have later proved to be single or mixtures of allotropes. Some forms have been named for their appearance, e.g. "mother of pearl sulfur", or alternatively named for a chemist who was pre-eminent in identifying them, e.g. "Muthmann's sulfur I" or "Engel's sulfur". The most commonly encountered form of sulfur is the orthorhombic polymorph of , which adopts a puckered ring – or "crown" – structure. Two other polymorphs are known, also with nearly identical molecular structures. In addition to , sulfur rings of 6, 7, 9–15, 18, and 20 atoms are known. At least five allotropes are uniquely formed at high pressures, two of which are metallic. The number of sulfur allotropes reflects the relatively strong S−S bond of 265 kJ/mol. Furthermore, unlike most elements, the allotropes of sulfur can be manipulated in solutions of organic solvents and are analysed by HPLC. Phase diagram The pressure-temperature (P-T) phase diagram for sulfur is complex (see image). The region labeled I (a solid region), is α-sulfur. High pressure solid allotropes In a high-pressure study at ambient temperatures, four new solid forms, termed II, III, IV, V have been characterized, where α-sulfur is form I. Solid forms II and III are polymeric, while IV and V are metallic (and are superconductive below 10 K and 17 K, respectively). Laser irradiation of solid samples produces three sulfur forms below 200–300 kbar (20–30 GPa). Solid cyclo allotrope preparation Two methods exist for the preparation of the cyclo-sulfur allotropes. One of the m Document 3::: Sulfidation (British spelling also sulphidation) is a process of installing sulfide ions in a material or molecule. The process is widely used to convert oxides to sulfides but is also related to corrosion and surface modification. Inorganic, materials, and organic chemistry Sulfidation is relevant to the formation of sulfide minerals. A large scale application of sulfidation is the conversion of molybdenum oxides to the corresponding sulfides. This conversion is a step in the preparation of catalysts for hydrodesulfurization wherein alumina impregnated with molybdate salts are converted to molybdenum disulfide by the action of hydrogen sulfide. In organosulfur chemistry, sulfiding is often called thiation. The preparation of thioamides from amides involves thiation. A typical reagent is phosphorus pentasulfide (P4S10). 
The idealized equation for this conversion is: RC(O)NH2 + 1/4 P4S10 → RC(S)NH2 + 1/4 P4S6O4 This conversion where an oxygen atom in the amide function is replaced by a sulfur atom involves no redox reaction. Sulfidation of metals It is known that aluminum improves the sulfidation resistance of iron alloys. The sulfidation of tungsten is a multiple step process. The first step is an oxidation reaction, converting the tungsten to a tungsten bronze on the surface of the object. The tungsten bronze coating is then converted to a sulfide. One commonly encountered occurrence of sulfidation in manufacturing environments involves the sulfidic corrosion of metal piping. The increased resistance to corrosion found in stainless steel is attributed to a layer of chromium oxide that forms due to oxidation of the chromium found in the alloy. The process of liquid sulfidation has also been used in the manufacturing of diamond-like carbon films. These films are generally used to coat surfaces to reduce the wear due to friction. The inclusion of sulfidation in the process has been shown to reduce the friction coefficient of the diamond-like car Document 4::: The sulfur cycle is a biogeochemical cycle in which the sulfur moves between rocks, waterways and living systems. It is important in geology as it affects many minerals and in life because sulfur is an essential element (CHNOPS), being a constituent of many proteins and cofactors, and sulfur compounds can be used as oxidants or reductants in microbial respiration. The global sulfur cycle involves the transformations of sulfur species through different oxidation states, which play an important role in both geological and biological processes. Steps of the sulfur cycle are: Mineralization of organic sulfur into inorganic forms, such as hydrogen sulfide (H2S), elemental sulfur, as well as sulfide minerals. Oxidation of hydrogen sulfide, sulfide, and elemental sulfur (S) to sulfate (). Reduction of sulfate to sulfide. Incorporation of sulfide into organic compounds (including metal-containing derivatives). Disproportionation of sulfur compounds (elemental sulfur, sulfite, thiosulfate) into sulfate and hydrogen sulfide. These are often termed as follows: Assimilative sulfate reduction (see also sulfur assimilation) in which sulfate () is reduced by plants, fungi and various prokaryotes. The oxidation states of sulfur are +6 in sulfate and –2 in R–SH. Desulfurization in which organic molecules containing sulfur can be desulfurized, producing hydrogen sulfide gas (H2S, oxidation state = –2). An analogous process for organic nitrogen compounds is deamination. Oxidation of hydrogen sulfide produces elemental sulfur (S8), oxidation state = 0. This reaction occurs in the photosynthetic green and purple sulfur bacteria and some chemolithotrophs. Often the elemental sulfur is stored as polysulfides. Oxidation in elemental sulfur by sulfur oxidizers produces sulfate. Dissimilative sulfur reduction in which elemental sulfur can be reduced to hydrogen sulfide. Dissimilative sulfate reduction in which sulfate reducers generate hydrogen sulfide from sulfate. Sulfur oxidation The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Sulfur can combine with oxygen to produce what? A. sulfur bioxide B. sulfur trioxide C. sulfur dioxide D. sulfur oxide Answer:
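For reference, the oxides discussed in the retrieved documents above arise from the stepwise oxidation of sulfur; a sketch of the standard textbook equations (the vanadium(V) oxide catalyst named here is the common industrial choice, not the only possibility):
S + O2 → SO2 (combustion of sulfur)
2 SO2 + O2 ⇌ 2 SO3 (catalytic oxidation, e.g. over V2O5 in the contact process)
SO3 + H2O → H2SO4 (hydration to sulfuric acid)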
sciq-4935
multiple_choice
How can you prevent your ice cream from getting a sandy texture?
[ "by adding salt", "by using fructose", "by adding oil", "by using lowfat milk" ]
B
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. 
Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 2::: Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers. There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by organizations such as the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (Humanities, Arts, and Social Sciences), rebranded in 2020 as SHAPE (Social Sciences, Humanities and the Arts for People and the Economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM. Terminology History Previously referred to as SMET by the NSF, in the early 1990s the acronym STEM was used by a variety of educators, including Charles E. Vela, the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE). Moreover, the CAHSEE started a summer program for talented under-represented students in the Washington, D.C., area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering edu Document 3::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. 
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 4::: Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered in both undergraduate as well postgraduate with specializations. Academic courses Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism. Example universities with CSE majors and departments APJ Abdul Kalam Technological University American International University-B The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. How can you prevent your ice cream from getting a sandy texture? A. by adding salt B. by using fructose C. by adding oil D. by using lowfat milk Answer:
sciq-10460
multiple_choice
In science, what process produces evidence that helps answer questions and solve problems?
[ "manipulation", "information", "investigation", "suspension" ]
C
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Discovery is the act of detecting something new, or something previously unrecognized as meaningful. Concerning sciences and academic disciplines, discovery is the observation of new phenomena, new actions, or new events and providing new reasoning to explain the knowledge gathered through such observations with previously acquired knowledge from abstract thought and everyday experiences. A discovery may sometimes be based on earlier discoveries, collaborations, or ideas. Some discoveries represent a radical breakthrough in knowledge or technology. New discoveries are acquired through various senses and are usually assimilated, merging with pre-existing knowledge and actions. Questioning is a major form of human thought and interpersonal communication, and plays a key role in discovery. Discoveries are often made due to questions. Some discoveries lead to the invention of objects, processes, or techniques. A discovery may sometimes be based on earlier discoveries, collaborations or ideas, and the process of discovery requires at least the awareness that an existing concept or method can be modified or transformed. However, some discoveries also represent a radical breakthrough in knowledge. Science Within scientific disciplines, discovery is the observation of new phenomena, actions, or events which help explain the knowledge gathered through previously acquired scientific evidence. 
In science, exploration is one of three purposes of research, the other two being description and explanation. Discovery is made by providing observational evidence and attempts to develop an initial, rough understanding of some phenomenon. Discovery within the field of particle physics has an accepted definition for what constitutes a discovery: a five-sigma level of certainty. Such a level defines statistically how unlikely it is that an experimental result is due to chance. The combination of a five-sigma level of certainty, and independent confirmation by other experiments, turn f Document 2::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 3::: The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields. Description The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions. 
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.” Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers. Current efforts The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo Document 4::: In biostatistics, strength of evidence is the strength of a conducted study that can be assessed in health care interventions, e.g. to identify effective health care programs and evaluate the quality of the research in health care. It can be graded with different descriptive or analytical statistical methods. Hierarchy of study design, for example using a case-study, ecological study, cross-sectional, case-control, cohort, or experimental, although not always in this order is a general rule to a high "strength of evidence" of a clinical study. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. In science, what process produces evidence that helps answer questions and solve problems? A. manipulation B. information C. investigation D. suspension Answer:
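A worked relation for the adiabatic-expansion conceptual question quoted in the retrieved documents above, as a sketch assuming a reversible, quasi-static expansion of an ideal gas with constant heat capacities (an unresisted free expansion would instead leave the temperature unchanged): T V^(γ−1) = constant, with γ = Cp/Cv > 1, so as the volume V grows the temperature T must fall; the gas does work on its surroundings at the expense of its internal energy.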
sciq-5551
multiple_choice
Females are not influenced by the male sex hormone testosterone during embryonic development because they lack what?
[ "m chromosome​", "x chromosome​", "y chromosome", "z chromosome​" ]
C
Relavent Documents: Document 0::: Prenatal Testosterone Transfer (also known as prenatal androgen transfer or prenatal hormone transfer) refers to the phenomenon in which testosterone synthesized by a developing male fetus transfers to one or more developing fetuses within the womb and influences development. This typically results in the partial masculinization of specific aspects of female behavior, cognition, and morphology, though some studies have found that testosterone transfer can cause an exaggerated masculinization in males. There is strong evidence supporting the occurrence of prenatal testosterone transfer in rodents and other litter-bearing species, such as pigs. When it comes to humans, studies comparing dizygotic opposite-sex and same-sex twins suggest the phenomenon may occur, though the results of these studies are often inconsistent. Mechanisms of transfer Testosterone is a steroid hormone; therefore it has the ability to diffuse through the amniotic fluid between fetuses. In addition, hormones can transfer among fetuses through the mother's bloodstream. Consequences of testosterone transfer During prenatal development, testosterone exposure is directly responsible for masculinizing the genitals and brain structures. This exposure leads to an increase in male-typical behavior. Animal studies Most animal studies are performed on rats or mice. In these studies, the amount of testosterone each individual fetus is exposed to depends on its intrauterine position (IUP). Each gestating fetus not at either end of the uterine horn is surrounded by either two males (2M), two females (0M), or one female and one male (1M). Development of the fetus varies widely according to its IUP. Mice In mice, prenatal testosterone transfer causes higher blood concentrations of testosterone in 2M females when compared to 1M or 0M females. This has a variety of consequences on later female behavior, physiology, and morphology. Below is a table comparing physiological, morphological, and behavioral diffe Document 1::: Reproductive biology includes both sexual and asexual reproduction. Reproductive biology includes a wide number of fields: Reproductive systems Endocrinology Sexual development (Puberty) Sexual maturity Reproduction Fertility Human reproductive biology Endocrinology Human reproductive biology is primarily controlled through hormones, which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands, and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands. Reproductive systems Internal and external organs are included in the reproductive system. There are two reproductive systems including the male and female, which contain different organs from one another. These systems work together in order to produce offspring. Female reproductive system The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth. These structures include: Ovaries Oviducts Uterus Vagina Mammary Glands Estrogen is one of the sexual reproductive hormones that aid in the sexual reproductive system of the female. Male reproductive system The male reproductive system includes testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia. 
Testosterone, an androgen, although present in both males and females, is relatively more abundant in males. Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system However, the enzyme aromatase is present in testes and capable of synthesizing estrogens from androgens. Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract. Animal Reproductive Biology Animal reproduction oc Document 2::: Sexual differentiation in humans is the process of development of sex differences in humans. It is defined as the development of phenotypic structures consequent to the action of hormones produced following gonadal determination. Sexual differentiation includes development of different genitalia and the internal genital tracts and body hair plays a role in sex identification. The development of sexual differences begins with the XY sex-determination system that is present in humans, and complex mechanisms are responsible for the development of the phenotypic differences between male and female humans from an undifferentiated zygote. Females typically have two X chromosomes, and males typically have a Y chromosome and an X chromosome. At an early stage in embryonic development, both sexes possess equivalent internal structures. These are the mesonephric ducts and paramesonephric ducts. The presence of the SRY gene on the Y chromosome causes the development of the testes in males, and the subsequent release of hormones which cause the paramesonephric ducts to regress. In females, the mesonephric ducts regress. Divergent sexual development, known as intersex, can be a result of genetic and hormonal factors. Sex determination Most mammals, including humans, have an XY sex-determination system: the Y chromosome carries factors responsible for triggering male development. In the absence of a Y chromosome, the fetus will undergo female development. This is because of the presence of the sex-determining region of the Y chromosome, also known as the SRY gene. Thus, male mammals typically have an X and a Y chromosome (XY), while female mammals typically have two X chromosomes (XX). Chromosomal sex is determined at the time of fertilization; a chromosome from the sperm cell, either X or Y, fuses with the X chromosome in the egg cell. Gonadal sex refers to the gonads, that is the testis or ovaries, depending on which genes are expressed. Phenotypic sex refers to the struct Document 3::: H-Y antigen is a male tissue specific antigen. Originally thought to trigger the formation of testes (via loci, an autosomal gene that generates the antigen and one that generates the receptor) it is now known that it does not trigger the formation of testes but may be activated by the formation of testes. There are several antigens which qualify as H-Y as defined by rejection of male skin grafts in female hosts or detected by cytotoxic T cells or antibodies. One H-Y, secreted by the testis, defined by antibodies, is identical to müllerian-inhibiting substance (AMH gene). Another H-Y, minor histocompatibility antigen, seemed to be encoded in the SMCY gene (acronym for 'selected mouse cDNA on Y'), later identified as an 11-residue peptide from the Lysine-Specific Demethylase 5D protein (KDM5D gene) presented by HLA-B7. A third example is MEA1. 
Association with spermatogenesis It has been shown that male mice lacking in the H-Y antigen, hence lacking in the gene producing it, have also lost genetic information responsible for spermatogenesis. This result also identified a gene on the mouse Y chromosome, distinct from the testis-determining gene, that was essential for spermatogenesis, thus raising the possibility that the very product of this "spermatogenesis gene" is the H-Y antigen. Male homosexuality and the birth order effect Among humans, it has been observed that men with more older brothers tend to have a higher chance of being homosexual (see Fraternal birth order and male sexual orientation). For every additional older brother, a man's chance of being homosexual can rise by up to 33%. One theory to explain this involves H-Y antigens, which suggests that a maternal immune reaction to these antigens has, to an extent, an inhibitory effect on the masculinization of the brain, and therefore, the more male foetuses that the mother of a man has had, the greater the maternal immune response towards him and thus the greater the inhibitory effect on brain masculin Document 4::: Sex-determining region Y protein (SRY), or testis-determining factor (TDF), is a DNA-binding protein (also known as gene-regulatory protein/transcription factor) encoded by the SRY gene that is responsible for the initiation of male sex determination in therian mammals (placental mammals and marsupials). SRY is an intronless sex-determining gene on the Y chromosome. Mutations in this gene lead to a range of disorders of sex development with varying effects on an individual's phenotype and genotype. SRY is a member of the SOX (SRY-like box) gene family of DNA-binding proteins. When complexed with the (SF-1) protein, SRY acts as a transcription factor that causes upregulation of other transcription factors, most importantly SOX9. Its expression causes the development of primary sex cords, which later develop into seminiferous tubules. These cords form in the central part of the yet-undifferentiated gonad, turning it into a testis. The now-induced Leydig cells of the testis then start secreting testosterone, while the Sertoli cells produce anti-Müllerian hormone. SRY gene effects normally take place 6–8 weeks after fetus formation which inhibits the female anatomical structural growth in males. It also works towards developing the secondary sexual characteristics of males. Gene evolution and regulation Evolution SRY may have arisen from a gene duplication of the X chromosome bound gene SOX3, a member of the SOX family. This duplication occurred after the split between monotremes and therians. Monotremes lack SRY and some of their sex chromosomes share homology with bird sex chromosomes. SRY is a quickly evolving gene, and its regulation has been difficult to study because sex determination is not a highly conserved phenomenon within the animal kingdom. Even within marsupials and placentals, which use SRY in their sex determination process, the action of SRY differs between species. The gene sequence also changes; while the core of the gene, the high-mobility group The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Females are not influenced by the male sex hormone testosterone during embryonic development because they lack what? A. m chromosome​ B. x chromosome​ C. y chromosome D. z chromosome​ Answer: