id
stringlengths
6
15
question_type
stringclasses
1 value
question
stringlengths
15
683
choices
listlengths
4
4
answer
stringclasses
5 values
explanation
stringclasses
481 values
prompt
stringlengths
1.75k
10.9k
sciq-2106
multiple_choice
What is the most common fossil fuel?
[ "methane", "uranium", "coal", "diesel oil" ]
C
Relavent Documents: Document 0::: Phyllocladane is a tricyclic diterpane which is generally found in gymnosperm resins. It has a formula of C20H34 and a molecular weight of 274.4840. As a biomarker, it can be used to learn about the gymnosperm input into a hydrocarbon deposit, and about the age of the deposit in general. It indicates a terrogenous origin of the source rock. Diterpanes, such as Phyllocladane are found in source rocks as early as the middle and late Devonian periods, which indicates any rock containing them must be no more than approximately 360 Ma. Phyllocladane is commonly found in lignite, and like other resinites derived from gymnosperms, is naturally enriched in 13C. This enrichment is a result of the enzymatic pathways used to synthesize the compound. The compound can be identified by GC-MS. A peak of m/z 123 is indicative of tricyclic diterpenoids in general, and phyllocladane in particular is further characterized by strong peaks at m/z 231 and m/z 189. Presence of phyllocladane and its relative abundance to other tricyclic diterpanes can be used to differentiate between various oil fields. Document 1::: Cellulosic ethanol is ethanol (ethyl alcohol) produced from cellulose (the stringy fiber of a plant) rather than from the plant's seeds or fruit. It can be produced from grasses, wood, algae, or other plants. It is generally discussed for use as a biofuel. The carbon dioxide that plants absorb as they grow offsets some of the carbon dioxide emitted when ethanol made from them is burned, so cellulosic ethanol fuel has the potential to have a lower carbon footprint than fossil fuels. Interest in cellulosic ethanol is driven by its potential to replace ethanol made from corn or sugarcane. Since these plants are also used for food products, diverting them for ethanol production can cause food prices to rise; cellulose-based sources, on the other hand, generally do not compete with food, since the fibrous parts of plants are mostly inedible to humans. Another potential advantage is the high diversity and abundance of cellulose sources; grasses, trees and algae are found in almost every environment on Earth. Even municipal solid waste components like paper could conceivably be made into ethanol. The main current disadvantage of cellulosic ethanol is its high cost of production, which is more complex and requires more steps than corn-based or sugarcane-based ethanol. Cellulosic ethanol received significant attention in the 2000s and early 2010s. The United States government in particular funded research into its commercialization and set targets for the proportion of cellulosic ethanol added to vehicle fuel. A large number of new companies specializing in cellulosic ethanol, in addition to many existing companies, invested in pilot-scale production plants. However, the much cheaper manufacturing of grain-based ethanol, along with the low price of oil in the 2010s, meant that cellulosic ethanol was not competitive with these established fuels. As a result, most of the new refineries were closed by the mid-2010s and many of the newly founded companies became insolvent. A f Document 2::: Biotic material or biological derived material is any material that originates from living organisms. Most such materials contain carbon and are capable of decay. The earliest life on Earth arose at least 3.5 billion years ago. Earlier physical evidences of life include graphite, a biogenic substance, in 3.7 billion-year-old metasedimentary rocks discovered in southwestern Greenland, as well as, "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia. Earth's biodiversity has expanded continually except when interrupted by mass extinctions. Although scholars estimate that over 99 percent of all species of life (over five billion) that ever lived on Earth are extinct, there are still an estimated 10–14 million extant species, of which about 1.2 million have been documented and over 86% have not yet been described. Examples of biotic materials are wood, straw, humus, manure, bark, crude oil, cotton, spider silk, chitin, fibrin, and bone. The use of biotic materials, and processed biotic materials (bio-based material) as alternative natural materials, over synthetics is popular with those who are environmentally conscious because such materials are usually biodegradable, renewable, and the processing is commonly understood and has minimal environmental impact. However, not all biotic materials are used in an environmentally friendly way, such as those that require high levels of processing, are harvested unsustainably, or are used to produce carbon emissions. When the source of the recently living material has little importance to the product produced, such as in the production of biofuels, biotic material is simply called biomass. Many fuel sources may have biological sources, and may be divided roughly into fossil fuels, and biofuel. In soil science, biotic material is often referred to as organic matter. Biotic materials in soil include glomalin, Dopplerite and humic acid. Some biotic material may not be considered to be organic matte Document 3::: Corn ethanol is ethanol produced from corn biomass and is the main source of ethanol fuel in the United States, mandated to be blended with gasoline in the Renewable Fuel Standard. Corn ethanol is produced by ethanol fermentation and distillation. It is debatable whether the production and use of corn ethanol results in lower greenhouse gas emissions than gasoline. Approximately 45% of U.S. corn croplands are used for ethanol production. Uses Since 2001, corn ethanol production has increased by more than several times. Out of 9.50 billions of bushels of corn produced in 2001, 0.71 billions of bushels were used to produce corn ethanol. Compared to 2018, out of 14.62 billions of bushels of corn produced, 5.60 billion bushels were used to produce corn ethanol, reported by the United States Department of Energy. Overall, 94% of ethanol in the United States is produced from corn. Currently, corn ethanol is mainly used in blends with gasoline to create mixtures such as E10, E15, and E85. Ethanol is mixed into more than 98% of United States gasoline to reduce air pollution. Corn ethanol is used as an oxygenate when mixed with gasoline. E10 and E15 can be used in all engines without modification. However, blends like E85, with a much greater ethanol content, require significant modifications to be made before an engine can run on the mixture without damaging the engine. Some vehicles that currently use E85 fuel, also called flex fuel, include, the Ford Focus, Dodge Durango, and Toyota Tundra, among others. The future use of corn ethanol as a main gasoline replacement is unknown. Corn ethanol has yet to be proven to be as cost effective as gasoline due to corn ethanol being much more expensive to create compared to gasoline. Corn ethanol has to go through an extensive milling process before it can be used as a fuel source. One major drawback with corn ethanol, is the energy returned on energy invested (EROI), meaning the energy outputted in comparison to the energy requ Document 4::: Biodesulfurization is the process of removing sulfur from crude oil through the use of microorganisms or their enzymes. Background Crude oil contains sulfur in its composition, with the latter being the most abundant element after carbon and hydrogen. Depending on its source, the amount of sulfur present in crude oil can range from 0.05 to 10%. Accordingly, the oil can be classified as sweet or sour if the sulfur concentration is below or above 0.5%, respectively. The combustion of crude oil releases sulfur oxides (SOx) to the atmosphere, which are harmful to public health and contribute to serious environmental effects such as air pollution and acid rains. In addition, the sulfur content in crude oil is a major problem for refineries, as it promotes the corrosion of the equipment and the poisoning of the noble metal catalysts. The levels of sulfur in any oil field are too high for the fossil fuels derived from it (such as gasoline, diesel, or jet fuel ) to be used in combustion engines without pre-treatment to remove organosulfur compounds. The reduction of the concentration of sulfur in crude oil becomes necessary to mitigate one of the leading sources of the harmful health and environmental effects caused by its combustion. In this sense, the European union has taken steps to decrease the sulfur content in diesel below 10 ppm, while the US has made efforts to restrict the sulfur content in diesel and gasoline to a maximum of 15 ppm. The reduction of sulfur compounds in oil fuels can be achieved by a process named desulfurization. Methods used for desulfurization include, among others, hydrodesulfurization, oxidative desulfurization, extractive desulfurization, and extraction by ionic liquids. Despite their efficiency at reducing sulfur content, the conventional desulfurization methods are still accountable for a significant amount of the CO2 emissions associated with the crude oil refining process, releasing up to 9000 metric tons per year. Furthermore, the The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the most common fossil fuel? A. methane B. uranium C. coal D. diesel oil Answer:
sciq-8486
multiple_choice
What is the scientific term for the act of eating or feeding?
[ "excretion", "ingestion", "swallowing", "secretion" ]
B
Relavent Documents: Document 0::: Feeding is the process by which organisms, typically animals, obtain food. Terminology often uses either the suffixes -vore, -vory, or -vorous from Latin vorare, meaning "to devour", or -phage, -phagy, or -phagous from Greek φαγεῖν (), meaning "to eat". Evolutionary history The evolution of feeding is varied with some feeding strategies evolving several times in independent lineages. In terrestrial vertebrates, the earliest forms were large amphibious piscivores 400 million years ago. While amphibians continued to feed on fish and later insects, reptiles began exploring two new food types, other tetrapods (carnivory), and later, plants (herbivory). Carnivory was a natural transition from insectivory for medium and large tetrapods, requiring minimal adaptation (in contrast, a complex set of adaptations was necessary for feeding on highly fibrous plant materials). Evolutionary adaptations The specialization of organisms towards specific food sources is one of the major causes of evolution of form and function, such as: mouth parts and teeth, such as in whales, vampire bats, leeches, mosquitos, predatory animals such as felines and fishes, etc. distinct forms of beaks in birds, such as in hawks, woodpeckers, pelicans, hummingbirds, parrots, kingfishers, etc. specialized claws and other appendages, for apprehending or killing (including fingers in primates) changes in body colour for facilitating camouflage, disguise, setting up traps for preys, etc. changes in the digestive system, such as the system of stomachs of herbivores, commensalism and symbiosis Classification By mode of ingestion There are many modes of feeding that animals exhibit, including: Filter feeding: obtaining nutrients from particles suspended in water Deposit feeding: obtaining nutrients from particles suspended in soil Fluid feeding: obtaining nutrients by consuming other organisms' fluids Bulk feeding: obtaining nutrients by eating all of an organism. Ram feeding and suction feeding: in Document 1::: Digestion is the breakdown of large insoluble food compounds into small water-soluble components so that they can be absorbed into the blood plasma. In certain organisms, these smaller substances are absorbed through the small intestine into the blood stream. Digestion is a form of catabolism that is often divided into two processes based on how food is broken down: mechanical and chemical digestion. The term mechanical digestion refers to the physical breakdown of large pieces of food into smaller pieces which can subsequently be accessed by digestive enzymes. Mechanical digestion takes place in the mouth through mastication and in the small intestine through segmentation contractions. In chemical digestion, enzymes break down food into the small compounds that the body can use. In the human digestive system, food enters the mouth and mechanical digestion of the food starts by the action of mastication (chewing), a form of mechanical digestion, and the wetting contact of saliva. Saliva, a liquid secreted by the salivary glands, contains salivary amylase, an enzyme which starts the digestion of starch in the food; the saliva also contains mucus, which lubricates the food, and hydrogen carbonate, which provides the ideal conditions of pH (alkaline) for amylase to work, and electrolytes (Na+, K+, Cl−, HCO−3). About 30% of starch is hydrolyzed into disaccharide in the oral cavity (mouth). After undergoing mastication and starch digestion, the food will be in the form of a small, round slurry mass called a bolus. It will then travel down the esophagus and into the stomach by the action of peristalsis. Gastric juice in the stomach starts protein digestion. Gastric juice mainly contains hydrochloric acid and pepsin. In infants and toddlers, gastric juice also contains rennin to digest milk proteins. As the first two chemicals may damage the stomach wall, mucus and bicarbonates are secreted by the stomach. They provide a slimy layer that acts as a shield against the damag Document 2::: Eating (also known as consuming) is the ingestion of food. In the natural biological world, this is typically to provide a heterotrophic organism with energy and nutrients and to allow for growth. Animals and other heterotrophs must eat in order to survive — carnivores eat other animals, herbivores eat plants, omnivores consume a mixture of both plant and animal matter, and detritivores eat detritus. Fungi digest organic matter outside their bodies as opposed to animals that digest their food inside their bodies. For humans, eating is more complex, but is typically an activity of daily living. Physicians and dieticians consider a healthful diet essential for maintaining peak physical condition. Some individuals may limit their amount of nutritional intake. This may be a result of a lifestyle choice: as part of a diet or as religious fasting. Limited consumption may be due to hunger or famine. Overconsumption of calories may lead to obesity and the reasons behind it are myriad but its prevalence has led some to declare an "obesity epidemic". Eating practices among humans Many homes have a large kitchen area devoted to preparation of meals and food, and may have a dining room, dining hall, or another designated area for eating. Most societies also have restaurants, food courts, and food vendors so that people may eat when away from home, when lacking time to prepare food, or as a social occasion. At their highest level of sophistication, these places become "theatrical spectacles of global cosmopolitanism and myth." At picnics, potlucks, and food festivals, eating is in fact the primary purpose of a social gathering. At many social events, food and beverages are made available to attendees. People usually have two or three meals a day. Snacks of smaller amounts may be consumed between meals. Doctors in the UK recommend three meals a day (with between 400 and 600 kcal per meal), with four to six hours between. Having three well-balanced meals (described as: half o Document 3::: ' is the process of absorption of vitamins, minerals, and other chemicals from food as part of the nutrition of an organism. In humans, this is always done with a chemical breakdown (enzymes and acids) and physical breakdown (oral mastication and stomach churning).chemical alteration of substances in the bloodstream by the liver or cellular secretions. Although a few similar compounds can be absorbed in digestion bio assimilation, the bioavailability of many compounds is dictated by this second process since both the liver and cellular secretions can be very specific in their metabolic action (see chirality). This second process is where the absorbed food reaches the cells via the liver. Most foods are composed of largely indigestible components depending on the enzymes and effectiveness of an animal's digestive tract. The most well-known of these indigestible compounds is cellulose; the basic chemical polymer in the makeup of plant cell walls. Most animals, however, do not produce cellulase; the enzyme needed to digest cellulose. However some animal and species have developed symbiotic relationships with cellulase-producing bacteria (see termites and metamonads.) This allows termites to use the energy-dense cellulose carbohydrate. Other such enzymes are known to significantly improve bio-assimilation of nutrients. Because of the use of bacterial derivatives, enzymatic dietary supplements now contain such enzymes as amylase, glucoamylase, protease, invertase, peptidase, lipase, lactase, phytase, and cellulase. Examples of biological assimilation Photosynthesis, a process whereby carbon dioxide and water are transformed into a number of organic molecules in plant cells. Nitrogen fixation from the soil into organic molecules by symbiotic bacteria which live in the roots of certain plants, such as Leguminosae. Magnesium supplements orotate, oxide, sulfate, citrate, and glycerate are all structurally similar. However, oxide and sulfate are not water-soluble Document 4::: The trophic level of an organism is the position it occupies in a food web. A food chain is a succession of organisms that eat other organisms and may, in turn, be eaten themselves. The trophic level of an organism is the number of steps it is from the start of the chain. A food web starts at trophic level 1 with primary producers such as plants, can move to herbivores at level 2, carnivores at level 3 or higher, and typically finish with apex predators at level 4 or 5. The path along the chain can form either a one-way flow or a food "web". Ecological communities with higher biodiversity form more complex trophic paths. The word trophic derives from the Greek τροφή (trophē) referring to food or nourishment. History The concept of trophic level was developed by Raymond Lindeman (1942), based on the terminology of August Thienemann (1926): "producers", "consumers", and "reducers" (modified to "decomposers" by Lindeman). Overview The three basic ways in which organisms get food are as producers, consumers, and decomposers. Producers (autotrophs) are typically plants or algae. Plants and algae do not usually eat other organisms, but pull nutrients from the soil or the ocean and manufacture their own food using photosynthesis. For this reason, they are called primary producers. In this way, it is energy from the sun that usually powers the base of the food chain. An exception occurs in deep-sea hydrothermal ecosystems, where there is no sunlight. Here primary producers manufacture food through a process called chemosynthesis. Consumers (heterotrophs) are species that cannot manufacture their own food and need to consume other organisms. Animals that eat primary producers (like plants) are called herbivores. Animals that eat other animals are called carnivores, and animals that eat both plants and other animals are called omnivores. Decomposers (detritivores) break down dead plant and animal material and wastes and release it again as energy and nutrients into The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the scientific term for the act of eating or feeding? A. excretion B. ingestion C. swallowing D. secretion Answer:
sciq-6175
multiple_choice
From what did the first proto-oncogenes arise?
[ "carcinogens", "spores", "viral infections", "bacteria" ]
C
Relavent Documents: Document 0::: Evolution of cells refers to the evolutionary origin and subsequent evolutionary development of cells. Cells first emerged at least 3.8 billion years ago approximately 750 million years after Earth was formed. The first cells The initial development of the cell marked the passage from prebiotic chemistry to partitioned units resembling modern cells. The final transition to living entities that fulfill all the definitions of modern cells depended on the ability to evolve effectively by natural selection. This transition has been called the Darwinian transition. If life is viewed from the point of view of replicator molecules, cells satisfy two fundamental conditions: protection from the outside environment and confinement of biochemical activity. The former condition is needed to keep complex molecules stable in a varying and sometimes aggressive environment; the latter is fundamental for the evolution of biocomplexity. If freely floating molecules that code for enzymes are not enclosed in cells, the enzymes will automatically benefit neighboring replicator molecules as well. Thus, the consequences of diffusion in non-partitioned lifeforms would result in "parasitism by default." Therefore, the selection pressure on replicator molecules will be lower, as the 'lucky' molecule that produces the better enzyme does not fully leverage its advantage over its close neighbors. In contrast, if the molecule is enclosed in a cell membrane, the enzymes coded will be available only to itself. That molecule will uniquely benefit from the enzymes it codes for, increasing individuality and thus accelerating natural selection. Partitioning may have begun from cell-like spheroids formed by proteinoids, which are observed by heating amino acids with phosphoric acid as a catalyst. They bear much of the basic features provided by cell membranes. Proteinoid-based protocells enclosing RNA molecules could have been the first cellular life forms on Earth. Another possibility is that the Document 1::: This is a list of topics in molecular biology. See also index of biochemistry articles. Document 2::: In biology, cell theory is a scientific theory first formulated in the mid-nineteenth century, that organisms are made up of cells, that they are the basic structural/organizational unit of all organisms, and that all cells come from pre-existing cells. Cells are the basic unit of structure in all organisms and also the basic unit of reproduction. The theory was once universally accepted, but now some biologists consider non-cellular entities such as viruses living organisms, and thus disagree with the first tenet. As of 2021: "expert opinion remains divided roughly a third each between yes, no and don’t know". As there is no universally accepted definition of life, discussion still continues. History With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as it was believed no one else had seen these. To further support his theory, Matthias Schleiden and Theodor Schwann both also studied cells of both animal and plants. What they discovered were significant differences between the two types of cells. This put forth the idea that cells were not only fundamental to plants, but animals as well. Microscopes The discovery of the cell was made possible through the invention of the microscope. In the first century BC, Romans were able to make glass. They discovered that objects appeared to be larger under the glass. The expanded use of lenses in eyeglasses in the 13th century probably led to wider spread use of simple microscopes (magnifying glasses) with limited magnification. Compound microscopes, which combine an objective lens with an eyepiece to view a real image achieving much higher magnification, first appeared in Europe around 1620. In 1665, Robert Hooke used a microscope Document 3::: MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States. Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students, the remainder require a subscription. the service is suspended with the message to: "Please check back with us in 2017". External links MicrobeLibrary Microbiology Document 4::: Microbiology () is the scientific study of microorganisms, those being of unicellular (single-celled), multicellular (consisting of complex cells), or acellular (lacking cells). Microbiology encompasses numerous sub-disciplines including virology, bacteriology, protistology, mycology, immunology, and parasitology. Eukaryotic microorganisms possess membrane-bound organelles and include fungi and protists, whereas prokaryotic organisms—all of which are microorganisms—are conventionally classified as lacking membrane-bound organelles and include Bacteria and Archaea. Microbiologists traditionally relied on culture, staining, and microscopy for the isolation and identification of microorganisms. However, less than 1% of the microorganisms present in common environments can be cultured in isolation using current means. With the emergence of biotechnology, Microbiologists currently rely on molecular biology tools such as DNA sequence-based identification, for example, the 16S rRNA gene sequence used for bacterial identification. Viruses have been variably classified as organisms, as they have been considered either as very simple microorganisms or very complex molecules. Prions, never considered as microorganisms, have been investigated by virologists, however, as the clinical effects traced to them were originally presumed due to chronic viral infections, virologists took a search—discovering "infectious proteins". The existence of microorganisms was predicted many centuries before they were first observed, for example by the Jains in India and by Marcus Terentius Varro in ancient Rome. The first recorded microscope observation was of the fruiting bodies of moulds, by Robert Hooke in 1666, but the Jesuit priest Athanasius Kircher was likely the first to see microbes, which he mentioned observing in milk and putrid material in 1658. Antonie van Leeuwenhoek is considered a father of microbiology as he observed and experimented with microscopic organisms in the 1670s, us The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. From what did the first proto-oncogenes arise? A. carcinogens B. spores C. viral infections D. bacteria Answer:
sciq-6791
multiple_choice
What are the special compartments that are surrounded by membranes inside eukaryotic cells called?
[ "organelles", "vacuoles", "ribosomes", "chloroplasts" ]
A
Relavent Documents: Document 0::: Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure. General characteristics There are two types of cells: prokaryotes and eukaryotes. Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles. Prokaryotes Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement). Eukaryotes Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str Document 1::: Cellular compartments in cell biology comprise all of the closed parts within the cytosol of a eukaryotic cell, usually surrounded by a single or double lipid layer membrane. These compartments are often, but not always, defined as membrane-bound organelles. The formation of cellular compartments is called compartmentalization. Both organelles, the mitochondria and chloroplasts (in photosynthetic organisms), are compartments that are believed to be of endosymbiotic origin. Other compartments such as peroxisomes, lysosomes, the endoplasmic reticulum, the cell nucleus or the Golgi apparatus are not of endosymbiotic origin. Smaller elements like vesicles, and sometimes even microtubules can also be counted as compartments. It was thought that compartmentalization is not found in prokaryotic cells., but the discovery of carboxysomes and many other metabolosomes revealed that prokaryotic cells are capable of making compartmentalized structures, albeit these are in most cases not surrounded by a lipid bilayer, but of pure proteinaceous built. Types In general there are 4 main cellular compartments, they are: The nuclear compartment comprising the nucleus The intercisternal space which comprises the space between the membranes of the endoplasmic reticulum (which is continuous with the nuclear envelope) Organelles (the mitochondrion in all eukaryotes and the plastid in phototrophic eukaryotes) The cytosol Function Compartments have three main roles. One is to establish physical boundaries for biological processes that enables the cell to carry out different metabolic activities at the same time. This may include keeping certain biomolecules within a region, or keeping other molecules outside. Within the membrane-bound compartments, different intracellular pH, different enzyme systems, and other differences are isolated from other organelles and cytosol. With mitochondria, the cytosol has an oxidizing environment which converts NADH to NAD+. With these cases, the Document 2::: Endoplasm generally refers to the inner (often granulated), dense part of a cell's cytoplasm. This is opposed to the ectoplasm which is the outer (non-granulated) layer of the cytoplasm, which is typically watery and immediately adjacent to the plasma membrane. The nucleus is separated from the endoplasm by the nuclear envelope. The different makeups/viscosities of the endoplasm and ectoplasm contribute to the amoeba's locomotion through the formation of a pseudopod. However, other types of cells have cytoplasm divided into endo- and ectoplasm. The endoplasm, along with its granules, contains water, nucleic acids, amino acids, carbohydrates, inorganic ions, lipids, enzymes, and other molecular compounds. It is the site of most cellular processes as it houses the organelles that make up the endomembrane system, as well as those that stand alone. The endoplasm is necessary for most metabolic activities, including cell division. The endoplasm, like the cytoplasm, is far from static. It is in a constant state of flux through intracellular transport, as vesicles are shuttled between organelles and to/from the plasma membrane. Materials are regularly both degraded and synthesized within the endoplasm based on the needs of the cell and/or organism. Some components of the cytoskeleton run throughout the endoplasm though most are concentrated in the ectoplasm - towards the cells edges, closer to the plasma membrane. The endoplasm's granules are suspended in cytosol. Granules The term granule refers to a small particle within the endoplasm, typically the secretory vesicles. The granule is the defining characteristic of the endoplasm, as they are typically not present within the ectoplasm. These offshoots of the endomembrane system are enclosed by a phospholipid bilayer and can fuse with other organelles as well as the plasma membrane. Their membrane is only semipermeable and allows them to house substances that could be harmful to the cell if they were allowed to flow fre Document 3::: Like the nucleus, whether to include the vacuole in the protoplasm concept is controversial. Terminology Besides "protoplasm", many other related terms and distinctions were used for the cell contents over time. These were as follows: Urschleim (Oken, 1802, 1809), Protoplasma (Purkinje, 1840, von Mohl, 1846), Primordialschlauch (primordial utricle, von Mohl, 1846), sarcode (Dujardin, 1835, 1841), Cytoplasma (Kölliker, 1863), Hautschicht/Körnerschicht (ectoplasm/endoplasm, Pringsheim, 1854; Hofmeister, 1867), Grundsubstanz (ground substance, Cienkowski, 1863), metaplasm/protoplasm (Hanstein, 1868), deutoplasm/protoplasm (van Beneden, 1870), bioplasm (Beale, 1872), paraplasm/protoplasm (Kupffer, 1875), inter-filar substance theory (Velten, 1876) Hyaloplasma (Pfeffer, 1877), Protoplast (Hanstein, 1880), Enchylema/Hyaloplasma (Hanstein, 1880), Kleinkörperchen or Mikrosomen (small bodies or microsomes, Hanstein, 1882), paramitome (Flemming, 1882), Idioplasma (Nageli, 1884), Zwischensu Document 4::: Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, double stranded macromolecule that carries the hereditary information of the cell and found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence. Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism. Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry. See also Cell (biology) Cell biology Biomolecule Organelle Tissue (biology) External links https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are the special compartments that are surrounded by membranes inside eukaryotic cells called? A. organelles B. vacuoles C. ribosomes D. chloroplasts Answer:
sciq-11419
multiple_choice
At how many places does points source pollution enter water?
[ "two", "four", "one", "three" ]
C
Relavent Documents: Document 0::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 1::: The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields. Description The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions. The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.” Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers. Current efforts The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 3::: Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women. The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development. Current status of girls and women in STEM education Overall trends in STEM education Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle. Learning achievement in STEM education Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and Document 4::: Tech City College (Formerly STEM Academy) is a free school sixth form located in the Islington area of the London Borough of Islington, England. It originally opened in September 2013, as STEM Academy Tech City and specialised in Science, Technology, Engineering and Maths (STEM) and the Creative Application of Maths and Science. In September 2015, STEM Academy joined the Aspirations Academy Trust was renamed Tech City College. Tech City College offers A-levels and BTECs as programmes of study for students. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. At how many places does points source pollution enter water? A. two B. four C. one D. three Answer:
sciq-11362
multiple_choice
Structural adaptations in flying animals often contribute to reduced what?
[ "eyesight", "blood flow", "body mass", "respiration" ]
C
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: A number of animals are capable of aerial locomotion, either by powered flight or by gliding. This trait has appeared by evolution many times, without any single common ancestor. Flight has evolved at least four times in separate animals: insects, pterosaurs, birds, and bats. Gliding has evolved on many more occasions. Usually the development is to aid canopy animals in getting from tree to tree, although there are other possibilities. Gliding, in particular, has evolved among rainforest animals, especially in the rainforests in Asia (most especially Borneo) where the trees are tall and widely spaced. Several species of aquatic animals, and a few amphibians and reptiles have also evolved this gliding flight ability, typically as a means of evading predators. Types Animal aerial locomotion can be divided into two categories: powered and unpowered. In unpowered modes of locomotion, the animal uses aerodynamic forces exerted on the body due to wind or falling through the air. In powered flight, the animal uses muscular power to generate aerodynamic forces to climb or to maintain steady, level flight. Those who can find air that is rising faster than they are falling can gain altitude by soaring. Unpowered These modes of locomotion typically require an animal start from a raised location, converting that potential energy into kinetic energy and using aerodynamic forces to control trajectory and angle of descent. Energy is continually lost to drag without being replaced, thus these methods of locomotion have limited range and duration. Falling: decreasing altitude under the force of gravity, using no adaptations to increase drag or provide lift. Parachuting: falling at an angle greater than 45° from the horizontal with adaptations to increase drag forces. Very small animals may be carried up by the wind. Some gliding animals may use their gliding membranes for drag rather than lift, to safely descend. Gliding flight: falling at an angle less than 45° from the horizo Document 2::: Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals. Education Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered. Bachelor degree At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs. Pre-veterinary emphasis Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th Document 3::: Tinbergen's four questions, named after 20th century biologist Nikolaas Tinbergen, are complementary categories of explanations for animal behaviour. These are also commonly referred to as levels of analysis. It suggests that an integrative understanding of behaviour must include ultimate (evolutionary) explanations, in particular: behavioural adaptive functions phylogenetic history; and the proximate explanations underlying physiological mechanisms ontogenetic/developmental history. Four categories of questions and explanations When asked about the purpose of sight in humans and animals, even elementary-school children can answer that animals have vision to help them find food and avoid danger (function/adaptation). Biologists have three additional explanations: sight is caused by a particular series of evolutionary steps (phylogeny), the mechanics of the eye (mechanism/causation), and even the process of an individual's development (ontogeny). This schema constitutes a basic framework of the overlapping behavioural fields of ethology, behavioural ecology, comparative psychology, sociobiology, evolutionary psychology, and anthropology. Julian Huxley identified the first three questions. Niko Tinbergen gave only the fourth question, as Huxley's questions failed to distinguish between survival value and evolutionary history; Tinbergen's fourth question helped resolve this problem. Evolutionary (ultimate) explanations First question: Function (adaptation) Darwin's theory of evolution by natural selection is the only scientific explanation for why an animal's behaviour is usually well adapted for survival and reproduction in its environment. However, claiming that a particular mechanism is well suited to the present environment is different from claiming that this mechanism was selected for in the past due to its history of being adaptive. The literature conceptualizes the relationship between function and evolution in two ways. On the one hand, function Document 4::: Vestigiality is the retention, during the process of evolution, of genetically determined structures or attributes that have lost some or all of the ancestral function in a given species. Assessment of the vestigiality must generally rely on comparison with homologous features in related species. The emergence of vestigiality occurs by normal evolutionary processes, typically by loss of function of a feature that is no longer subject to positive selection pressures when it loses its value in a changing environment. The feature may be selected against more urgently when its function becomes definitively harmful, but if the lack of the feature provides no advantage, and its presence provides no disadvantage, the feature may not be phased out by natural selection and persist across species. Examples of vestigial structures (also called degenerate, atrophied, or rudimentary organs) are the loss of functional wings in island-dwelling birds; the human vomeronasal organ; and the hindlimbs of the snake and whale. Overview Vestigial features may take various forms; for example, they may be patterns of behavior, anatomical structures, or biochemical processes. Like most other physical features, however functional, vestigial features in a given species may successively appear, develop, and persist or disappear at various stages within the life cycle of the organism, ranging from early embryonic development to late adulthood. Vestigiality, biologically speaking, refers to organisms retaining organs that have seemingly lost their original function. Vestigial organs are common evolutionary knowledge. In addition, the term vestigiality is useful in referring to many genetically determined features, either morphological, behavioral, or physiological; in any such context, however, it need not follow that a vestigial feature must be completely useless. A classic example at the level of gross anatomy is the human vermiform appendix, vestigial in the sense of retaining no significa The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Structural adaptations in flying animals often contribute to reduced what? A. eyesight B. blood flow C. body mass D. respiration Answer:
sciq-8008
multiple_choice
What are the two main types of air pollutants?
[ "a and b", "good and bad", "primary and secondary", "new and old" ]
C
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a Document 2::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 3::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 4::: The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered. The 1995 version has 30 five-way multiple choice questions. Example question (question 4): Gender differences The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are the two main types of air pollutants? A. a and b B. good and bad C. primary and secondary D. new and old Answer:
sciq-4019
multiple_choice
What laws regulate radiation doses to which people can be exposed?
[ "dose regulation laws", "medical regulation laws", "radiation protection laws", "voltage protection laws" ]
C
Relavent Documents: Document 0::: Radiation Protection Convention, 1960 is an International Labour Organization Convention to restrict workers from exposure of ionising radiation and to prohibit persons under 16 engaging in work that causes such exposure. (Article 6) It was established in 1960, with the preamble stating: Ratifications As of January 2023, the convention has been ratified by 50 states. External links Text. Ratifications. Health treaties Nuclear technology treaties International Labour Organization conventions Occupational safety and health treaties Radiation Treaties concluded in 1960 Treaties entered into force in 1962 Treaties of Argentina Treaties of Azerbaijan Treaties of Barbados Treaties of the Byelorussian Soviet Socialist Republic Treaties of Belgium Treaties of the military dictatorship in Brazil Treaties of Chile Treaties of Czechoslovakia Treaties of the Czech Republic Treaties of Denmark Treaties of Djibouti Treaties of Ecuador Treaties of Egypt Treaties of Finland Treaties of France Treaties of West Germany Treaties of Ghana Treaties of Greece Treaties of Guinea Treaties of Guyana Treaties of the Hungarian People's Republic Treaties of India Treaties of the Iraqi Republic (1958–1968) Treaties of Italy Treaties of Japan Treaties of South Korea Treaties of Kyrgyzstan Treaties of Latvia Treaties of Lebanon Treaties of Lithuania Treaties of Luxembourg Treaties of Mexico Treaties of the Netherlands Treaties of Nicaragua Treaties of Norway Treaties of Paraguay Treaties of the Polish People's Republic Treaties of the Soviet Union Treaties of Portugal Treaties of Slovakia Treaties of Francoist Spain Treaties of Sri Lanka Treaties of Sweden Treaties of Switzerland Treaties of Syria Treaties of Tajikistan Treaties of Turkey Treaties of the Ukrainian Soviet Socialist Republic Treaties of the United Kingdom Treaties of Uruguay 1960 in labor relations Radiation protection Document 1::: Health physics, also referred to as the science of radiation protection, is the profession devoted to protecting people and their environment from potential radiation hazards, while making it possible to enjoy the beneficial uses of radiation. Health physicists normally require a four-year bachelor’s degree and qualifying experience that demonstrates a professional knowledge of the theory and application of radiation protection principles and closely related sciences. Health physicists principally work at facilities where radionuclides or other sources of ionizing radiation (such as X-ray generators) are used or produced; these include research, industry, education, medical facilities, nuclear power, military, environmental protection, enforcement of government regulations, and decontamination and decommissioning—the combination of education and experience for health physicists depends on the specific field in which the health physicist is engaged. Sub-specialties There are many sub-specialties in the field of health physics, including Ionising radiation instrumentation and measurement Internal dosimetry and external dosimetry Radioactive waste management Radioactive contamination, decontamination and decommissioning Radiological engineering (shielding, holdup, etc.) Environmental assessment, radiation monitoring and radon evaluation Operational radiation protection/health physics Particle accelerator physics Radiological emergency response/planning - (e.g., Nuclear Emergency Support Team) Industrial uses of radioactive material Medical health physics Public information and communication involving radioactive materials Biological effects/radiation biology Radiation standards Radiation risk analysis Nuclear power Radioactive materials and homeland security Radiation protection Nanotechnology Operational health physics The subfield of operational health physics, also called applied health physics in older sources, focuses on field work and the p Document 2::: The Radioactive Substances Act 1993 (RSA93) deals with the control of radioactive material and disposal of radioactive waste in the United Kingdom. On 6 April 2010 the Environmental Permitting (England and Wales) Regulations 2010 came into force. These new regulations repeal, amend and replace much of Radioactive Substances Act 1993 in England and Wales. See also Ionising Radiations Regulations 1999 Document 3::: absorbed dose Electromagnetic radiation equivalent dose hormesis Ionizing radiation Louis Harold Gray (British physicist) rad (unit) radar radar astronomy radar cross section radar detector radar gun radar jamming (radar reflector) corner reflector radar warning receiver (Radarange) microwave oven radiance (radiant: see) meteor shower radiation Radiation absorption Radiation acne Radiation angle radiant barrier (radiation belt: see) Van Allen radiation belt Radiation belt electron Radiation belt model Radiation Belt Storm Probes radiation budget Radiation burn Radiation cancer (radiation contamination) radioactive contamination Radiation contingency Radiation damage Radiation damping Radiation-dominated era Radiation dose reconstruction Radiation dosimeter Radiation effect radiant energy Radiation enteropathy (radiation exposure) radioactive contamination Radiation flux (radiation gauge: see) gauge fixing radiation hardening (radiant heat) thermal radiation radiant heating radiant intensity radiation hormesis radiation impedance radiation implosion Radiation-induced lung injury Radiation Laboratory radiation length radiation mode radiation oncologist radiation pattern radiation poisoning (radiation sickness) radiation pressure radiation protection (radiation shield) (radiation shielding) radiation resistance Radiation Safety Officer radiation scattering radiation therapist radiation therapy (radiotherapy) (radiation treatment) radiation therapy (radiation units: see) :Category:Units of radiation dose (radiation weight factor: see) equivalent dose radiation zone radiative cooling radiative forcing radiator radio (radio amateur: see) amateur radio (radio antenna) antenna (radio) radio astronomy radio beacon (radio broadcasting: see) broadcasting radio clock (radio communications) radio radio control radio controlled airplane radio controlled car radio-controlled helicopter radio control Document 4::: The National Council on Radiation Protection and Measurements (NCRP), formerly the National Committee on Radiation Protection and Measurements, and before that the Advisory Committee on X-Ray and Radium Protection (ACXRP), is a U.S. organization. It has a congressional charter under Title 36 of the United States Code, but this does not imply any sort of oversight by Congress; NCRP is not a government entity. History The Advisory Committee on X-Ray and Radium Protection was established in 1929. Initially, the organization was an informal collective of scientists seeking to proffer accurate information and appropriate recommendations for radiation protection. In 1946, the organization changed its name to the National Committee on Radiation Protection and Measurements. In 1964, the U.S. Congress reorganized and chartered the organization as the National Council on Radiation Protection and Measurements. NCRP Presidents Lauriston S. Taylor (1929 to 1977); William K. Sinclair (1977 to 1991); Charles B. Meinhold (1991 to 2002); Thomas S. Tenforde (2002 to 2012); John D. Boice, Jr. (2012 to 2019); Kathryn D. Held (2019 to present) Executive Directors W. Roger Ney (1964 to 1997); William M. Beckner (1997 to 2004); David A. Schauer (2004 to 2012); James R. Cassata (2012 to 2014); David A. Smith (2014 to 2016); Kathryn D. Held (2016 to 2019) The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What laws regulate radiation doses to which people can be exposed? A. dose regulation laws B. medical regulation laws C. radiation protection laws D. voltage protection laws Answer:
sciq-6571
multiple_choice
Defined as total distance traveled divided by elapsed speed, average speed is a scalar quantity that does not include what?
[ "pressure", "direction", "shift", "size" ]
B
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 2::: Velocity is the speed in combination with the direction of motion of an object. Velocity is a fundamental concept in kinematics, the branch of classical mechanics that describes the motion of bodies. Velocity is a physical vector quantity: both magnitude and direction are needed to define it. The scalar absolute value (magnitude) of velocity is called , being a coherent derived unit whose quantity is measured in the SI (metric system) as metres per second (m/s or m⋅s−1). For example, "5 metres per second" is a scalar, whereas "5 metres per second east" is a vector. If there is a change in speed, direction or both, then the object is said to be undergoing an acceleration. Constant velocity vs acceleration To have a constant velocity, an object must have a constant speed in a constant direction. Constant direction constrains the object to motion in a straight path thus, a constant velocity means motion in a straight line at a constant speed. For example, a car moving at a constant 20 kilometres per hour in a circular path has a constant speed, but does not have a constant velocity because its direction changes. Hence, the car is considered to be undergoing an acceleration. Difference between speed and velocity While the terms speed and velocity are often colloquially used interchangeably to connote how fast an object is moving, in scientific terms they are different. Speed, the scalar magnitude of a velocity vector, denotes only how fast an object is moving, while velocity indicates both an objects speed and direction. Equation of motion Average velocity Velocity is defined as the rate of change of position with respect to time, which may also be referred to as the instantaneous velocity to emphasize the distinction from the average velocity. In some applications the average velocity of an object might be needed, that is to say, the constant velocity that would provide the same resultant displacement as a variable velocity in the same time interval, , over some Document 3::: Progress tests are longitudinal, feedback oriented educational assessment tools for the evaluation of development and sustainability of cognitive knowledge during a learning process. A progress test is a written knowledge exam (usually involving multiple choice questions) that is usually administered to all students in the "A" program at the same time and at regular intervals (usually twice to four times yearly) throughout the entire academic program. The test samples the complete knowledge domain expected of new graduates upon completion of their courses, regardless of the year level of the student). The differences between students’ knowledge levels show in the test scores; the further a student has progressed in the curriculum the higher the scores. As a result, these resultant scores provide a longitudinal, repeated measures, curriculum-independent assessment of the objectives (in knowledge) of the entire programme. History Since its inception in the late 1970s at both Maastricht University and the University of Missouri–Kansas City independently, the progress test of applied knowledge has been increasingly used in medical and health sciences programs across the globe. They are well established and increasingly used in medical education in both undergraduate and postgraduate medical education. They are used formatively and summatively. Use in academic programs The progress test is currently used by national progress test consortia in the United Kingdom, Italy, The Netherlands, in Germany (including Austria), and in individual schools in Africa, Saudi Arabia, South East Asia, the Caribbean, Australia, New Zealand, Sweden, Finland, UK, and the USA. The National Board of Medical Examiners in the USA also provides progress testing in various countries The feasibility of an international approach to progress testing has been recently acknowledged and was first demonstrated by Albano et al. in 1996, who compared test scores across German, Dutch and Italian medi Document 4::: The Texas Math and Science Coaches Association or TMSCA is an organization for coaches of academic University Interscholastic League teams in Texas middle schools and high schools, specifically those that compete in mathematics and science-related tests. Events There are four events in the TMSCA at both the middle and high school level: Number Sense, General Mathematics, Calculator Applications, and General Science. Number Sense is an 80-question exam that students are given only 10 minutes to solve. Additionally, no scratch work or paper calculations are allowed. These questions range from simple calculations such as 99+98 to more complicated operations such as 1001×1938. Each calculation is able to be done with a certain trick or shortcut that makes the calculations easier. The high school exam includes calculus and other difficult topics in the questions also with the same rules applied as to the middle school version. It is well known that the grading for this event is particularly stringent as errors such as writing over a line or crossing out potential answers are considered as incorrect answers. General Mathematics is a 50-question exam that students are given only 40 minutes to solve. These problems are usually more challenging than questions on the Number Sense test, and the General Mathematics word problems take more thinking to figure out. Every problem correct is worth 5 points, and for every problem incorrect, 2 points are deducted. Tiebreakers are determined by the person that misses the first problem and by percent accuracy. Calculator Applications is an 80-question exam that students are given only 30 minutes to solve. This test requires practice on the calculator, knowledge of a few crucial formulas, and much speed and intensity. Memorizing formulas, tips, and tricks will not be enough. In this event, plenty of practice is necessary in order to master the locations of the keys and develop the speed necessary. All correct questions are worth 5 The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Defined as total distance traveled divided by elapsed speed, average speed is a scalar quantity that does not include what? A. pressure B. direction C. shift D. size Answer:
sciq-9046
multiple_choice
For any given species, what term means the maximum population that can be supported by the environment?
[ "tipping point", "mass extinction", "carrying capacity", "zero population growth" ]
C
Relavent Documents: Document 0::: The carrying capacity of an environment is the maximum population size of a biological species that can be sustained by that specific environment, given the food, habitat, water, and other resources available. The carrying capacity is defined as the environment's maximal load, which in population ecology corresponds to the population equilibrium, when the number of deaths in a population equals the number of births (as well as immigration and emigration). The effect of carrying capacity on population dynamics is modelled with a logistic function. Carrying capacity is applied to the maximum population an environment can support in ecology, agriculture and fisheries. The term carrying capacity has been applied to a few different processes in the past before finally being applied to population limits in the 1950s. The notion of carrying capacity for humans is covered by the notion of sustainable population. At the global scale, scientific data indicates that humans are living beyond the carrying capacity of planet Earth and that this cannot continue indefinitely. This scientific evidence comes from many sources worldwide. It was presented in detail in the Millennium Ecosystem Assessment of 2005, a collaborative effort involving more than 1,360 experts worldwide. More recent, detailed accounts are provided by ecological footprint accounting, and interdisciplinary research on planetary boundaries to safe human use of the biosphere. The Sixth Assessment Report on Climate Change from the IPCC and the First Assessment Report on Biodiversity and Ecosystem Services by the IPBES, large international summaries of the state of scientific knowledge regarding climate disruption and biodiversity loss, also support this view. An early detailed examination of global limits was published in the 1972 book Limits to Growth, which has prompted follow-up commentary and analysis. A 2012 review in Nature by 22 international researchers expressed concerns that the Earth may be "approaching Document 1::: Overpopulation or overabundance is a phenomenon in which a species' population becomes larger than the carrying capacity of its environment. This may be caused by increased birth rates, lowered mortality rates, reduced predation or large scale migration, leading to an overabundant species and other animals in the ecosystem competing for food, space, and resources. The animals in an overpopulated area may then be forced to migrate to areas not typically inhabited, or die off without access to necessary resources. Judgements regarding overpopulation always involve both facts and values. Animals often are judged overpopulated when their numbers cause impacts that people find dangerous, damaging, expensive, or otherwise harmful. Societies may be judged overpopulated when their human numbers cause impacts that degrade ecosystem services, decrease human health and well-being, or crowd other species out of existence. Background In ecology, overpopulation is a concept used primarily in wildlife management. Typically, an overpopulation causes the entire population of the species in question to become weaker, as no single individual is able to find enough food or shelter. As such, overpopulation is thus characterized by an increase in the diseases and parasite-load which live upon the species in question, as the entire population is weaker. Other characteristics of overpopulation are lower fecundity, adverse effects on the environment (soil, vegetation or fauna) and lower average body weights. Especially the worldwide increase of deer populations, which usually show irruptive growth, is proving to be of ecological concern. Ironically, where ecologists were preoccupied with conserving or augmenting deer populations only a century ago, the focus has now shifted in the direct opposite, and ecologists are now more concerned with limiting the populations of such animals. Supplemental feeding of charismatic species or interesting game species is a major problem in causing overp Document 2::: The term population biology has been used with different meanings. In 1971 Edward O. Wilson et al. used the term in the sense of applying mathematical models to population genetics, community ecology, and population dynamics. Alan Hastings used the term in 1997 as the title of his book on the mathematics used in population dynamics. The name was also used for a course given at UC Davis in the late 2010s, which describes it as an interdisciplinary field combining the areas of ecology and evolutionary biology. The course includes mathematics, statistics, ecology, genetics, and systematics. Numerous types of organisms are studied. The journal Theoretical Population Biology is published. See also Document 3::: Biodiversity loss includes the worldwide extinction of different species, as well as the local reduction or loss of species in a certain habitat, resulting in a loss of biological diversity. The latter phenomenon can be temporary or permanent, depending on whether the environmental degradation that leads to the loss is reversible through ecological restoration/ecological resilience or effectively permanent (e.g. through land loss). The current global extinction (frequently called the sixth mass extinction or Anthropocene extinction), has resulted in a biodiversity crisis being driven by human activities which push beyond the planetary boundaries and so far has proven irreversible. The main direct threats to conservation (and thus causes for biodiversity loss) fall in eleven categories: Residential and commercial development; farming activities; energy production and mining; transportation and service corridors; biological resource usages; human intrusions and activities that alter, destroy, disturb habitats and species from exhibiting natural behaviors; natural system modification; invasive and problematic species, pathogens and genes; pollution; catastrophic geological events, climate change, and so on. Numerous scientists and the IPBES Global Assessment Report on Biodiversity and Ecosystem Services assert that human population growth and overconsumption are the primary factors in this decline. However other scientists have criticized this, saying that loss of habitat is caused mainly by "the growth of commodities for export" and that population has very little to do with overall consumption, due to country wealth disparities. Climate change is another threat to global biodiversity. For example, coral reefs – which are biodiversity hotspots – will be lost within the century if global warming continues at the current rate. However, habitat destruction e.g. for the expansion of agriculture, is currently the more significant driver of contemporary biodiversity lo Document 4::: Minimum viable population (MVP) is a lower bound on the population of a species, such that it can survive in the wild. This term is commonly used in the fields of biology, ecology, and conservation biology. MVP refers to the smallest possible size at which a biological population can exist without facing extinction from natural disasters or demographic, environmental, or genetic stochasticity. The term "population" is defined as a group of interbreeding individuals in similar geographic area that undergo negligible gene flow with other groups of the species. Typically, MVP is used to refer to a wild population, but can also be used for ex-situ conservation (Zoo populations). Estimation There is no unique definition of what constitutes a sufficient population for the continuation of a species, because whether a species survives will depend to some extent on random events. Thus, any calculation of a minimum viable population (MVP) will depend on the population projection model used. A set of random (stochastic) projections might be used to estimate the initial population size needed (based on the assumptions in the model) for there to be, (for example) a 95% or 99% probability of survival 1,000 years into the future. Some models use generations as a unit of time rather than years in order to maintain consistency between taxa. These projections (population viability analyses, or PVA) use computer simulations to model populations using demographic and environmental information to project future population dynamics. The probability assigned to a PVA is arrived at after repeating the environmental simulation thousands of times. Extinction Small populations are at a greater risk of extinction than larger populations due to small populations having less capacity to recover from adverse stochastic (i.e. random) events. Such events may be divided into four sources: Demographic stochasticity Demographic stochasticity is often only a driving force toward extinction in po The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. For any given species, what term means the maximum population that can be supported by the environment? A. tipping point B. mass extinction C. carrying capacity D. zero population growth Answer:
sciq-3888
multiple_choice
The standard reduction potential can be determined by subtracting the standard reduction potential for the reaction occurring at the anode from the standard reduction potential for the reaction occurring at this?
[ "Spinner", "electrode", "plasma", "cathode" ]
D
Relavent Documents: Document 0::: The values below are standard apparent reduction potentials for electro-biochemical half-reactions measured at 25 °C, 1 atmosphere and a pH of 7 in aqueous solution. The actual physiological potential depends on the ratio of the reduced () and oxidized () forms according to the Nernst equation and the thermal voltage. When an oxidizer () accepts a number z of electrons () to be converted in its reduced form (), the half-reaction is expressed as: + z → The reaction quotient (r) is the ratio of the chemical activity (ai) of the reduced form (the reductant, aRed) to the activity of the oxidized form (the oxidant, aox). It is equal to the ratio of their concentrations (Ci) only if the system is sufficiently diluted and the activity coefficients (γi) are close to unity (ai = γi Ci): The Nernst equation is a function of and can be written as follows: At chemical equilibrium, the reaction quotient of the product activity (aRed) by the reagent activity (aOx) is equal to the equilibrium constant () of the half-reaction and in the absence of driving force () the potential () also becomes nul. The numerically simplified form of the Nernst equation is expressed as: Where is the standard reduction potential of the half-reaction expressed versus the standard reduction potential of hydrogen. For standard conditions in electrochemistry (T = 25 °C, P = 1 atm and all concentrations being fixed at 1 mol/L, or 1 M) the standard reduction potential of hydrogen is fixed at zero by convention as it serves of reference. The standard hydrogen electrode (SHE), with [] = 1 M works thus at a pH = 0. At pH = 7, when [] = 10−7 M, the reduction potential of differs from zero because it depends on pH. Solving the Nernst equation for the half-reaction of reduction of two protons into hydrogen gas gives: In biochemistry and in biological fluids, at pH = 7, it is thus important to note that the reduction potential of the protons () into hydrogen gas is no longer zero Document 1::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with Document 2::: In electrochemistry, the Nernst equation is a chemical thermodynamical relationship that permits the calculation of the reduction potential of a reaction (half-cell or full cell reaction) from the standard electrode potential, absolute temperature, the number of electrons involved in the redox reaction, and activities (often approximated by concentrations) of the chemical species undergoing reduction and oxidation respectively. It was named after Walther Nernst, a German physical chemist who formulated the equation. Expression General form with chemical activities When an oxidizer () accepts a number z of electrons () to be converted in its reduced form (), the half-reaction is expressed as: + z → The reaction quotient (), also often called the ion activity product (IAP), is the ratio between the chemical activities (a) of the reduced form (the reductant, ) and the oxidized form (the oxidant, ). The chemical activity of a dissolved species corresponds to its true thermodynamic concentration taking into account the electrical interactions between all ions present in solution at elevated concentrations. For a given dissolved species, its chemical activity (a) is the product of its activity coefficient (γ) by its molar (mol/L solution), or molal (mol/kg water), concentration (C): a = γ C. So, if the concentration (C, also denoted here below with square brackets [ ]) of all the dissolved species of interest are sufficiently low and that their activity coefficients are close to unity, their chemical activities can be approximated by their concentrations as commonly done when simplifying, or idealizing, a reaction for didactic purposes: At chemical equilibrium, the ratio of the activity of the reaction product (aRed) by the reagent activity (aOx) is equal to the equilibrium constant of the half-reaction: The standard thermodynamics also says that the actual Gibbs free energy is related to the free energy change under standard state by the relati Document 3::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 4::: In electrochemistry, and more generally in solution chemistry, a Pourbaix diagram, also known as a potential/pH diagram, EH–pH diagram or a pE/pH diagram, is a plot of possible thermodynamically stable phases (i.e., at chemical equilibrium) of an aqueous electrochemical system. Boundaries (50 %/50 %) between the predominant chemical species (aqueous ions in solution, or solid phases) are represented by lines. As such a Pourbaix diagram can be read much like a standard phase diagram with a different set of axes. Similarly to phase diagrams, they do not allow for reaction rate or kinetic effects. Beside potential and pH, the equilibrium concentrations are also dependent upon, e.g., temperature, pressure, and concentration. Pourbaix diagrams are commonly given at room temperature, atmospheric pressure, and molar concentrations of 10−6 and changing any of these parameters will yield a different diagram. The diagrams are named after Marcel Pourbaix (1904–1998), the Russian-born Belgian chemist who invented them. Naming Pourbaix diagrams are also known as EH-pH diagrams due to the labeling of the two axes. Diagram The vertical axis is labeled EH for the voltage potential with respect to the standard hydrogen electrode (SHE) as calculated by the Nernst equation. The "H" stands for hydrogen, although other standards may be used, and they are for room temperature only. For a reversible redox reaction described by the following chemical equilibrium: With the corresponding equilibrium constant : The Nernst equation is: sometimes formulated as: or, more simply directly expressed numerically as: where:  volt is the thermal voltage or the "Nernst slope" at standard temperature λ = ln(10) ≈ 2.30, so that  volt. The horizontal axis is labeled pH for the −log function of the H+ ion activity. The lines in the Pourbaix diagram show the equilibrium conditions, that is, where the activities are equal, for the species on each side of that line. O The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The standard reduction potential can be determined by subtracting the standard reduction potential for the reaction occurring at the anode from the standard reduction potential for the reaction occurring at this? A. Spinner B. electrode C. plasma D. cathode Answer:
sciq-829
multiple_choice
Which forests are found throughout the ocean in temperate and arctic climates?
[ "coral reefs", "kelp", "mangrove", "cedar" ]
B
Relavent Documents: Document 0::: A temperate forest is a forest found between the tropical and boreal regions, located in the temperate zone. It is the second largest biome on our planet, covering 25% of the world's forest area, only behind the boreal forest, which covers about 33%. These forests cover both hemispheres at latitudes ranging from 25 to 50 degrees, wrapping the planet in a belt similar to that of the boreal forest. Due to its large size spanning several continents, there are several main types: deciduous, coniferous, mixed forest, and rainforest. Climate The climate of a temperate forest is highly variable depending on the location of the forest. For example, Los Angeles and Vancouver, Canada are both considered to be located in a temperate zone, however, Vancouver is located in a temperate rainforest, while Los Angeles is a relatively dry subtropical climate. Types of temperate forest Deciduous They are found in Europe, East Asia, North America, and in some parts of South America. Deciduous forests are composed mainly of broadleaf trees, such as maple and oak, that shed all their leaves during one season. They are typically found in three middle-latitude regions with temperate climates characterized by a winter season and year-round precipitation: eastern North America, western Eurasia and northeastern Asia. Coniferous Coniferous forests are composed of needle-leaved evergreen trees, such as pine or fir. Evergreen forests are typically found in regions with moderate climates. Boreal forests, however, are an exception as they are found in subarctic regions. Coniferous trees often have an advantage over broadleaf trees in harsher environments. Their leaves are typically hardier and longer lived but require more energy to grow. Mixed As the name implies, conifers and broadleaf trees grow in the same area. The main trees found in these forests in North America and Eurasia include fir, oak, ash, maple, birch, beech, poplar, elm and pine. Other plant species may include magnolia, Document 1::: Temperate coniferous forest is a terrestrial biome defined by the World Wide Fund for Nature. Temperate coniferous forests are found predominantly in areas with warm summers and cool winters, and vary in their kinds of plant life. In some, needleleaf trees dominate, while others are home primarily to broadleaf evergreen trees or a mix of both tree types. A separate habitat type, the tropical coniferous forests, occurs in more tropical climates. Temperate coniferous forests are common in the coastal areas of regions that have mild winters and heavy rainfall, or inland in drier climates or montane areas. Many species of trees inhabit these forests including pine, cedar, fir, and redwood. The understory also contains a wide variety of herbaceous and shrub species. Temperate coniferous forests sustain the highest levels of biomass in any terrestrial ecosystem and are notable for trees of massive proportions in temperate rainforest regions. Structurally, these forests are rather simple, consisting of 2 layers generally: an overstory and understory. However, some forests may support a layer of shrubs. Pine forests support an herbaceous ground layer that may be dominated by grasses and forbs that lend themselves to ecologically important wildfires. In contrast, the moist conditions found in temperate rain forests favor the dominance by ferns and some forbs. Forest communities dominated by huge trees (e.g., giant sequoia, Sequoiadendron gigantea; redwood, Sequoia sempervirens), unusual ecological phenomena, occur in western North America, southwestern South America, as well as in the Australasian region in such areas as southeastern Australia and northern New Zealand. The Klamath-Siskiyou ecoregion of western North America harbors diverse and unusual assemblages and displays notable endemism for a number of plant and animal taxa. Ecoregions Eurasia North America See also Cedar hemlock douglas-fir forest Temperate deciduous forest Document 2::: Polar ecology is the relationship between plants and animals in a polar environment. Polar environments are in the Arctic and Antarctic regions. Arctic regions are in the Northern Hemisphere, and it contains land and the islands that surrounds it. Antarctica is in the Southern Hemisphere and it also contains the land mass, surrounding islands and the ocean. Polar regions also contain the subantarctic and subarctic zone which separate the polar regions from the temperate regions. Antarctica and the Arctic lie in the polar circles. The polar circles are imaginary lines shown on maps to be the areas that receives less sunlight due to less radiation. These areas either receive sunlight (midnight sun) or shade (polar night) 24 hours a day because of the earth's tilt. Plants and animals in the polar regions are able to withstand living in harsh weather conditions but are facing environmental threats that limit their survival. Climate Polar climates are cold, windy and dry. Because of the lack of precipitation and low temperatures the Arctic and Antarctic are considered the world's largest deserts or Polar deserts. Much of the radiation from the sun that is received is reflected off the snow making the polar regions cold. When the radiation is reflected, the heat is also reflected. The polar regions reflect 89-90% of the sun radiation that the earth receives. And because Antarctica is closer to the sun at perihelion, it receives 7% more radiation than the Arctic. Also in the polar region, the atmosphere is thin. Because of this the UV radiation that gets to the atmosphere can cause fast sun tanning and snow blindness. Polar regions are dry areas; there is very little precipitation due to the cold air. There are some times when the humidity may be high but the water vapor present in the air may be low. Wind is also strong in the polar region. Wind carries snow creating blizzard like conditions. Winds may also move small organisms or vegetation if it is present. The wind Document 3::: Tropical and subtropical moist broadleaf forests (TSMF), also known as tropical moist forest, is a subtropical and tropical forest habitat type defined by the World Wide Fund for Nature. Description TSMF is generally found in large, discontinuous patches centered on the equatorial belt and between the Tropic of Cancer and Tropic of Capricorn, TSMF are characterized by low variability in annual temperature and high levels of rainfall of more than annually. Forest composition is dominated by evergreen and semi-deciduous tree species. These forests are home to more species than any other terrestrial ecosystem on Earth: Half of the world's species may live in these forests, where a square kilometer may be home to more than 1,000 tree species. These forests are found around the world, particularly in the Indo-Malayan Archipelago, the Amazon Basin, and the African Congo Basin. The perpetually warm, wet climate makes these environments more productive than any other terrestrial environment on Earth and promotes explosive plant growth. A tree here may grow over in height in just 5 years. From above, the forest appears as an unending sea of green, broken only by occasional, taller "emergent" trees. These towering emergents are the realm of hornbills, toucans, and the harpy eagle. In general, biodiversity is highest in the forest canopy. The canopy can be divided into five layers: overstory canopy with emergent crowns, a medium layer of canopy, lower canopy, shrub level, and finally understory. The canopy is home to many of the forest's animals, including apes and monkeys. Below the canopy, a lower understory hosts to snakes and big cats. The forest floor, relatively clear of undergrowth due to the thick canopy above, is prowled by other animals such as gorillas and deer. All levels of these forests contain an unparalleled diversity of invertebrate species, including New Guinea’s stick insects and butterflies that can grow over in length. Many forests are being cl Document 4::: The Gulf of St. Lawrence lowland forests are a temperate broadleaf and mixed forest ecoregion of Eastern Canada, as defined by the World Wildlife Fund (WWF) categorization system. Setting Located on the Gulf of Saint Lawrence, the world's largest estuary, this ecoregion covers all of Prince Edward Island, the Les Îles-de-la-Madeleine of Quebec, most of east-central New Brunswick, the Annapolis Valley, Minas Basin and the Northumberland Strait coast of Nova Scotia. This area has a coastal climate of warm summers and cold and snowy winters with an average annual temperature of around 5 °C going up to 15 °C in summer, the coast is warmer than the islands or the sheltered inland valleys. Flora The colder climate allows more hardwood trees to grow in the Gulf of St Lawrence than in most of this part of northeast North America. Trees of the region include eastern hemlock (Tsuga canadensis), balsam fir (Abies balsamea), American elm (Ulmus americana), black ash (Fraxinus nigra), eastern white pine (Pinus strobus), red maple, (Acer rubrum) northern red oak (Quercus rubra), black spruce (Picea mariana), red spruce (Picea rubens) and white spruce (Picea glauca). Fauna The forests are home to a variety of wildlife including American black bear (Ursus americanus), moose (Alces alces), white-tailed deer (Odocoileus virginianus), red fox (Vulpes vulpes), snowshoe hare (Lepus americanus), North American porcupine (Erithyzon dorsatum), fisher (Martes pennanti), North American beaver (Castor canadensis), bobcat (Lynx rufus), American marten (Martes americana), raccoon (Procyon lotor) and muskrat (Ondatra zibethica). The area is habitat for maritime ringlet butterflies (Coenonympha nipisiquit) and other invertebrates. Birds include many seabirds, a large colony of great blue heron (Ardea herodias), the largest remaining population of the endangered piping plover and one of the largest colonies of double-crested cormorant (Phalacrocorax auritus) in the world. Threats and preserva The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which forests are found throughout the ocean in temperate and arctic climates? A. coral reefs B. kelp C. mangrove D. cedar Answer:
sciq-1331
multiple_choice
The major salivary enzyme is called?
[ "sucrase", "amylase", "mucosa", "synthase" ]
B
Relavent Documents: Document 0::: Saliva (commonly referred to as spit) is an extracellular fluid produced and secreted by salivary glands in the mouth. In humans, saliva is around 99% water, plus electrolytes, mucus, white blood cells, epithelial cells (from which DNA can be extracted), enzymes (such as lipase and amylase), antimicrobial agents (such as secretory IgA, and lysozymes). The enzymes found in saliva are essential in beginning the process of digestion of dietary starches and fats. These enzymes also play a role in breaking down food particles entrapped within dental crevices, thus protecting teeth from bacterial decay. Saliva also performs a lubricating function, wetting food and permitting the initiation of swallowing, and protecting the oral mucosa from drying out. Various animal species have special uses for saliva that go beyond predigestion. Some swifts use their gummy saliva to build nests. Aerodramus nests form the basis of bird's nest soup. Cobras, vipers, and certain other members of the venom clade hunt with venomous saliva injected by fangs. Some caterpillars produce silk fiber from silk proteins stored in modified salivary glands (which are unrelated to the vertebrate ones). Composition Produced in salivary glands, human saliva comprises 99.5% water, but also contains many important substances, including electrolytes, mucus, antibacterial compounds and various enzymes. Medically, constituents of saliva can noninvasively provide important diagnostic information related to oral and systemic diseases. Water: 99.5% Electrolytes: 2–21 mmol/L sodium (lower than blood plasma) 10–36 mmol/L potassium (higher than plasma) 1.2–2.8 mmol/L calcium (similar to plasma) 0.08–0.5 mmol/L magnesium 5–40 mmol/L chloride (lower than plasma) 25 mmol/L bicarbonate (higher than plasma) 1.4–39 mmol/L phosphate Iodine (mmol/L concentration is usually higher than plasma, but dependent variable according to dietary iodine intake) Mucus (mucus in saliva mainly consists of mucopolysacchari Document 1::: The salivary glands in many vertebrates including mammals are exocrine glands that produce saliva through a system of ducts. Humans have three paired major salivary glands (parotid, submandibular, and sublingual), as well as hundreds of minor salivary glands. Salivary glands can be classified as serous, mucous, or seromucous (mixed). In serous secretions, the main type of protein secreted is alpha-amylase, an enzyme that breaks down starch into maltose and glucose, whereas in mucous secretions, the main protein secreted is mucin, which acts as a lubricant. In humans, 1200 to 1500 ml of saliva are produced every day. The secretion of saliva (salivation) is mediated by parasympathetic stimulation; acetylcholine is the active neurotransmitter and binds to muscarinic receptors in the glands, leading to increased salivation. A proposed fourth pair of salivary glands, the tubarial glands, were first identified in 2020. They are named for their location, being positioned in front of and over the torus tubarius. However, this finding from one study is yet to be confirmed. Structure The salivary glands are detailed below: Parotid glands The two parotid glands are major salivary glands wrapped around the mandibular ramus in humans. These are largest of the salivary glands, secreting saliva to facilitate mastication and swallowing, and amylase to begin the digestion of starches. It is the serous type of gland which secretes alpha-amylase (also known as ptyalin). It enters the oral cavity via the parotid duct. The glands are located posterior to the mandibular ramus and anterior to the mastoid process of the temporal bone. They are clinically relevant in dissections of facial nerve branches while exposing the different lobes, since any iatrogenic lesion will result in either loss of action or strength of muscles involved in facial expression. They produce 20% of the total salivary content in the oral cavity. Mumps is a viral infection, caused by infection in the parotid Document 2::: The human AMY1C gene encodes the protein Amylase, alpha 1C (salivary). Amylases are secreted proteins that hydrolyze 1,4-alpha-glucoside bonds in oligosaccharides and polysaccharides, and thus catalyze the first step in digestion of dietary starch and glycogen. The human genome has a cluster of several amylase genes that are expressed at high levels in either the salivary gland or pancreas. This gene encodes an amylase isoenzyme produced by the salivary gland. See also . Document 3::: This is a list of articles that describe particular biomolecules or types of biomolecules. A For substances with an A- or α- prefix such as α-amylase, please see the parent page (in this case Amylase). A23187 (Calcimycin, Calcium Ionophore) Abamectine Abietic acid Acetic acid Acetylcholine Actin Actinomycin D Adenine Adenosmeme Adenosine diphosphate (ADP) Adenosine monophosphate (AMP) Adenosine triphosphate (ATP) Adenylate cyclase Adiponectin Adonitol Adrenaline, epinephrine Adrenocorticotropic hormone (ACTH) Aequorin Aflatoxin Agar Alamethicin Alanine Albumins Aldosterone Aleurone Alpha-amanitin Alpha-MSH (Melaninocyte stimulating hormone) Allantoin Allethrin α-Amanatin, see Alpha-amanitin Amino acid Amylase (also see α-amylase) Anabolic steroid Anandamide (ANA) Androgen Anethole Angiotensinogen Anisomycin Antidiuretic hormone (ADH) Anti-Müllerian hormone (AMH) Arabinose Arginine Argonaute Ascomycin Ascorbic acid (vitamin C) Asparagine Aspartic acid Asymmetric dimethylarginine ATP synthase Atrial-natriuretic peptide (ANP) Auxin Avidin Azadirachtin A – C35H44O16 B Bacteriocin Beauvericin beta-Hydroxy beta-methylbutyric acid beta-Hydroxybutyric acid Bicuculline Bilirubin Biopolymer Biotin (Vitamin H) Brefeldin A Brassinolide Brucine Butyric acid C Document 4::: Mucous gland, also known as muciparous glands, are found in several different parts of the body, and they typically stain lighter than serous glands during standard histological preparation. Most are multicellular, but goblet cells are single-celled glands. Mucous salivary glands The mucous salivary glands are similar in structure to the buccal and labial glands. They are found especially at the back part behind the vallate papillae, but are also present at the apex and marginal parts. In this connection the anterior lingual glands require special notice. They are situated on the under surface of the apex of the tongue, one on either side of the frenulum, where they are covered by a fascicle of muscular fibers derived from the styloglossus and inferior longitudinal muscles. They produce a glycoprotein, mucin that absorbs water to form a sticky secretion called mucus. They are from 12 to 25 mm. long, and about 8 mm. broad, and each opens by three or four ducts on the under surface of the apex. The Weber's glands are an example of muciparous glands located along the tongue. See also Mucus Gland Exocrine gland Weber's glands The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The major salivary enzyme is called? A. sucrase B. amylase C. mucosa D. synthase Answer:
sciq-6246
multiple_choice
Radon (rn) is a radioactive gas formed by the decay of naturally occurring uranium in rocks such as granite. it tends to collect in the basements of houses and poses a significant health risk if present in indoor air. many states now require that houses be tested for radon before they are what?
[ "modified", "built", "seen", "sold" ]
D
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: The International Radon Project (IRP) is a World Health Organization initiative to reduce the lung cancer risk around the world. The IRP released their guidance to member countries in September 2009. Exposure to radon in the home and workplace is one of the main risks of ionizing radiation causing tens of thousands of deaths from lung cancer each year globally. In order to reduce this burden it is important that national authorities have methods and tools based on solid scientific evidence and sound public health policy. The public needs to be aware of radon risks and the means to reduce and prevent these. In 1996, WHO published a report containing several conclusions and recommendations covering the scientific understanding of radon risk and the need for countries to take action in the areas of risk management and risk communication. Recent findings from case-control studies on lung cancer and exposure to radon in homes completed in many countries allow for substantial improvement in risk estimates and for further consolidation of knowledge by pooling data from these studies. The consistency of the findings from the latest pooled analyses of case-control studies from Europe and North America as well as China provides a strong argument for an international initiative to reduce indoor radon risks. To fulfill these goals, WHO has developed a program on public health aspects of radon exposure. This project enjoys high priority with WHO's Department of Public Health and Environment. The key elements of the International Radon Project include: Estimation of the global burden of disease (GBD) associated with exposure to radon, based on the establishment of a global radon database Provision of guidance on methods for radon measurements and mitigation Developing evidence-based public health guidance for Member States to formulate policy and advocacy strategy including the establishment of radon action levels Development of approaches for radon risk communication. Document 2::: The Supervising Scientist is a statutory office under Australian law, originally created to assist in the monitoring of what was then one of the world's largest uranium mines, the Ranger Uranium Mine. It now provides advice more generally on a 'wide range of scientific matters and mining-related environmental issues of national importance, including; radiological matters and tropical wetlands conservation and management'. The Supervising Scientist is administered as a division within the Department of the Environment, Water, Heritage and the Arts. See also Uranium mining in Australia Uranium mining in Kakadu National Park Document 3::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 4::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Radon (rn) is a radioactive gas formed by the decay of naturally occurring uranium in rocks such as granite. it tends to collect in the basements of houses and poses a significant health risk if present in indoor air. many states now require that houses be tested for radon before they are what? A. modified B. built C. seen D. sold Answer:
sciq-7930
multiple_choice
According to which process, sublevels and orbitals are filled with electrons in order of increasing energy?
[ "particle dynamics", "hausen", "aufbau", "Schrodinger's cat" ]
C
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: The study of electromagnetism in higher education, as a fundamental part of both physics and engineering, is typically accompanied by textbooks devoted to the subject. The American Physical Society and the American Association of Physics Teachers recommend a full year of graduate study in electromagnetism for all physics graduate students. A joint task force by those organizations in 2006 found that in 76 of the 80 US physics departments surveyed, a course using John David Jackson's Classical Electrodynamics was required for all first year graduate students. For undergraduates, there are several widely used textbooks, including David Griffiths' Introduction to Electrodynamics and Electricity and Magnetism by Edward Mills Purcell and D. J. Morin. Also at an undergraduate level, Richard Feynman's classic The Feynman Lectures on Physics is available online to read for free. Undergraduate There are several widely used undergraduate textbooks in electromagnetism, including David Griffiths' Introduction to Electrodynamics as well as Electricity and Magnetism by Edward Mills Purcell and D. J. Morin. The Feynman Lectures on Physics also include a volume on electromagnetism that is available to read online for free, through the California Institute of Technology. In addition, there are popular physics textbooks that include electricity and magnetism among the material they cover, such as David Halliday and Robert Resnick's Fundamentals of Physics. Graduate A 2006 report by a joint taskforce between the American Physical Society and the American Association of Physics Teachers found that 76 of the 80 physics departments surveyed require a first-year graduate course in John David Jackson's Classical Electrodynamics. This made Jackson's book the most popular textbook in any field of graduate-level physics, with Herbert Goldstein's Classical Mechanics as the second most popular with adoption at 48 universities. In a 2015 review of Andrew Zangwill's Modern Electrodynamics in Document 2::: Introduction to Elementary Particles, by David Griffiths, is an introductory textbook that describes an accessible "coherent and unified theoretical structure" of particle physics, appropriate for advanced undergraduate physics students. It was originally published in 1987, and the second revised and enlarged edition was published 2008. Content (2nd edition) Table of contents History and Overview Chapter 1: Historical Introduction to the Elementary Particles Chapter 2: Elementary Particle Dynamics Chapter 3: Relative Kinematics Chapter 4: Symmetries Chapter 5: Bound States Quantitative Formulation of Particle Dynamics Chapter 6: The Feynman Calculus Chapter 7: Quantum Electrodynamics Chapter 8: Electrodynamics of Quarks and Hadrons Chapter 9: Quantum Chromodynamics Chapter 10: Weak Interactions Chapter 11: Gauge Theories Appendices Appendix A: The Dirac Delta Function Appendix B: Decay Rates and Cross Sections Appendix C: Pauli and Dirac Matrices Appendix D: Feynman Rules New content in the second addition includes "neutrino oscillations and prospects for physics beyond the Standard Model". Reception The first edition, reviewed by Gerald Intermann, earned praise for its "good use of examples as a means of discussing in detail useful problem-solving techniques that other texts leave for the student to discover." Acknowledging it as a "a well-established textbook", an IAEA review said the second edition "...strikes a balance between quantitative rigor and intuitive understanding, using a lively, informal style... The first chapter provides a detailed historical introduction to the subject, while subsequent chapters offer a quantitative presentation of the Standard Model. A simplified introduction to the Feynman rules, based on a 'toy' model, helps readers learn the calculational techniques without the complications of spin. It is followed by accessible treatments of quantum electrodynamics, the strong and weak interactions, and gauge theories." Document 3::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with Document 4::: Physics First is an educational program in the United States, that teaches a basic physics course in the ninth grade (usually 14-year-olds), rather than the biology course which is more standard in public schools. This course relies on the limited math skills that the students have from pre-algebra and algebra I. With these skills students study a broad subset of the introductory physics canon with an emphasis on topics which can be experienced kinesthetically or without deep mathematical reasoning. Furthermore, teaching physics first is better suited for English Language Learners, who would be overwhelmed by the substantial vocabulary requirements of Biology. Physics First began as an organized movement among educators around 1990, and has been slowly catching on throughout the United States. The most prominent movement championing Physics First is Leon Lederman's ARISE (American Renaissance in Science Education). Many proponents of Physics First argue that turning this order around lays the foundations for better understanding of chemistry, which in turn will lead to more comprehension of biology. Due to the tangible nature of most introductory physics experiments, Physics First also lends itself well to an introduction to inquiry-based science education, where students are encouraged to probe the workings of the world in which they live. The majority of high schools which have implemented "physics first" do so by way of offering two separate classes, at two separate levels: simple physics concepts in 9th grade, followed by more advanced physics courses in 11th or 12th grade. In schools with this curriculum, nearly all 9th grade students take a "Physical Science", or "Introduction to Physics Concepts" course. These courses focus on concepts that can be studied with skills from pre-algebra and algebra I. With these ideas in place, students then can be exposed to ideas with more physics related content in chemistry, and other science electives. After th The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. According to which process, sublevels and orbitals are filled with electrons in order of increasing energy? A. particle dynamics B. hausen C. aufbau D. Schrodinger's cat Answer:
sciq-6250
multiple_choice
What is the name of the iron-containing oxygen-transport protein in the red blood cells of all vertebrates?
[ "platelet", "hemoglobin", "ferric acid", "plasma" ]
B
Relavent Documents: Document 0::: Blood is a body fluid in the circulatory system of humans and other vertebrates that delivers necessary substances such as nutrients and oxygen to the cells, and transports metabolic waste products away from those same cells. Blood in the circulatory system is also known as peripheral blood, and the blood cells it carries, peripheral blood cells. Blood is composed of blood cells suspended in blood plasma. Plasma, which constitutes 55% of blood fluid, is mostly water (92% by volume), and contains proteins, glucose, mineral ions, hormones, carbon dioxide (plasma being the main medium for excretory product transportation), and blood cells themselves. Albumin is the main protein in plasma, and it functions to regulate the colloidal osmotic pressure of blood. The blood cells are mainly red blood cells (also called RBCs or erythrocytes), white blood cells (also called WBCs or leukocytes), and in mammals platelets (also called thrombocytes). The most abundant cells in vertebrate blood are red blood cells. These contain hemoglobin, an iron-containing protein, which facilitates oxygen transport by reversibly binding to this respiratory gas thereby increasing its solubility in blood. In contrast, carbon dioxide is mostly transported extracellularly as bicarbonate ion transported in plasma. Vertebrate blood is bright red when its hemoglobin is oxygenated and dark red when it is deoxygenated. Some animals, such as crustaceans and mollusks, use hemocyanin to carry oxygen, instead of hemoglobin. Insects and some mollusks use a fluid called hemolymph instead of blood, the difference being that hemolymph is not contained in a closed circulatory system. In most insects, this "blood" does not contain oxygen-carrying molecules such as hemoglobin because their bodies are small enough for their tracheal system to suffice for supplying oxygen. Jawed vertebrates have an adaptive immune system, based largely on white blood cells. White blood cells help to resist infections and parasite Document 1::: Human iron metabolism is the set of chemical reactions that maintain human homeostasis of iron at the systemic and cellular level. Iron is both necessary to the body and potentially toxic. Controlling iron levels in the body is a critically important part of many aspects of human health and disease. Hematologists have been especially interested in systemic iron metabolism, because iron is essential for red blood cells, where most of the human body's iron is contained. Understanding iron metabolism is also important for understanding diseases of iron overload, such as hereditary hemochromatosis, and iron deficiency, such as iron-deficiency anemia. Importance of iron regulation Iron is an essential bioelement for most forms of life, from bacteria to mammals. Its importance lies in its ability to mediate electron transfer. In the ferrous state (Fe2+), iron acts as an electron donor, while in the ferric state (Fe3+) it acts as an acceptor. Thus, iron plays a vital role in the catalysis of enzymatic reactions that involve electron transfer (reduction and oxidation, redox). Proteins can contain iron as part of different cofactors, such as iron–sulfur clusters (Fe-S) and heme groups, both of which are assembled in mitochondria. Cellular respiration Human cells require iron in order to obtain energy as ATP from a multi-step process known as cellular respiration, more specifically from oxidative phosphorylation at the mitochondrial cristae. Iron is present in the iron–sulfur cluster and heme groups of the electron transport chain proteins that generate a proton gradient that allows ATP synthase to synthesize ATP (chemiosmosis). Heme groups are part of hemoglobin, a protein found in red blood cells that serves to transport oxygen from the lungs to other tissues. Heme groups are also present in myoglobin to store and diffuse oxygen in muscle cells. Oxygen transport The human body needs iron for oxygen transport. Oxygen (O2) is required for the functioning and survival Document 2::: Iron is an important biological element. It is used in both the ubiquitous Iron-sulfur proteins and in Vertebrates it is used in Hemoglobin which is essential for Blood and oxygen transport. Overview Iron is required for life. The iron–sulfur clusters are pervasive and include nitrogenase, the enzymes responsible for biological nitrogen fixation. Iron-containing proteins participate in transport, storage and used of oxygen. Iron proteins are involved in electron transfer. The ubiquity of Iron in life has led to the Iron–sulfur world hypothesis that Iron was a central component of the environment of early life. Examples of iron-containing proteins in higher organisms include hemoglobin, cytochrome (see high-valent iron), and catalase. The average adult human contains about 0.005% body weight of iron, or about four grams, of which three quarters is in hemoglobin – a level that remains constant despite only about one milligram of iron being absorbed each day, because the human body recycles its hemoglobin for the iron content. Microbial growth may be assisted by oxidation of iron(II) or by reduction of iron (III). Biochemistry Iron acquisition poses a problem for aerobic organisms because ferric iron is poorly soluble near neutral pH. Thus, these organisms have developed means to absorb iron as complexes, sometimes taking up ferrous iron before oxidising it back to ferric iron. In particular, bacteria have evolved very high-affinity sequestering agents called siderophores. After uptake in human cells, iron storage is precisely regulated. A major component of this regulation is the protein transferrin, which binds iron ions absorbed from the duodenum and carries it in the blood to cells. Transferrin contains Fe3+ in the middle of a distorted octahedron, bonded to one nitrogen, three oxygens and a chelating carbonate anion that traps the Fe3+ ion: it has such a high stability constant that it is very effective at taking up Fe3+ ions even from the most stable comple Document 3::: Transferrins are glycoproteins found in vertebrates which bind and consequently mediate the transport of iron (Fe) through blood plasma. They are produced in the liver and contain binding sites for two Fe3+ ions. Human transferrin is encoded by the TF gene and produced as a 76 kDa glycoprotein. Transferrin glycoproteins bind iron tightly, but reversibly. Although iron bound to transferrin is less than 0.1% (4 mg) of total body iron, it forms the most vital iron pool with the highest rate of turnover (25 mg/24 h). Transferrin has a molecular weight of around 80 kDa and contains two specific high-affinity Fe(III) binding sites. The affinity of transferrin for Fe(III) is extremely high (association constant is 1020 M−1 at pH 7.4) but decreases progressively with decreasing pH below neutrality. Transferrins are not limited to only binding to iron but also to different metal ions. These glycoproteins are located in various bodily fluids of vertebrates. Some invertebrates have proteins that act like transferrin found in the hemolymph. When not bound to iron, transferrin is known as "apotransferrin" (see also apoprotein). Occurrence and function Transferrins are glycoproteins that are often found in biological fluids of vertebrates. When a transferrin protein loaded with iron encounters a transferrin receptor on the surface of a cell, e.g., erythroid precursors in the bone marrow, it binds to it and is transported into the cell in a vesicle by receptor-mediated endocytosis. The pH of the vesicle is reduced by hydrogen ion pumps ( ATPases) to about 5.5, causing transferrin to release its iron ions. Iron release rate is dependent on several factors including pH levels, interactions between lobes, temperature, salt, and chelator. The receptor with its ligand bound transferrin is then transported through the endocytic cycle back to the cell surface, ready for another round of iron uptake. Each transferrin molecule has the ability to carry two iron ions in the ferric form Document 4::: A heme transporter is a protein that delivers heme to the various parts of a biological cell that require it. Heme is a major source of dietary iron in humans and other mammals, and its synthesis in the body is well understood, but heme pathways are not as well understood. It is likely that heme is tightly regulated for two reasons: the toxic nature of iron in cells, and the lack of a regulated excretory system for excess iron. Understanding heme pathways is therefore important in understanding diseases such as hemochromatosis and anemia. Heme transport Members of the SLC48 and SLC49 solute carrier family participate in heme transport across cellular membranes (heme-transporting ATPase). SLC48A1—also known as Heme-Responsive Gene 1 (HRG1)—and its orthologues were first identified as a heme transporter family through a genetic screen in C.elegans. The protein plays a role in mobilizing heme from the lysosome to the cytoplasm. Deletion of the gene in mice leads to accumulation of heme crystals called hemozoin within the lysosomes of bone marrow, liver and splenic macrophages, but the gene is not known to be associated with human disease. FLVCR1 was originally identified as the receptor for the feline leukemia virus, whose genetic disruption leads to anemia and disruption of heme transport. It appears to protect cells at the CFU-E stage by exporting heme to prevent heme toxicity. Rare homozygous mutations result in autosomal recessive posterior column ataxia with retinitis pigmentosa. FLVCR2 is closely related to FLCVR1, and genetic transfection experiments indicate that it transports heme. Mutations in the gene are associated with proliferative vasculopathy and hydranencephaly-hydrocephaly syndrome (PVHH, also known as Fowler syndrome). Related genes SLC49A3 and SLC49A4 are less well characterized functionally, although SLC49A4 is also known as Disrupted In Renal Cancer Protein 2 or RCC4 due to an association with renal cell cancer. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the name of the iron-containing oxygen-transport protein in the red blood cells of all vertebrates? A. platelet B. hemoglobin C. ferric acid D. plasma Answer:
sciq-9686
multiple_choice
What consists of all the biotic and abiotic factors in an area and their interactions?
[ "habitat", "ecosystem", "community", "macroevolution" ]
B
Relavent Documents: Document 0::: A biophysical environment is a biotic and abiotic surrounding of an organism or population, and consequently includes the factors that have an influence in their survival, development, and evolution. A biophysical environment can vary in scale from microscopic to global in extent. It can also be subdivided according to its attributes. Examples include the marine environment, the atmospheric environment and the terrestrial environment. The number of biophysical environments is countless, given that each living organism has its own environment. The term environment can refer to a singular global environment in relation to humanity, or a local biophysical environment, e.g. the UK's Environment Agency. Life-environment interaction All life that has survived must have adapted to the conditions of its environment. Temperature, light, humidity, soil nutrients, etc., all influence the species within an environment. However, life in turn modifies, in various forms, its conditions. Some long-term modifications along the history of the planet have been significant, such as the incorporation of oxygen to the atmosphere. This process consisted of the breakdown of carbon dioxide by anaerobic microorganisms that used the carbon in their metabolism and released the oxygen to the atmosphere. This led to the existence of oxygen-based plant and animal life, the great oxygenation event. Related studies Environmental science is the study of the interactions within the biophysical environment. Part of this scientific discipline is the investigation of the effect of human activity on the environment. Ecology, a sub-discipline of biology and a part of environmental sciences, is often mistaken as a study of human-induced effects on the environment. Environmental studies is a broader academic discipline that is the systematic study of the interaction of humans with their environment. It is a broad field of study that includes: The natural environment Built environments Social envi Document 1::: In ecology, habitat refers to the array of resources, physical and biotic factors that are present in an area, such as to support the survival and reproduction of a particular species. A species habitat can be seen as the physical manifestation of its ecological niche. Thus "habitat" is a species-specific term, fundamentally different from concepts such as environment or vegetation assemblages, for which the term "habitat-type" is more appropriate. The physical factors may include (for example): soil, moisture, range of temperature, and light intensity. Biotic factors include the availability of food and the presence or absence of predators. Every species has particular habitat requirements, with habitat generalist species able to thrive in a wide array of environmental conditions while habitat specialist species requiring a very limited set of factors to survive. The habitat of a species is not necessarily found in a geographical area, it can be the interior of a stem, a rotten log, a rock or a clump of moss; a parasitic organism has as its habitat the body of its host, part of the host's body (such as the digestive tract), or a single cell within the host's body. Habitat types are environmental categorizations of different environments based on the characteristics of a given geographical area, particularly vegetation and climate. Thus habitat types do not refer to a single species but to multiple species living in the same area. For example, terrestrial habitat types include forest, steppe, grassland, semi-arid or desert. Fresh-water habitat types include marshes, streams, rivers, lakes, and ponds; marine habitat types include salt marshes, the coast, the intertidal zone, estuaries, reefs, bays, the open sea, the sea bed, deep water and submarine vents. Habitat types may change over time. Causes of change may include a violent event (such as the eruption of a volcano, an earthquake, a tsunami, a wildfire or a change in oceanic currents); or change may occur mo Document 2::: A biome () is a biogeographical unit consisting of a biological community that has formed in response to the physical environment in which they are found and a shared regional climate. Biomes may span more than one continent. Biome is a broader term than habitat and can comprise a variety of habitats. While a biome can cover small areas, a microbiome is a mix of organisms that coexist in a defined space on a much smaller scale. For example, the human microbiome is the collection of bacteria, viruses, and other microorganisms that are present on or in a human body. A biota is the total collection of organisms of a geographic region or a time period, from local geographic scales and instantaneous temporal scales all the way up to whole-planet and whole-timescale spatiotemporal scales. The biotas of the Earth make up the biosphere. Etymology The term was suggested in 1916 by Clements, originally as a synonym for biotic community of Möbius (1877). Later, it gained its current definition, based on earlier concepts of phytophysiognomy, formation and vegetation (used in opposition to flora), with the inclusion of the animal element and the exclusion of the taxonomic element of species composition. In 1935, Tansley added the climatic and soil aspects to the idea, calling it ecosystem. The International Biological Program (1964–74) projects popularized the concept of biome. However, in some contexts, the term biome is used in a different manner. In German literature, particularly in the Walter terminology, the term is used similarly as biotope (a concrete geographical unit), while the biome definition used in this article is used as an international, non-regional, terminology—irrespectively of the continent in which an area is present, it takes the same biome name—and corresponds to his "zonobiome", "orobiome" and "pedobiome" (biomes determined by climate zone, altitude or soil). In Brazilian literature, the term "biome" is sometimes used as synonym of biogeographic pr Document 3::: Ecosystem diversity deals with the variations in ecosystems within a geographical location and its overall impact on human existence and the environment. Ecosystem diversity addresses the combined characteristics of biotic properties (biodiversity) and abiotic properties (geodiversity). It is a variation in the ecosystems found in a region or the variation in ecosystems over the whole planet. Ecological diversity includes the variation in both terrestrial and aquatic ecosystems. Ecological diversity can also take into account the variation in the complexity of a biological community, including the number of different niches, the number of and other ecological processes. An example of ecological diversity on a global scale would be the variation in ecosystems, such as deserts, forests, grasslands, wetlands and oceans. Ecological diversity is the largest scale of biodiversity, and within each ecosystem, there is a great deal of both species and genetic diversity. Impact Diversity in the ecosystem is significant to human existence for a variety of reasons. Ecosystem diversity boosts the availability of oxygen via the process of photosynthesis amongst plant organisms domiciled in the habitat. Diversity in an aquatic environment helps in the purification of water by plant varieties for use by humans. Diversity increases plant varieties which serves as a good source for medicines and herbs for human use. A lack of diversity in the ecosystem produces an opposite result. Examples Some examples of ecosystems that are rich in diversity are: Deserts Forests Large marine ecosystems Marine ecosystems Old-growth forests Rainforests Tundra Coral reefs Marine Ecosystem diversity as a result of evolutionary pressure Ecological diversity around the world can be directly linked to the evolutionary and selective pressures that constrain the diversity outcome of the ecosystems within different niches. Tundras, Rainforests, coral reefs and deciduous forests all are form Document 4::: A biochore is a subdivision of the biosphere consisting of biotopes that resemble one another and thus are colonized by similar biota. The concept is relevant in biogeography to refer to a units regardless it rank (regardless the scale). The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What consists of all the biotic and abiotic factors in an area and their interactions? A. habitat B. ecosystem C. community D. macroevolution Answer:
sciq-10526
multiple_choice
Excretion is the process of removing excess water and wastes from the body. what are the main organs of excretion?
[ "lungs", "eyes", "kidneys", "brains" ]
C
Relavent Documents: Document 0::: The excretory system is a passive biological system that removes excess, unnecessary materials from the body fluids of an organism, so as to help maintain internal chemical homeostasis and prevent damage to the body. The dual function of excretory systems is the elimination of the waste products of metabolism and to drain the body of used up and broken down components in a liquid and gaseous state. In humans and other amniotes (mammals, birds and reptiles) most of these substances leave the body as urine and to some degree exhalation, mammals also expel them through sweating. Only the organs specifically used for the excretion are considered a part of the excretory system. In the narrow sense, the term refers to the urinary system. However, as excretion involves several functions that are only superficially related, it is not usually used in more formal classifications of anatomy or function. As most healthy functioning organs produce metabolic and other wastes, the entire organism depends on the function of the system. Breaking down of one of more of the systems is a serious health condition, for example kidney failure. Systems Urinary system The kidneys are large, bean-shaped organs which are present on each side of the vertebral column in the abdominal cavity. Humans have two kidneys and each kidney is supplied with blood from the renal artery. The kidneys remove from the blood the nitrogenous wastes such as urea, as well as salts and excess water, and excrete them in the form of urine. This is done with the help of millions of nephrons present in the kidney. The filtrated blood is carried away from the kidneys by the renal vein (or kidney vein). The urine from the kidney is collected by the ureter (or excretory tubes), one from each kidney, and is passed to the urinary bladder. The urinary bladder collects and stores the urine until urination. The urine collected in the bladder is passed into the external environment from the body through an opening called Document 1::: Urine is a liquid by-product of metabolism in humans and in many other animals. Urine flows from the kidneys through the ureters to the urinary bladder. Urination results in urine being excreted from the body through the urethra. Cellular metabolism generates many by-products that are rich in nitrogen and must be cleared from the bloodstream, such as urea, uric acid, and creatinine. These by-products are expelled from the body during urination, which is the primary method for excreting water-soluble chemicals from the body. A urinalysis can detect nitrogenous wastes of the mammalian body. Urine plays an important role in the earth's nitrogen cycle. In balanced ecosystems, urine fertilizes the soil and thus helps plants to grow. Therefore, urine can be used as a fertilizer. Some animals use it to mark their territories. Historically, aged or fermented urine (known as lant) was also used for gunpowder production, household cleaning, tanning of leather and dyeing of textiles. Human urine and feces are collectively referred to as human waste or human excreta, and are managed via sanitation systems. Livestock urine and feces also require proper management if the livestock population density is high. Physiology Most animals have excretory systems for elimination of soluble toxic wastes. In humans, soluble wastes are excreted primarily by the urinary system and, to a lesser extent in terms of urea, removed by perspiration. The urinary system consists of the kidneys, ureters, urinary bladder, and urethra. The system produces urine by a process of filtration, reabsorption, and tubular secretion. The kidneys extract the soluble wastes from the bloodstream, as well as excess water, sugars, and a variety of other compounds. The resulting urine contains high concentrations of urea and other substances, including toxins. Urine flows from the kidneys through the ureter, bladder, and finally the urethra before passing from the body. Duration Research looking at the duration Document 2::: In pharmacology the elimination or excretion of a drug is understood to be any one of a number of processes by which a drug is eliminated (that is, cleared and excreted) from an organism either in an unaltered form (unbound molecules) or modified as a metabolite. The kidney is the main excretory organ although others exist such as the liver, the skin, the lungs or glandular structures, such as the salivary glands and the lacrimal glands. These organs or structures use specific routes to expel a drug from the body, these are termed elimination pathways: Urine Tears Perspiration Saliva Respiration Milk Faeces Bile Drugs are excreted from the kidney by glomerular filtration and by active tubular secretion following the same steps and mechanisms as the products of intermediate metabolism. Therefore, drugs that are filtered by the glomerulus are also subject to the process of passive tubular reabsorption. Glomerular filtration will only remove those drugs or metabolites that are not bound to proteins present in blood plasma (free fraction) and many other types of drugs (such as the organic acids) are actively secreted. In the proximal and distal convoluted tubules non-ionised acids and weak bases are reabsorbed both actively and passively. Weak acids are excreted when the tubular fluid becomes too alkaline and this reduces passive reabsorption. The opposite occurs with weak bases. Poisoning treatments use this effect to increase elimination, by alkalizing the urine causing forced diuresis which promotes excretion of a weak acid, rather than it getting reabsorbed. As the acid is ionised, it cannot pass through the plasma membrane back into the blood stream and instead gets excreted with the urine. Acidifying the urine has the same effect for weakly basic drugs. On other occasions drugs combine with bile juices and enter the intestines. In the intestines the drug will join with the unabsorbed fraction of the administered dose and be eliminated with the faeces Document 3::: The organs of Bojanus or Bojanus organs are excretory glands that serve the function of kidneys in some of the molluscs. In other words, these are metanephridia that are found in some molluscs, for example in the bivalves. Some other molluscs have another type of organ for excretion called Keber's organ. The Bojanus organ is named after Ludwig Heinrich Bojanus, who first described it. The excretory system of a bivalve consists of a pair of kidneys called the organ of bojanus. These are situated one of each side of the body below the pericardium. Each kidney consist of 2 part (1)- glandular part (2)- a thin walled ciliated urinary bladder. Document 4::: Drinking is the act of ingesting water or other liquids into the body through the mouth, proboscis, or elsewhere. Humans drink by swallowing, completed by peristalsis in the esophagus. The physiological processes of drinking vary widely among other animals. Most animals drink water to maintain bodily hydration, although many can survive on the water gained from their food. Water is required for many physiological processes. Both inadequate and (less commonly) excessive water intake are associated with health problems. Methods of drinking In humans When a liquid enters a human mouth, the swallowing process is completed by peristalsis which delivers the liquid through the esophagus to the stomach; much of the activity is abetted by gravity. The liquid may be poured from the hands or drinkware may be used as vessels. Drinking can also be performed by acts of inhalation, typically when imbibing hot liquids or drinking from a spoon. Infants employ a method of suction wherein the lips are pressed tight around a source, as in breastfeeding: a combination of breath and tongue movement creates a vacuum which draws in liquid. In other land mammals By necessity, terrestrial animals in captivity become accustomed to drinking water, but most free-roaming animals stay hydrated through the fluids and moisture in fresh food, and learn to actively seek foods with high fluid content. When conditions impel them to drink from bodies of water, the methods and motions differ greatly among species. Cats, canines, and ruminants all lower the neck and lap in water with their powerful tongues. Cats and canines lap up water with the tongue in a spoon-like shape. Canines lap water by scooping it into their mouth with a tongue which has taken the shape of a ladle. However, with cats, only the tip of their tongue (which is smooth) touches the water, and then the cat quickly pulls its tongue back into its mouth which soon closes; this results in a column of liquid being pulled into the ca The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Excretion is the process of removing excess water and wastes from the body. what are the main organs of excretion? A. lungs B. eyes C. kidneys D. brains Answer:
sciq-3551
multiple_choice
For an exothermic chemical reaction, energy is given off as reactants are converted to what?
[ "forms", "exports", "imports", "products" ]
D
Relavent Documents: Document 0::: In chemistry and particularly biochemistry, an energy-rich species (usually energy-rich molecule) or high-energy species (usually high-energy molecule) is a chemical species which reacts, potentially with other species found in the environment, to release chemical energy. In particular, the term is often used for: adenosine triphosphate (ATP) and similar molecules called high-energy phosphates, which release inorganic phosphate into the environment in an exothermic reaction with water: ATP + → ADP + Pi ΔG°' = −30.5 kJ/mol (−7.3 kcal/mol) fuels such as hydrocarbons, carbohydrates, lipids, proteins, and other organic molecules which react with oxygen in the environment to ultimately form carbon dioxide, water, and sometimes nitrogen, sulfates, and phosphates molecular hydrogen monatomic oxygen, ozone, hydrogen peroxide, singlet oxygen and other metastable or unstable species which spontaneously react without further reactants in particular, the vast majority of free radicals explosives such as nitroglycerin and other substances which react exothermically without requiring a second reactant metals or metal ions which can be oxidized to release energy This is contrasted to species that are either part of the environment (this sometimes includes diatomic triplet oxygen) or do not react with the environment (such as many metal oxides or calcium carbonate); those species are not considered energy-rich or high-energy species. Alternative definitions The term is often used without a definition. Some authors define the term "high-energy" to be equivalent to "chemically unstable", while others reserve the term for high-energy phosphates, such as the Great Soviet Encyclopedia which defines the term "high-energy compounds" to refer exclusively to those. The IUPAC glossary of terms used in ecotoxicology defines a primary producer as an "organism capable of using the energy derived from light or a chemical substance in order to manufacture energy-rich organic compou Document 1::: An elementary reaction is a chemical reaction in which one or more chemical species react directly to form products in a single reaction step and with a single transition state. In practice, a reaction is assumed to be elementary if no reaction intermediates have been detected or need to be postulated to describe the reaction on a molecular scale. An apparently elementary reaction may be in fact a stepwise reaction, i.e. a complicated sequence of chemical reactions, with reaction intermediates of variable lifetimes. In a unimolecular elementary reaction, a molecule dissociates or isomerises to form the products(s) At constant temperature, the rate of such a reaction is proportional to the concentration of the species In a bimolecular elementary reaction, two atoms, molecules, ions or radicals, and , react together to form the product(s) The rate of such a reaction, at constant temperature, is proportional to the product of the concentrations of the species and The rate expression for an elementary bimolecular reaction is sometimes referred to as the Law of Mass Action as it was first proposed by Guldberg and Waage in 1864. An example of this type of reaction is a cycloaddition reaction. This rate expression can be derived from first principles by using collision theory for ideal gases. For the case of dilute fluids equivalent results have been obtained from simple probabilistic arguments. According to collision theory the probability of three chemical species reacting simultaneously with each other in a termolecular elementary reaction is negligible. Hence such termolecular reactions are commonly referred as non-elementary reactions and can be broken down into a more fundamental set of bimolecular reactions, in agreement with the law of mass action. It is not always possible to derive overall reaction schemes, but solutions based on rate equations are often possible in terms of steady-state or Michaelis-Menten approximations. Notes Chemical kinetics Phy Document 2::: Conversion and its related terms yield and selectivity are important terms in chemical reaction engineering. They are described as ratios of how much of a reactant has reacted (X — conversion, normally between zero and one), how much of a desired product was formed (Y — yield, normally also between zero and one) and how much desired product was formed in ratio to the undesired product(s) (S — selectivity). There are conflicting definitions in the literature for selectivity and yield, so each author's intended definition should be verified. Conversion can be defined for (semi-)batch and continuous reactors and as instantaneous and overall conversion. Assumptions The following assumptions are made: The following chemical reaction takes place: , where and are the stoichiometric coefficients. For multiple parallel reactions, the definitions can also be applied, either per reaction or using the limiting reaction. Batch reaction assumes all reactants are added at the beginning. Semi-Batch reaction assumes some reactants are added at the beginning and the rest fed during the batch. Continuous reaction assumes reactants are fed and products leave the reactor continuously and in steady state. Conversion Conversion can be separated into instantaneous conversion and overall conversion. For continuous processes the two are the same, for batch and semi-batch there are important differences. Furthermore, for multiple reactants, conversion can be defined overall or per reactant. Instantaneous conversion Semi-batch In this setting there are different definitions. One definition regards the instantaneous conversion as the ratio of the instantaneously converted amount to the amount fed at any point in time: . with as the change of moles with time of species i. This ratio can become larger than 1. It can be used to indicate whether reservoirs are built up and it is ideally close to 1. When the feed stops, its value is not defined. In semi-batch polymerisation, Document 3::: In chemical thermodynamics, the reaction quotient (Qr or just Q) is a dimensionless quantity that provides a measurement of the relative amounts of products and reactants present in a reaction mixture for a reaction with well-defined overall stoichiometry, at a particular point in time. Mathematically, it is defined as the ratio of the activities (or molar concentrations) of the product species over those of the reactant species involved in the chemical reaction, taking stoichiometric coefficients of the reaction into account as exponents of the concentrations. In equilibrium, the reaction quotient is constant over time and is equal to the equilibrium constant. A general chemical reaction in which α moles of a reactant A and β moles of a reactant B react to give ρ moles of a product R and σ moles of a product S can be written as \it \alpha\,\rm A{} + \it \beta\,\rm B{} <=> \it \rho\,\rm R{} + \it \sigma\,\rm S{}. The reaction is written as an equilibrium even though in many cases it may appear that all of the reactants on one side have been converted to the other side. When any initial mixture of A, B, R, and S is made, and the reaction is allowed to proceed (either in the forward or reverse direction), the reaction quotient Qr, as a function of time t, is defined as where {X}t denotes the instantaneous activity of a species X at time t. A compact general definition is where Пj denotes the product across all j-indexed variables, aj(t) is the activity of species j at time t, and νj is the stoichiometric number (the stoichiometric coefficient multiplied by +1 for products and –1 for starting materials). Relationship to K (the equilibrium constant) As the reaction proceeds with the passage of time, the species' activities, and hence the reaction quotient, change in a way that reduces the free energy of the chemical system. The direction of the change is governed by the Gibbs free energy of reaction by the relation , where K is a constant independent of initi Document 4::: Activation, in chemistry and biology, is the process whereby something is prepared or excited for a subsequent reaction. Chemistry In chemistry, "activation" refers to the reversible transition of a molecule into a nearly identical chemical or physical state, with the defining characteristic being that this resultant state exhibits an increased propensity to undergo a specified chemical reaction. Thus, activation is conceptually the opposite of protection, in which the resulting state exhibits a decreased propensity to undergo a certain reaction. The energy of activation specifies the amount of free energy the reactants must possess (in addition to their rest energy) in order to initiate their conversion into corresponding products—that is, in order to reach the transition state for the reaction. The energy needed for activation can be quite small, and often it is provided by the natural random thermal fluctuations of the molecules themselves (i.e. without any external sources of energy). The branch of chemistry that deals with this topic is called chemical kinetics. Biology Biochemistry In biochemistry, activation, specifically called bioactivation, is where enzymes or other biologically active molecules acquire the ability to perform their biological function, such as inactive proenzymes being converted into active enzymes that are able to catalyze their substrates' reactions into products. Bioactivation may also refer to the process where inactive prodrugs are converted into their active metabolites, or the toxication of protoxins into actual toxins. An enzyme may be reversibly or irreversibly bioactivated. A major mechanism of irreversible bioactivation is where a piece of a protein is cut off by cleavage, producing an enzyme that will then stay active. A major mechanism of reversible bioactivation is substrate presentation where an enzyme translocates near its substrate. Another reversible reaction is where a cofactor binds to an enzyme, which then rem The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. For an exothermic chemical reaction, energy is given off as reactants are converted to what? A. forms B. exports C. imports D. products Answer:
sciq-1534
multiple_choice
When can mutations occur in genes?
[ "after dna replication", "during dna replication", "during rna replication", "after rna replication" ]
B
Relavent Documents: Document 0::: In biology, and especially in genetics, a mutant is an organism or a new genetic character arising or resulting from an instance of mutation, which is generally an alteration of the DNA sequence of the genome or chromosome of an organism. It is a characteristic that would not be observed naturally in a specimen. The term mutant is also applied to a virus with an alteration in its nucleotide sequence whose genome is in the nuclear genome. The natural occurrence of genetic mutations is integral to the process of evolution. The study of mutants is an integral part of biology; by understanding the effect that a mutation in a gene has, it is possible to establish the normal function of that gene. Mutants arise by mutation Mutants arise by mutations occurring in pre-existing genomes as a result of errors of DNA replication or errors of DNA repair. Errors of replication often involve translesion synthesis by a DNA polymerase when it encounters and bypasses a damaged base in the template strand. A DNA damage is an abnormal chemical structure in DNA, such as a strand break or an oxidized base, whereas a mutation, by contrast, is a change in the sequence of standard base pairs. Errors of repair occur when repair processes inaccurately replace a damaged DNA sequence. The DNA repair process microhomology-mediated end joining is particularly error-prone. Etymology Although not all mutations have a noticeable phenotypic effect, the common usage of the word "mutant" is generally a pejorative term, only used for genetically or phenotypically noticeable mutations. Previously, people used the word "sport" (related to spurt) to refer to abnormal specimens. The scientific usage is broader, referring to any organism differing from the wild type. The word finds its origin in the Latin term mūtant- (stem of mūtāns), which means "to change". Mutants should not be confused with organisms born with developmental abnormalities, which are caused by errors during morphogenesis. In a devel Document 1::: DNA Repair and Mutagenesis is a college-level textbook about DNA repair and mutagenesis written by Errol Friedberg, Graham Walker, Wolfram Siede, Richard D. Wood, and Roger Schultz. In its second edition as of 2009, DNA Repair and Mutagenesis contains over 1,000 pages, 10,000 references and 700 illustrations and has been described as "the most comprehensive book available in [the] field." Document 2::: In genetics, a dynamic mutation is an unstable heritable element where the probability of expression of a mutant phenotype is a function of the number of copies of the mutation. That is, the replication product (progeny) of a dynamic mutation has a different likelihood of mutation than its predecessor. These mutations, typically short sequences repeated many times, give rise to numerous known diseases, including the trinucleotide repeat disorders. Robert I. Richards and Grant R. Sutherland called these phenomena, in the framework of dynamical genetics, dynamic mutations. Triplet expansion is caused by slippage during DNA replication. Due to the repetitive nature of the DNA sequence in these regions , 'loop out' structures may form during DNA replication while maintaining complementary base pairing between the parent strand and daughter strand being synthesized. If the loop out structure is formed from sequence on the daughter strand this will result in an increase in the number of repeats. However, if the loop out structure is formed on the parent strand a decrease in the number of repeats occurs. It appears that expansion of these repeats is more common than reduction. Generally the larger the expansion the more likely they are to cause disease or increase the severity of disease. This property results in the characteristic of anticipation seen in trinucleotide repeat disorders. Anticipation describes the tendency of age of onset to decrease and severity of symptoms to increase through successive generations of an affected family due to the expansion of these repeats. Common features Most of these diseases have neurological symptoms. Anticipation/The Sherman paradox refers to progressively earlier or more severe expression of the disease in more recent generations. Repeats are usually polymorphic in copy number, with mitotic and meiotic instability. Copy number related to the severity and/or age of onset Imprinting effects Reverse mutation - The mutation can rev Document 3::: Genome instability (also genetic instability or genomic instability) refers to a high frequency of mutations within the genome of a cellular lineage. These mutations can include changes in nucleic acid sequences, chromosomal rearrangements or aneuploidy. Genome instability does occur in bacteria. In multicellular organisms genome instability is central to carcinogenesis, and in humans it is also a factor in some neurodegenerative diseases such as amyotrophic lateral sclerosis or the neuromuscular disease myotonic dystrophy. The sources of genome instability have only recently begun to be elucidated. A high frequency of externally caused DNA damage can be one source of genome instability since DNA damage can cause inaccurate translesion DNA synthesis past the damage or errors in repair, leading to mutation. Another source of genome instability may be epigenetic or mutational reductions in expression of DNA repair genes. Because endogenous (metabolically-caused) DNA damage is very frequent, occurring on average more than 60,000 times a day in the genomes of human cells, any reduced DNA repair is likely an important source of genome instability. The usual genome situation Usually, all cells in an individual in a given species (plant or animal) show a constant number of chromosomes, which constitute what is known as the karyotype defining this species (see also List of number of chromosomes of various organisms), although some species present a very high karyotypic variability. In humans, mutations that would change an amino acid within the protein coding region of the genome occur at an average of only 0.35 per generation (less than one mutated protein per generation). Sometimes, in a species with a stable karyotype, random variations that modify the normal number of chromosomes may be observed. In other cases, there are structural alterations (e.g., chromosomal translocations, deletions) that modify the standard chromosomal complement. In these cases, it is indica Document 4::: Mutation frequency and mutation rates are highly correlated to each other. Mutation frequencies test are cost effective in laboratories however; these two concepts provide vital information in reference to accounting for the emergence of mutations on any given germ line. There are several test utilized in measuring the chances of mutation frequency and rates occurring in a particular gene pool. Some of the test are as follows: Avida Digital Evolution Platform Fluctuation Analysis Mutation frequency and rates provide vital information about how often a mutation may be expressed in a particular genetic group or sex. Yoon et., 2009 suggested that as sperm donors ages increased the sperm mutation frequencies increased. This reveals the positive correlation in how males are most likely to contribute to genetic disorders that reside within X-linked recessive chromosome. There are additional factors affecting mutation frequency and rates involving evolutionary influences. Since, organisms may pass mutations to their offspring incorporating and analyzing the mutation frequency and rates of a particular species may provide a means to adequately comprehend its longevity Aging The time course of spontaneous mutation frequency from middle to late adulthood was measured in four different tissues of the mouse. Mutation frequencies in the cerebellum (90% neurons) and male germ cells were lower than in liver and adipose tissue. Furthermore, the mutation frequencies increased with age in liver and adipose tissue, whereas in the cerebellum and male germ cells the mutation frequency remained constant Dietary restricted rodents live longer and are generally healthier than their ad libitum fed counterparts. No changes were observed in the spontaneous chromosomal mutation frequency of dietary restricted mice (aged 6 and 12 months) compared to ad libitum fed control mice. Thus dietary restriction appears to have no appreciable effect on spontaneous mutation in chromosomal The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. When can mutations occur in genes? A. after dna replication B. during dna replication C. during rna replication D. after rna replication Answer:
sciq-2266
multiple_choice
Mass spectrometry today is used extensively in chemistry and biology laboratories to identify chemical and biological substances according to their ratios of what?
[ "volume to charge", "ph to charge", "mass to volume", "mass-to-charge" ]
D
Relavent Documents: Document 0::: Swansea University has had a long established history of development and innovation in mass spectrometry and chromatography. Mass Spectrometry Research Unit In 1975, John H. Beynon was appointed the Royal Society Research Professor and established the Mass Spectrometry Research Unit at Swansea University (at that time known as the University College of Swansea). In 1986, Dai Games moved from Cardiff University to become the Units new Director. In 1984, the first observation of He22+ was made at the unit, its the same as molecular hydrogen (isolectronic molecules) except it has lots more energy 3310 kJ per mole. National Mass Spectrometry Service A grant of £670,000 was awarded in 1985 by the then Science and Engineering Research Council (SERC) to establish a national Mass Spectrometry Center at Swansea University to provide an analytical service to British Universities. It was officially opened in April 1987 by Lord Callaghan. In 2002, the center was enlarged and the new laboratories were opened by Lord Morgan. Following successful £3,000,000 contract renewal Edwina Hart, the Minister for Economy, Science and Transport, officially re-opened the EPSRC National Research Facility after refurbishment in 2015. Biomolecular Analysis Mass Spectrometry A Biomolecular Analysis Mass Spectrometry (BAMS) facility was officially opened in 2003, headed by Professor Newton and Dr Dudley. It was a collaborative entity between the Department of Biological Sciences and the Medical School. It focused on the study of nucleosides, nucleotides and cyclic nucleotides. Stable isotope mass spectrometry Stable isotope mass spectrometry is conducted in the Department of Geography, and was recently used by the Landmark Trust to determine very precisely the age of the timber from Llwyn Celyn farmhouse to the year 1420. Document 1::: The British Mass Spectrometry Society is a registered charity founded in 1964 that encourages participation in every aspect of mass spectrometry. It aims to encourage participation in all aspects of mass spectrometry on the widest basis, to promote knowledge and advancement in the field and to provide a forum for the exchange of views and information. It is committed to ensuring equal opportunities and reflecting the diversity of the society as a whole. The first foundations of the BMSS were laid in 1949 with the establishment of the Mass Spectrometry Panel by the Hydrocarbon Research Group. Conferences The society's annual meeting is held in the first week of September as well as regular special interest group meetings (Lipidomics, MALDI & Imaging, Ambient Ionisation, Environmental & Food Analysis) through the year, in locations throughout the United Kingdom. Locations of the society's annual meetings beginning in 1965: Grants In 1985, the Society used the proceeds from the 10th International Mass Spectrometry Conference to establish 7 Beynon PhD Studentships. In 2007, the Society announced they would initiate summer studentship projects and in 2012 they announced BMSS research grants. Publications Mass Matters Governance Executive committee The management of the Society is vested in an Executive Committee made up of Officers and General Members, they also act as Trustees of the Society. There are currently 10 officers of the Society namely the Chair, Vice-Chair, Treasurer, General Secretary, Meetings Secretary, Papers Secretary, Education Officer, Publicity Secretary, Special Interest Group Co-ordinator, and Digital Communications Officer. Presidents John Monaghan 2003 - Past chairs Awards In 1987 the society announce the establishment of the Aston Medal to be awarded to “individuals deserving special recognition by reason of their outstanding contributions to knowledge in the biological, chemical, engineering, mathematical, medical, or physical sci Document 2::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 3::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 4::: Analytical chemistry studies and uses instruments and methods to separate, identify, and quantify matter. In practice, separation, identification or quantification may constitute the entire analysis or be combined with another method. Separation isolates analytes. Qualitative analysis identifies analytes, while quantitative analysis determines the numerical amount or concentration. Analytical chemistry consists of classical, wet chemical methods and modern, instrumental methods. Classical qualitative methods use separations such as precipitation, extraction, and distillation. Identification may be based on differences in color, odor, melting point, boiling point, solubility, radioactivity or reactivity. Classical quantitative analysis uses mass or volume changes to quantify amount. Instrumental methods may be used to separate samples using chromatography, electrophoresis or field flow fractionation. Then qualitative and quantitative analysis can be performed, often with the same instrument and may use light interaction, heat interaction, electric fields or magnetic fields. Often the same instrument can separate, identify and quantify an analyte. Analytical chemistry is also focused on improvements in experimental design, chemometrics, and the creation of new measurement tools. Analytical chemistry has broad applications to medicine, science, and engineering. History Analytical chemistry has been important since the early days of chemistry, providing methods for determining which elements and chemicals are present in the object in question. During this period, significant contributions to analytical chemistry included the development of systematic elemental analysis by Justus von Liebig and systematized organic analysis based on the specific reactions of functional groups. The first instrumental analysis was flame emissive spectrometry developed by Robert Bunsen and Gustav Kirchhoff who discovered rubidium (Rb) and caesium (Cs) in 1860. Most of the major devel The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Mass spectrometry today is used extensively in chemistry and biology laboratories to identify chemical and biological substances according to their ratios of what? A. volume to charge B. ph to charge C. mass to volume D. mass-to-charge Answer:
sciq-11244
multiple_choice
The ocean basin begins where the ocean meets what?
[ "river", "bay", "sea", "land" ]
D
Relavent Documents: Document 0::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 1::: The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas. The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014. The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools. See also Marine Science Ministry of Fisheries and Aquatic Resources Development Document 2::: In hydrology, an oceanic basin (or ocean basin) is anywhere on Earth that is covered by seawater. Geologically, most of the ocean basins are large geologic basins that are below sea level. Most commonly the ocean is divided into basins following the continents distribution: the North and South Atlantic (together approximately 75 million km2/ 29 million mi2), North and South Pacific (together approximately 155 million km2/ 59 million mi2), Indian Ocean (68 million km2/ 26 million mi2) and Arctic Ocean (14 million km2/ 5.4 million mi2). Also recognized is the Southern Ocean (20 million km2/ 7 million mi2). All ocean basins collectively cover 71% of the Earth's surface, and together they contain almost 97% of all water on the planet. They have an average depth of almost 4 km (about 2.5 miles). Definitions of boundaries Boundaries based on continents "Limits of Oceans and Seas", published by the International Hydrographic Office in 1953, is a document that defined the ocean's basins as they are largely known today. The main ocean basins are the ones named in the previous section. These main basins are divided into smaller parts. Some examples are: the Baltic Sea (with three subdivisions), the North Sea, the Greenland Sea, the Norwegian Sea, the Laptev Sea, the Gulf of Mexico, the South China Sea, and many more. The limits were set for convenience of compiling sailing directions but had no geographical or physical ground and to this day have no political significance. For instance, the line between the North and South Atlantic is set at the equator. The Antarctic or Southern Ocean, which reaches from 60° south to Antarctica had been omitted until 2000, but is now also recognized by the International Hydrographic Office. Nevertheless, and since ocean basins are interconnected, many oceanographers prefer to refer to one single ocean basin instead of multiple ones.   Older references (e.g., Littlehales 1930) consider the oceanic basins to be the complement to the conti Document 3::: The borders of the oceans are the limits of Earth's oceanic waters. The definition and number of oceans can vary depending on the adopted criteria. The principal divisions (in descending order of area) of the five oceans are the Pacific Ocean, Atlantic Ocean, Indian Ocean, Southern (Antarctic) Ocean, and Arctic Ocean. Smaller regions of the oceans are called seas, gulfs, bays, straits, and other terms. Geologically, an ocean is an area of oceanic crust covered by water. See also the list of seas article for the seas included in each ocean area. Overview Though generally described as several separate oceans, the world's oceanic waters constitute one global, interconnected body of salt water sometimes referred to as the World Ocean or Global Ocean. This concept of a continuous body of water with relatively free interchange among its parts is of fundamental importance to oceanography. The major oceanic divisions are defined in part by the continents, various archipelagos, and other criteria. The principal divisions (in descending order of area) are the: Pacific Ocean, Atlantic Ocean, Indian Ocean, Southern (Antarctic) Ocean, and Arctic Ocean. Smaller regions of the oceans are called seas, gulfs, bays, straits, and other terms. Geologically, an ocean is an area of oceanic crust covered by water. Oceanic crust is the thin layer of solidified volcanic basalt that covers the Earth's mantle. Continental crust is thicker but less dense. From this perspective, the Earth has three oceans: the World Ocean, the Caspian Sea, and the Black Sea. The latter two were formed by the collision of Cimmeria with Laurasia. The Mediterranean Sea is at times a discrete ocean because tectonic plate movement has repeatedly broken its connection to the World Ocean through the Strait of Gibraltar. The Black Sea is connected to the Mediterranean through the Bosporus, but the Bosporus is a natural canal cut through continental rock some 7,000 years ago, rather than a piece of oceanic sea floo Document 4::: Marine technology is defined by WEGEMT (a European association of 40 universities in 17 countries) as "technologies for the safe use, exploitation, protection of, and intervention in, the marine environment." In this regard, according to WEGEMT, the technologies involved in marine technology are the following: naval architecture, marine engineering, ship design, ship building and ship operations; oil and gas exploration, exploitation, and production; hydrodynamics, navigation, sea surface and sub-surface support, underwater technology and engineering; marine resources (including both renewable and non-renewable marine resources); transport logistics and economics; inland, coastal, short sea and deep sea shipping; protection of the marine environment; leisure and safety. Education and training According to the Cape Fear Community College of Wilmington, North Carolina, the curriculum for a marine technology program provides practical skills and academic background that are essential in succeeding in the area of marine scientific support. Through a marine technology program, students aspiring to become marine technologists will become proficient in the knowledge and skills required of scientific support technicians. The educational preparation includes classroom instructions and practical training aboard ships, such as how to use and maintain electronic navigation devices, physical and chemical measuring instruments, sampling devices, and data acquisition and reduction systems aboard ocean-going and smaller vessels, among other advanced equipment. As far as marine technician programs are concerned, students learn hands-on to trouble shoot, service and repair four- and two-stroke outboards, stern drive, rigging, fuel & lube systems, electrical including diesel engines. Relationship to commerce Marine technology is related to the marine science and technology industry, also known as maritime commerce. The Executive Office of Housing and Economic Development (EOHED The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The ocean basin begins where the ocean meets what? A. river B. bay C. sea D. land Answer:
sciq-327
multiple_choice
Alkenes have double bonds while alkynes have what?
[ "triple bonds", "quadruple bonds", "single bonds", "equal bonds" ]
A
Relavent Documents: Document 0::: A carbon–carbon bond is a covalent bond between two carbon atoms. The most common form is the single bond: a bond composed of two electrons, one from each of the two atoms. The carbon–carbon single bond is a sigma bond and is formed between one hybridized orbital from each of the carbon atoms. In ethane, the orbitals are sp3-hybridized orbitals, but single bonds formed between carbon atoms with other hybridizations do occur (e.g. sp2 to sp2). In fact, the carbon atoms in the single bond need not be of the same hybridization. Carbon atoms can also form double bonds in compounds called alkenes or triple bonds in compounds called alkynes. A double bond is formed with an sp2-hybridized orbital and a p-orbital that is not involved in the hybridization. A triple bond is formed with an sp-hybridized orbital and two p-orbitals from each atom. The use of the p-orbitals forms a pi bond. Chains and branching Carbon is one of the few elements that can form long chains of its own atoms, a property called catenation. This coupled with the strength of the carbon–carbon bond gives rise to an enormous number of molecular forms, many of which are important structural elements of life, so carbon compounds have their own field of study: organic chemistry. Branching is also common in C−C skeletons. Carbon atoms in a molecule are categorized by the number of carbon neighbors they have: A primary carbon has one carbon neighbor. A secondary carbon has two carbon neighbors. A tertiary carbon has three carbon neighbors. A quaternary carbon has four carbon neighbors. In "structurally complex organic molecules", it is the three-dimensional orientation of the carbon–carbon bonds at quaternary loci which dictates the shape of the molecule. Further, quaternary loci are found in many biologically active small molecules, such as cortisone and morphine. Synthesis Carbon–carbon bond-forming reactions are organic reactions in which a new carbon–carbon bond is formed. They are important in th Document 1::: In chemistry, a double bond is a covalent bond between two atoms involving four bonding electrons as opposed to two in a single bond. Double bonds occur most commonly between two carbon atoms, for example in alkenes. Many double bonds exist between two different elements: for example, in a carbonyl group between a carbon atom and an oxygen atom. Other common double bonds are found in azo compounds (N=N), imines (C=N), and sulfoxides (S=O). In a skeletal formula, a double bond is drawn as two parallel lines (=) between the two connected atoms; typographically, the equals sign is used for this. Double bonds were introduced in chemical notation by Russian chemist Alexander Butlerov. Double bonds involving carbon are stronger and shorter than single bonds. The bond order is two. Double bonds are also electron-rich, which makes them potentially more reactive in the presence of a strong electron acceptor (as in addition reactions of the halogens). Double bonds in alkenes The type of bonding can be explained in terms of orbital hybridisation. In ethylene each carbon atom has three sp2 orbitals and one p-orbital. The three sp2 orbitals lie in a plane with ~120° angles. The p-orbital is perpendicular to this plane. When the carbon atoms approach each other, two of the sp2 orbitals overlap to form a sigma bond. At the same time, the two p-orbitals approach (again in the same plane) and together they form a pi bond. For maximum overlap, the p-orbitals have to remain parallel, and, therefore, rotation around the central bond is not possible. This property gives rise to cis-trans isomerism. Double bonds are shorter than single bonds because p-orbital overlap is maximized. With 133 pm, the ethylene C=C bond length is shorter than the C−C length in ethane with 154 pm. The double bond is also stronger, 636 kJ mol−1 versus 368 kJ mol−1 but not twice as much as the pi-bond is weaker than the sigma bond due to less effective pi-overlap. In an alternative representation, the doubl Document 2::: In chemistry, the carbon-hydrogen bond ( bond) is a chemical bond between carbon and hydrogen atoms that can be found in many organic compounds. This bond is a covalent, single bond, meaning that carbon shares its outer valence electrons with up to four hydrogens. This completes both of their outer shells, making them stable. Carbon–hydrogen bonds have a bond length of about 1.09 Å (1.09 × 10−10 m) and a bond energy of about 413 kJ/mol (see table below). Using Pauling's scale—C (2.55) and H (2.2)—the electronegativity difference between these two atoms is 0.35. Because of this small difference in electronegativities, the bond is generally regarded as being non-polar. In structural formulas of molecules, the hydrogen atoms are often omitted. Compound classes consisting solely of bonds and bonds are alkanes, alkenes, alkynes, and aromatic hydrocarbons. Collectively they are known as hydrocarbons. In October 2016, astronomers reported that the very basic chemical ingredients of life—the carbon-hydrogen molecule (CH, or methylidyne radical), the carbon-hydrogen positive ion () and the carbon ion ()—are the result, in large part, of ultraviolet light from stars, rather than in other ways, such as the result of turbulent events related to supernovae and young stars, as thought earlier. Bond length The length of the carbon-hydrogen bond varies slightly with the hybridisation of the carbon atom. A bond between a hydrogen atom and an sp2 hybridised carbon atom is about 0.6% shorter than between hydrogen and sp3 hybridised carbon. A bond between hydrogen and sp hybridised carbon is shorter still, about 3% shorter than sp3 C-H. This trend is illustrated by the molecular geometry of ethane, ethylene and acetylene. Reactions The C−H bond in general is very strong, so it is relatively unreactive. In several compound classes, collectively called carbon acids, the C−H bond can be sufficiently acidic for proton removal. Unactivated C−H bonds are found in alkanes and are no Document 3::: Cross-conjugation is a special type of conjugation in a molecule, when in a set of three pi bonds only two pi bonds interact with each other by conjugation, while the third one is excluded from interaction. Whereas a normal conjugated system such as a polyene typically has alternating single and double bonds along consecutive atoms, a cross-conjugated system has an alkene unit bonded to one of the middle atoms of another conjugated chain through a single bond. In classical terms, one of the double-bonds branches off rather than continuing consecutively: the main chain is conjugated, and part of that same main chain is conjugated with the side group, but all parts are not conjugated together as strongly. Examples of cross-conjugation can be found in molecules such as benzophenone, , p-quinones, dendralenes, radialenes, fullerene, and Indigo dye. The type of conjugation affects reactivity and molecular electronic transitions. Document 4::: In chemistry, an open-chain compound (also spelled as open chain compound) or acyclic compound (Greek prefix "α", without and "κύκλος", cycle) is a compound with a linear structure, rather than a cyclic one. An open-chain compound having no side groups is called a straight-chain compound (also spelled as straight chain compound). Many of the simple molecules of organic chemistry, such as the alkanes and alkenes, have both linear and ring isomers, that is, both acyclic and cyclic. For those with 4 or more carbons, the linear forms can have straight-chain or branched-chain isomers. The lowercase prefix n- denotes the straight-chain isomer; for example, n-butane is straight-chain butane, whereas i-butane is isobutane. Cycloalkanes are isomers of alkenes, not of alkanes, because the ring's closure involves a C-C bond. Having no rings (aromatic or otherwise), all open-chain compounds are aliphatic. Typically in biochemistry, some isomers are more prevalent than others. For example, in living organisms, the open-chain isomer of glucose usually exists only transiently, in small amounts; D-glucose is the usual isomer; and L-glucose is rare. Straight-chain molecules are often not literally straight, in the sense that their bond angles are often not 180°, but the name reflects that they are schematically straight. For example, the straight-chain alkanes are wavy or "puckered", as the models below show. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Alkenes have double bonds while alkynes have what? A. triple bonds B. quadruple bonds C. single bonds D. equal bonds Answer:
sciq-3851
multiple_choice
What system plays a critical role in the regulation of vascular homeostasis?
[ "circulatory system", "nervous system", "renal system", "endocrine system" ]
B
Relavent Documents: Document 0::: The blood circulatory system is a system of organs that includes the heart, blood vessels, and blood which is circulated throughout the entire body of a human or other vertebrate. It includes the cardiovascular system, or vascular system, that consists of the heart and blood vessels (from Greek kardia meaning heart, and from Latin vascula meaning vessels). The circulatory system has two divisions, a systemic circulation or circuit, and a pulmonary circulation or circuit. Some sources use the terms cardiovascular system and vascular system interchangeably with the circulatory system. The network of blood vessels are the great vessels of the heart including large elastic arteries, and large veins; other arteries, smaller arterioles, capillaries that join with venules (small veins), and other veins. The circulatory system is closed in vertebrates, which means that the blood never leaves the network of blood vessels. Some invertebrates such as arthropods have an open circulatory system. Diploblasts such as sponges, and comb jellies lack a circulatory system. Blood is a fluid consisting of plasma, red blood cells, white blood cells, and platelets; it is circulated around the body carrying oxygen and nutrients to the tissues and collecting and disposing of waste materials. Circulated nutrients include proteins and minerals and other components include hemoglobin, hormones, and gases such as oxygen and carbon dioxide. These substances provide nourishment, help the immune system to fight diseases, and help maintain homeostasis by stabilizing temperature and natural pH. In vertebrates, the lymphatic system is complementary to the circulatory system. The lymphatic system carries excess plasma (filtered from the circulatory system capillaries as interstitial fluid between cells) away from the body tissues via accessory routes that return excess fluid back to blood circulation as lymph. The lymphatic system is a subsystem that is essential for the functioning of the bloo Document 1::: Cardiovascular physiology is the study of the cardiovascular system, specifically addressing the physiology of the heart ("cardio") and blood vessels ("vascular"). These subjects are sometimes addressed separately, under the names cardiac physiology and circulatory physiology. Although the different aspects of cardiovascular physiology are closely interrelated, the subject is still usually divided into several subtopics. Heart Cardiac output (= heart rate * stroke volume. Can also be calculated with Fick principle, palpating method.) Stroke volume (= end-diastolic volume − end-systolic volume) Ejection fraction (= stroke volume / end-diastolic volume) Cardiac output is mathematically ` to systole Inotropic, chronotropic, and dromotropic states Cardiac input (= heart rate * suction volume Can be calculated by inverting terms in Fick principle) Suction volume (= end-systolic volume + end-diastolic volume) Injection fraction (=suction volume / end-systolic volume) Cardiac input is mathematically ` to diastole Electrical conduction system of the heart Electrocardiogram Cardiac marker Cardiac action potential Frank–Starling law of the heart Wiggers diagram Pressure volume diagram Regulation of blood pressure Baroreceptor Baroreflex Renin–angiotensin system Renin Angiotensin Juxtaglomerular apparatus Aortic body and carotid body Autoregulation Cerebral Autoregulation Hemodynamics Under most circumstances, the body attempts to maintain a steady mean arterial pressure. When there is a major and immediate decrease (such as that due to hemorrhage or standing up), the body can increase the following: Heart rate Total peripheral resistance (primarily due to vasoconstriction of arteries) Inotropic state In turn, this can have a significant impact upon several other variables: Stroke volume Cardiac output Pressure Pulse pressure (systolic pressure - diastolic pressure) Mean arterial pressure (usually approximated with diastolic pressure + Document 2::: In haemodynamics, the body must respond to physical activities, external temperature, and other factors by homeostatically adjusting its blood flow to deliver nutrients such as oxygen and glucose to stressed tissues and allow them to function. Haemodynamic response (HR) allows the rapid delivery of blood to active neuronal tissues. The brain consumes large amounts of energy but does not have a reservoir of stored energy substrates. Since higher processes in the brain occur almost constantly, cerebral blood flow is essential for the maintenance of neurons, astrocytes, and other cells of the brain. This coupling between neuronal activity and blood flow is also referred to as neurovascular coupling. Vascular anatomy overview In order to understand how blood is delivered to cranial tissues, it is important to understand the vascular anatomy of the space itself. Large cerebral arteries in the brain split into smaller arterioles, also known as pial arteries. These consist of endothelial cells and smooth muscle cells, and as these pial arteries further branch and run deeper into the brain, they associate with glial cells, namely astrocytes. The intracerebral arterioles and capillaries are unlike systemic arterioles and capillaries in that they do not readily allow substances to diffuse through them; they are connected by tight junctions in order to form the blood brain barrier (BBB). Endothelial cells, smooth muscle, neurons, astrocytes, and pericytes work together in the brain order to maintain the BBB while still delivering nutrients to tissues and adjusting blood flow in the intracranial space to maintain homeostasis. As they work as a functional neurovascular unit, alterations in their interactions at the cellular level can impair HR in the brain and lead to deviations in normal nervous function. Mechanisms Various cell types play a role in HR, including astrocytes, smooth muscle cells, endothelial cells of blood vessels, and pericytes. These cells control whether th Document 3::: Pathophysiology is a study which explains the function of the body as it relates to diseases and conditions. The pathophysiology of hypertension is an area which attempts to explain mechanistically the causes of hypertension, which is a chronic disease characterized by elevation of blood pressure. Hypertension can be classified by cause as either essential (also known as primary or idiopathic) or secondary. About 90–95% of hypertension is essential hypertension. Some authorities define essential hypertension as that which has no known explanation, while others define its cause as being due to overconsumption of sodium and underconsumption of potassium. Secondary hypertension indicates that the hypertension is a result of a specific underlying condition with a well-known mechanism, such as chronic kidney disease, narrowing of the aorta or kidney arteries, or endocrine disorders such as excess aldosterone, cortisol, or catecholamines. Persistent hypertension is a major risk factor for hypertensive heart disease, coronary artery disease, stroke, aortic aneurysm, peripheral artery disease, and chronic kidney disease. Cardiac output and peripheral resistance are the two determinants of arterial pressure. Cardiac output is determined by stroke volume and heart rate; stroke volume is related to myocardial contractility and to the size of the vascular compartment. Peripheral resistance is determined by functional and anatomic changes in small arteries and arterioles. Genetics Single gene mutations can cause Mendelian forms of high blood pressure; ten genes have been identified which cause these monogenic forms of hypertension. These mutations affect blood pressure by altering kidney salt handling. There is greater similarity in blood pressure within families than between families, which indicates a form of inheritance, and this is not due to shared environmental factors. With the aid of genetic analysis techniques, a statistically significant linkage of blood pressure to Document 4::: End organ damage usually refers to damage occurring in major organs fed by the circulatory system (heart, kidneys, brain, eyes) which can sustain damage due to uncontrolled hypertension, hypotension, or hypovolemia. Evidence of hypertensive damage In the context of hypertension, features include: Heart — evidence on electrocardiogram screening of the heart muscle thickening (but may also be seen on chest X-ray) suggesting left ventricular hypertrophy) or by echocardiography of less efficient function (left ventricular failure). Brain- hypertensive encephalopathy, hemorrhagic stroke, subarachnoid hemorrhage, confusion, loss of consciousness, eclampsia, seizures, or transient ischemic attack. Kidney — leakage of protein into the urine (albuminuria or proteinuria), or reduced renal function, hypertensive nephropathy, acute renal failure, or glomerulonephritis. Eye — evidence upon fundoscopic examination of hypertensive retinopathy, retinal hemorrhage, papilledema and blindness. Peripheral arteries — peripheral vascular disease and chronic lower limb ischemia. Evidence of shock In the context of poor end organ perfusion, features include: Kidney — poor urine output (less than 0.5 mL/kg), low glomerular filtration rate. Skin — pallor or mottled appearance, capillary refill > 2 secs, cool limbs. Brain — obtundation or disorientation to time, person, and place. The Glasgow Coma Scale may be used to quantify altered consciousness. Gut — absent bowel sounds, ileus The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What system plays a critical role in the regulation of vascular homeostasis? A. circulatory system B. nervous system C. renal system D. endocrine system Answer:
sciq-2384
multiple_choice
The light-sensing cells in the retina are called rods and what else?
[ "cones", "sensor cells", "stents", "light cells" ]
A
Relavent Documents: Document 0::: Rod cells are photoreceptor cells in the retina of the eye that can function in lower light better than the other type of visual photoreceptor, cone cells. Rods are usually found concentrated at the outer edges of the retina and are used in peripheral vision. On average, there are approximately 92 million rod cells (vs ~6 million cones) in the human retina. Rod cells are more sensitive than cone cells and are almost entirely responsible for night vision. However, rods have little role in color vision, which is the main reason why colors are much less apparent in dim light. Structure Rods are a little longer and leaner than cones but have the same basic structure. Opsin-containing disks lie at the end of the cell adjacent to the retinal pigment epithelium, which in turn is attached to the inside of the eye. The stacked-disc structure of the detector portion of the cell allows for very high efficiency. Rods are much more common than cones, with about 120 million rod cells compared to 6 to 7 million cone cells. Like cones, rod cells have a synaptic terminal, an inner segment, and an outer segment. The synaptic terminal forms a synapse with another neuron, usually a bipolar cell or a horizontal cell. The inner and outer segments are connected by a cilium, which lines the distal segment. The inner segment contains organelles and the cell's nucleus, while the rod outer segment (abbreviated to ROS), which is pointed toward the back of the eye, contains the light-absorbing materials. A human rod cell is about 2 microns in diameter and 100 microns long. Rods are not all morphologically the same; in mice, rods close to the outer plexiform synaptic layer display a reduced length due to a shortened synaptic terminal. Function Photoreception In vertebrates, activation of a photoreceptor cell is a hyperpolarization (inhibition) of the cell. When they are not being stimulated, such as in the dark, rod cells and cone cells depolarize and release a neurotransmitter spontan Document 1::: The elements composing the layer of rods and cones (Jacob's membrane) in the retina of the eye are of two kinds, rod cells and cone cells, the former being much more numerous than the latter except in the macula lutea. Jacob's membrane is named after Irish ophthalmologist Arthur Jacob, who was the first to describe this nervous layer of the retina. Document 2::: The posterior surfaces of the ciliary processes are covered by a bilaminar layer of black pigment cells, which is continued forward from the retina, and is named the pars ciliaris retinae. Document 3::: In the anatomy of the eye, the ganglion cell layer (ganglionic layer) is a layer of the retina that consists of retinal ganglion cells and displaced amacrine cells. The cells are somewhat flask-shaped; the rounded internal surface of each resting on the stratum opticum, and sending off an axon which is prolonged into it. From the opposite end numerous dendrites extend into the inner plexiform layer, where they branch and form flattened arborizations at different levels. The ganglion cells vary much in size, and the dendrites of the smaller ones as a rule arborize in the inner plexiform layer as soon as they enter it; while those of the larger cells ramify close to the inner nuclear layer. Document 4::: In the anatomy of the eye, the inner nuclear layer or layer of inner granules, of the retina, is made up of a number of closely packed cells, of which there are three varieties: bipolar cells, horizontal cells, and amacrine cells. Bipolar cells The bipolar cells, by far the most numerous, are round or oval in shape, and each is prolonged into an inner and an outer process. They are divisible into rod bipolars and cone bipolars. The inner processes of the rod bipolars run through the inner plexiform layer and arborize around the bodies of the cells of the ganglionic layer; their outer processes end in the outer plexiform layer in tufts of fibrils around the button-like ends of the inner processes of the rod granules. The inner processes of the cone bipolars ramify in the inner plexiform layer in contact with the dendrites of the ganglionic cells. Connection types Midget bipolars are linked to one cone while diffuse bipolars take groups of receptors. Diffuse bipolars can take signals from up to 50 rods or can be a flat cone form and take signals from seven cones. The bipolar cells corresponds to the intermediary cells between the touch and heat receptors on the skin and the medulla or spinal cord. Horizontal cells The horizontal cells lie in the outer part of the inner nuclear layer and possess somewhat flattened cell bodies. Their dendrites divide into numerous branches in the outer plexiform layer, while their axons run horizontally for some distance and finally ramify in the same layer. Amacrine cells The amacrine cells are placed in the inner part of the inner nuclear layer, and are so named because they have not yet been shown to possess axis-cylinder processes. Their dendrites undergo extensive ramification in the inner plexiform layer. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The light-sensing cells in the retina are called rods and what else? A. cones B. sensor cells C. stents D. light cells Answer:
sciq-2159
multiple_choice
What contains the spore-forming asci?
[ "the ascocarps", "flagella", "sporozoa", "mushroom cap" ]
A
Relavent Documents: Document 0::: Ascomycota is a phylum of the kingdom Fungi that, together with the Basidiomycota, forms the subkingdom Dikarya. Its members are commonly known as the sac fungi or ascomycetes. It is the largest phylum of Fungi, with over 64,000 species. The defining feature of this fungal group is the "ascus" (), a microscopic sexual structure in which nonmotile spores, called ascospores, are formed. However, some species of the Ascomycota are asexual, meaning that they do not have a sexual cycle and thus do not form asci or ascospores. Familiar examples of sac fungi include morels, truffles, brewers' and bakers' yeast, dead man's fingers, and cup fungi. The fungal symbionts in the majority of lichens (loosely termed "ascolichens") such as Cladonia belong to the Ascomycota. Ascomycota is a monophyletic group (it contains all descendants of one common ancestor). Previously placed in the Deuteromycota along with asexual species from other fungal taxa, asexual (or anamorphic) ascomycetes are now identified and classified based on morphological or physiological similarities to ascus-bearing taxa, and by phylogenetic analyses of DNA sequences. The ascomycetes are of particular use to humans as sources of medicinally important compounds, such as antibiotics, for fermenting bread, alcoholic beverages and cheese. Penicillium species on cheeses and those producing antibiotics for treating bacterial infectious diseases are examples of ascomycetes. Many ascomycetes are pathogens, both of animals, including humans, and of plants. Examples of ascomycetes that can cause infections in humans include Candida albicans, Aspergillus niger and several tens of species that cause skin infections. The many plant-pathogenic ascomycetes include apple scab, rice blast, the ergot fungi, black knot, and the powdery mildews. Another pathogenic ascomycete is Cordyceps. Cordyceps are parasites of insects and other arthropods. They are entomopathogenic fungi, which means the fungi kills or severely injures the Document 1::: An umbo is a raised area in the center of a mushroom cap. Caps that possess this feature are called umbonate. Umbos that are sharply pointed are called acute, while those that are more rounded are broadly umbonate. If the umbo is elongated, it is cuspidate, and if the umbo is sharply delineated but not elongated (somewhat resembling the shape of a human areola), it is called mammilate or papillate. Document 2::: Aeciospores are one of several different types of spores formed by rusts. They each have two nuclei and are typically seen in chain-like formations in the aecium. Document 3::: An ascus (; : asci) is the sexual spore-bearing cell produced in ascomycete fungi. Each ascus usually contains eight ascospores (or octad), produced by meiosis followed, in most species, by a mitotic cell division. However, asci in some genera or species can occur in numbers of one (e.g. Monosporascus cannonballus), two, four, or multiples of four. In a few cases, the ascospores can bud off conidia that may fill the asci (e.g. Tympanis) with hundreds of conidia, or the ascospores may fragment, e.g. some Cordyceps, also filling the asci with smaller cells. Ascospores are nonmotile, usually single celled, but not infrequently may be coenocytic (lacking a septum), and in some cases coenocytic in multiple planes. Mitotic divisions within the developing spores populate each resulting cell in septate ascospores with nuclei. The term ocular chamber, or oculus, refers to the epiplasm (the portion of cytoplasm not used in ascospore formation) that is surrounded by the "bourrelet" (the thickened tissue near the top of the ascus). Typically, a single ascus will contain eight ascospores (or octad). The eight spores are produced by meiosis followed by a mitotic division. Two meiotic divisions turn the original diploid zygote nucleus into four haploid ones. That is, the single original diploid cell from which the whole process begins contains two complete sets of chromosomes. In preparation for meiosis, all the DNA of both sets is duplicated, to make a total of four sets. The nucleus that contains the four sets divides twice, separating into four new nuclei – each of which has one complete set of chromosomes. Following this process, each of the four new nuclei duplicates its DNA and undergoes a division by mitosis. As a result, the ascus will contain four pairs of spores. Then the ascospores are released from the ascus. In many cases the asci are formed in a regular layer, the hymenium, in a fruiting body which is visible to the naked eye, here called an ascocarp or ascoma. Document 4::: The auxiliary cell is a spore-like structure that form within the fungal family Gigasporaceae (order Gigasporales). Auxiliary cells have thin cell walls, (spiny), papillate, knobby or sometimes smooth surfaces, and are formed from hyphae after spore germination before the formation of mycorrhizae, and then on the extraradical hyphae in the soil. They may not be 'cells' in the biological sense of the word, as they are structures found with coenocytic hyphae belonging to members of the phylum (division) Glomeromycota. Mostly they are known from members of the Gigasporaceae. Currently this family contains Gigaspora, Scutellospora and Racocetra, but there are other generic names that have not been widely accepted (Dentiscutata, Cetraspora, Fuscutata and Quatunica) — all of these form auxiliary cells. Members of the genus Pacispora (another genus in the Diversisporales) are also said to produce a kind of auxiliary cell but this requires further confirmation. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What contains the spore-forming asci? A. the ascocarps B. flagella C. sporozoa D. mushroom cap Answer:
ai2_arc-1073
multiple_choice
Scientists study fossils to learn about
[ "the composition of the Earth.", "patterns of crystal formation.", "physical properties of rocks.", "organisms from long ago." ]
D
Relavent Documents: Document 0::: A megabias, or a taphonomic megabias, is a large-scale pattern in the quality of the fossil record that affects paleobiologic analysis at provincial to global levels and at timescales usually exceeding ten million years. It can result from major shifts in intrinsic and extrinsic properties of organisms, including morphology and behaviour in relation to other organisms, or shifts in the global environment, which can cause secular or long-term cyclic changes in preservation. Introduction The fossil record exhibits bias at many different levels. At the most basic level, there is a global bias towards biomineralizing organisms, because biomineralized body parts are more resistant to decay and degradation. Due to the principle of uniformitarianism, there is a basic assumption in geology that the formation of rocks has occurred by the same naturalistic processes throughout history, and thus that the reach of such biases remains stable over time. A megabias is a direct contradiction of this, whereby changes occur in large scale paleobiologic patterns. This includes: Changes in diversity and community structure over tens of millions of years Variation in the quality of the fossil record between mass and background extinction times Variation among different climate states, biogeographic provinces, and tectonic settings. It is generally assumed that the quality of the fossil record decreases globally and across all taxa with increasing age, because more time is available for the diagenesis and destruction of both fossils and enclosing rocks, and thus the term "megabias" is usually used to refer to global trends in preservation. However, it has been noted that the fossil record of some taxa actually improves with greater age. Examples such as this, and other related paleobiological trends, clearly indicate the action of a megabias, but only within one particular taxon. Hence, it is necessary to define four classes of megabias related to the reach of the bias, first defined Document 1::: Paleobiology (or palaeobiology) is an interdisciplinary field that combines the methods and findings found in both the earth sciences and the life sciences. Paleobiology is not to be confused with geobiology, which focuses more on the interactions between the biosphere and the physical Earth. Paleobiological research uses biological field research of current biota and of fossils millions of years old to answer questions about the molecular evolution and the evolutionary history of life. In this scientific quest, macrofossils, microfossils and trace fossils are typically analyzed. However, the 21st-century biochemical analysis of DNA and RNA samples offers much promise, as does the biometric construction of phylogenetic trees. An investigator in this field is known as a paleobiologist. Important research areas Paleobotany applies the principles and methods of paleobiology to flora, especially green land plants, but also including the fungi and seaweeds (algae). See also mycology, phycology and dendrochronology. Paleozoology uses the methods and principles of paleobiology to understand fauna, both vertebrates and invertebrates. See also vertebrate and invertebrate paleontology, as well as paleoanthropology. Micropaleontology applies paleobiologic principles and methods to archaea, bacteria, protists and microscopic pollen/spores. See also microfossils and palynology. Paleovirology examines the evolutionary history of viruses on paleobiological timescales. Paleobiochemistry uses the methods and principles of organic chemistry to detect and analyze molecular-level evidence of ancient life, both microscopic and macroscopic. Paleoecology examines past ecosystems, climates, and geographies so as to better comprehend prehistoric life. Taphonomy analyzes the post-mortem history (for example, decay and decomposition) of an individual organism in order to gain insight on the behavior, death and environment of the fossilized organism. Paleoichnology analyzes the tracks, bo Document 2::: Paleoecology (also spelled palaeoecology) is the study of interactions between organisms and/or interactions between organisms and their environments across geologic timescales. As a discipline, paleoecology interacts with, depends on and informs a variety of fields including paleontology, ecology, climatology and biology. Paleoecology emerged from the field of paleontology in the 1950s, though paleontologists have conducted paleoecological studies since the creation of paleontology in the 1700s and 1800s. Combining the investigative approach of searching for fossils with the theoretical approach of Charles Darwin and Alexander von Humboldt, paleoecology began as paleontologists began examining both the ancient organisms they discovered and the reconstructed environments in which they lived. Visual depictions of past marine and terrestrial communities have been considered an early form of paleoecology. The term "paleo-ecology" was coined by Frederic Clements in 1916. Overview of paleoecological approaches Classic paleoecology uses data from fossils and subfossils to reconstruct the ecosystems of the past. It involves the study of fossil organisms and their associated remains (such as shells, teeth, pollen, and seeds), which can help in the interpretation of their life cycle, living interactions, natural environment, communities, and manner of death and burial. Such interpretations aid the reconstruction of past environments (i.e., paleoenvironments). Paleoecologists have studied the fossil record to try to clarify the relationship animals have to their environment, in part to help understand the current state of biodiversity. They have identified close links between vertebrate taxonomic and ecological diversity, that is, between the diversity of animals and the niches they occupy. Classical paleoecology is a primarily reductionist approach: scientists conduct detailed analysis of relatively small groups of organisms within shorter geologic timeframes. Evolutio Document 3::: Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals. Education Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered. Bachelor degree At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs. Pre-veterinary emphasis Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th Document 4::: The paleopedological record is, essentially, the fossil record of soils. The paleopedological record consists chiefly of paleosols buried by flood sediments, or preserved at geological unconformities, especially plateau escarpments or sides of river valleys. Other fossil soils occur in areas where volcanic activity has covered the ancient soils. Problems of recognition After burial, soil fossils tend to be altered by various chemical and physical processes. These include: Decomposition of organic matter that was once present in the old soil. This hinders the recognition of vegetation that was in the soil when it was present. Oxidation of iron from Fe2+ to Fe3+ by O2 as the former soil becomes dry and more oxygen enters the soil. Drying out of hydrous ferric oxides to anhydrous oxides - again due to the presence of more available O2 in the dry environment. The keys to recognising fossils of various soils include: Tubular structures that branch and thin irregularly downward or show the anatomy of fossilised root traces Gradational alteration down from a sharp lithological contact like that between land surface and soil horizons Complex patterns of cracks and mineral replacements like those of soil clods (peds) and planar cutans. Classification Soil fossils are usually classified by USDA soil taxonomy. With the exception of some exceedingly old soils which have a clayey, grey-green horizon that is quite unlike any present soil and clearly formed in the absence of O2, most fossil soils can be classified into one of the twelve orders recognised by this system. This is usually done by means of X-ray diffraction, which allows the various particles within the former soils to be analysed so that it can be seen to which order the soils correspond. Other methods for classifying soil fossils rely on geochemical analysis of the soil material, which allows the minerals in the soil to be identified. This is only useful where large amounts of the ancient soil are avai The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Scientists study fossils to learn about A. the composition of the Earth. B. patterns of crystal formation. C. physical properties of rocks. D. organisms from long ago. Answer:
sciq-9070
multiple_choice
What do you call a condition caused by mutations in one or more genes?
[ "mutation disorder", "evolutionary disorder", "intrinsic disorder", "genetic disorder" ]
D
Relavent Documents: Document 0::: In biology, and especially in genetics, a mutant is an organism or a new genetic character arising or resulting from an instance of mutation, which is generally an alteration of the DNA sequence of the genome or chromosome of an organism. It is a characteristic that would not be observed naturally in a specimen. The term mutant is also applied to a virus with an alteration in its nucleotide sequence whose genome is in the nuclear genome. The natural occurrence of genetic mutations is integral to the process of evolution. The study of mutants is an integral part of biology; by understanding the effect that a mutation in a gene has, it is possible to establish the normal function of that gene. Mutants arise by mutation Mutants arise by mutations occurring in pre-existing genomes as a result of errors of DNA replication or errors of DNA repair. Errors of replication often involve translesion synthesis by a DNA polymerase when it encounters and bypasses a damaged base in the template strand. A DNA damage is an abnormal chemical structure in DNA, such as a strand break or an oxidized base, whereas a mutation, by contrast, is a change in the sequence of standard base pairs. Errors of repair occur when repair processes inaccurately replace a damaged DNA sequence. The DNA repair process microhomology-mediated end joining is particularly error-prone. Etymology Although not all mutations have a noticeable phenotypic effect, the common usage of the word "mutant" is generally a pejorative term, only used for genetically or phenotypically noticeable mutations. Previously, people used the word "sport" (related to spurt) to refer to abnormal specimens. The scientific usage is broader, referring to any organism differing from the wild type. The word finds its origin in the Latin term mūtant- (stem of mūtāns), which means "to change". Mutants should not be confused with organisms born with developmental abnormalities, which are caused by errors during morphogenesis. In a devel Document 1::: A mutant protein is the protein product encoded by a gene with mutation. Mutated protein can have single amino acid change (minor, but still in many cases significant change leading to disease) or wide-range amino acid changes by e.g. truncation of C-terminus after introducing premature stop codon. See also Site-directed mutagenesis Phi value analysis missense mutation nonsense mutation point mutation frameshift mutation silent mutation single-nucleotide polymorphism Document 2::: A human disease modifier gene is a modifier gene that alters expression of a human gene at another locus that in turn causes a genetic disease. Whereas medical genetics has tended to distinguish between monogenic traits, governed by simple, Mendelian inheritance, and quantitative traits, with cumulative, multifactorial causes, increasing evidence suggests that human diseases exist on a continuous spectrum between the two. In the context of human disease, the terms 'modifier gene' and 'oligogene' have similar meanings, and characterization of a particular locus depends on characterization of the phenotype (effects) that it causes or modifies. The term 'modifier gene' may be taken to mean a gene in which genetic variation modifies the effects of mutation at a major locus, but has no effect on the normal condition, a condition not necessarily met for oligogenic interactions. The study of diseases that arise from interactions amongst genes is important for understanding the genetic basis of disease. For these purposes, the study of both modifier genes and oligogenes are useful. Theoretical origins Early theories that established the likely existence of modifier genes, and gene interactions as determinants of phenotypic variation, originated from theories of evolution, notably the evolution of the condition of allelic dominance. While many insightful early theorists contributed to current understanding of modifier genes, emphasized here are the theories of Ronald A. Fisher, Sewall Wright, and John B. S. Haldane. Fisher and Wright proposed somewhat opposing theories of the evolution of dominance in 1928 and 1931 respectively. Both sought to explain the observation that overwhelmingly, wild-type alleles were dominant to a majority of deleterious mutations. Their theories on the evolution of dominance had far-reaching implications for the fields of evolution, population and quantitative genetics, and biochemistry, and laid the early foundation of current understanding o Document 3::: In biology, the word gene (from , ; meaning generation or birth or gender) can have several different meanings. The Mendelian gene is a basic unit of heredity and the molecular gene is a sequence of nucleotides in DNA that is transcribed to produce a functional RNA. There are two types of molecular genes: protein-coding genes and non-coding genes. During gene expression, the DNA is first copied into RNA. The RNA can be directly functional or be the intermediate template for a protein that performs a function. (Some viruses have an RNA genome so the genes are made of RNA that may function directly without being copied into RNA. This is an exception to the strict definition of a gene described above.) The transmission of genes to an organism's offspring is the basis of the inheritance of phenotypic traits. These genes make up different DNA sequences called genotypes. Genotypes along with environmental and developmental factors determine what the phenotypes will be. Most biological traits are under the influence of polygenes (many different genes) as well as gene–environment interactions. Some genetic traits are instantly visible, such as eye color or the number of limbs, and some are not, such as blood type, the risk for specific diseases, or the thousands of basic biochemical processes that constitute life. A gene can acquire mutations in their sequence, leading to different variants, known as alleles, in the population. These alleles encode slightly different versions of a gene, which may cause different phenotypical traits. Usage of the term "having a gene" (e.g., "good genes," "hair color gene") typically refers to containing a different allele of the same, shared gene. Genes evolve due to natural selection / survival of the fittest and genetic drift of the alleles. The term gene was introduced by Danish botanist, plant physiologist and geneticist Wilhelm Johannsen in 1909. It is inspired by the Ancient Greek: γόνος, gonos, that means offspring and procreation Document 4::: The Encyclopedia of Genetics () is a print encyclopedia of genetics edited by Sydney Brenner and Jeffrey H. Miller. It has four volumes and 1,700 entries. It is available online at http://www.sciencedirect.com/science/referenceworks/9780122270802. Genetics Genetics literature The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do you call a condition caused by mutations in one or more genes? A. mutation disorder B. evolutionary disorder C. intrinsic disorder D. genetic disorder Answer:
sciq-3586
multiple_choice
Matching donor and recipient blood types is important because different blood types have different types of what?
[ "charges", "antigens", "antibodies", "coagulants" ]
C
Relavent Documents: Document 0::: A blood type (also known as a blood group) is a classification of blood, based on the presence and absence of antibodies and inherited antigenic substances on the surface of red blood cells (RBCs). These antigens may be proteins, carbohydrates, glycoproteins, or glycolipids, depending on the blood group system. Some of these antigens are also present on the surface of other types of cells of various tissues. Several of these red blood cell surface antigens can stem from one allele (or an alternative version of a gene) and collectively form a blood group system. Blood types are inherited and represent contributions from both parents of an individual. a total of 44 human blood group systems are recognized by the International Society of Blood Transfusion (ISBT). The two most important blood group systems are ABO and Rh; they determine someone's blood type (A, B, AB, and O, with + or − denoting RhD status) for suitability in blood transfusion. Blood group systems A complete blood type would describe each of the 44 blood groups, and an individual's blood type is one of many possible combinations of blood-group antigens. Almost always, an individual has the same blood group for life, but very rarely an individual's blood type changes through addition or suppression of an antigen in infection, malignancy, or autoimmune disease. Another more common cause of blood type change is a bone marrow transplant. Bone-marrow transplants are performed for many leukemias and lymphomas, among other diseases. If a person receives bone marrow from someone of a different ABO type (e.g., a type O patient receives a type A bone marrow), the patient's blood type should eventually become the donor's type, as the patient's hematopoietic stem cells (HSCs) are destroyed, either by ablation of the bone marrow or by the donor's T-cells. Once all the patient's original red blood cells have died, they will have been fully replaced by new cells derived from the donor HSCs. Provided the donor had Document 1::: Animal erythrocytes have cell surface antigens that undergo polymorphism and give rise to blood types. Antigens from the human ABO blood group system are also found in apes and Old World monkeys, and the types trace back to the origin of humanoids. Other animal blood sometimes agglutinates (to varying levels of intensity) with human blood group reagents, but the structure of the blood group antigens in animals is not always identical to those typically found in humans. The classification of most animal blood groups therefore uses different blood typing systems to those used for classification of human blood. Simian blood groups Two categories of blood groups, human-type and simian-type, have been found in apes and monkeys, and they can be tested by methods established for grouping human blood. Data is available on blood groups of common chimpanzees, baboons, and macaques. Rh blood group The Rh system is named after the rhesus monkey, following experiments by Karl Landsteiner and Alexander S. Wiener, which showed that rabbits, when immunised with rhesus monkey red cells, produce an antibody that also agglutinates the red blood cells of many humans. Chimpanzee and Old World monkey blood group systems Two complex chimpanzee blood group systems, V-A-B-D and R-C-E-F systems, proved to be counterparts of the human MNS and Rh blood group systems, respectively. Two blood group systems have been defined in Old World monkeys: the Drh system of macaques and the Bp system of baboons, both linked by at least one species shared by either of the blood group systems. Canine blood groups Over 13 canine blood groups have been described. Eight DEA (dog erythrocyte antigen) types are recognized as international standards. Of these DEA types, DEA 4 and DEA 6 appear on the red blood cells of ~98% of dogs. Dogs with only DEA 4 or DEA 6 can thus serve as blood donors for the majority of the canine population. Any of these DEA types may stimulate an immune response in a recipient of Document 2::: Immunohematology is a branch of hematology and transfusion medicine which studies antigen-antibody reactions and analogous phenomena as they relate to the pathogenesis and clinical manifestations of blood disorders. A person employed in this field is referred to as an immunohematologist. Their day-to-day duties include blood typing, cross-matching and antibody identification. Immunohematology and Transfusion Medicine is a medical post graduate specialty in many countries. The specialist Immunohematology and Transfusion Physician provides expert opinion for difficult transfusions, massive transfusions, incompatibility work up, therapeutic plasmapheresis, cellular therapy, irradiated blood therapy, leukoreduced and washed blood products, stem cell procedures, platelet rich plasma therapies, HLA and cord blood banking. Other research avenues are in the field of stem cell researches, regenerative medicine and cellular therapy. Immunohematology is one of the specialized branches of medical science. It deals with the concepts and clinical 2 techniques related to modern transfusion therapy. Efforts to save human lives by transfusing blood have been recorded for several centuries. The era of blood transfusion, however, really began when William Harvey described the circulation of blood in 1616. See also Clinical laboratory scientist Transfusion medicine Document 3::: Blood compatibility testing is conducted in a medical laboratory to identify potential incompatibilities between blood group systems in blood transfusion. It is also used to diagnose and prevent some complications of pregnancy that can occur when the baby has a different blood group from the mother. Blood compatibility testing includes blood typing, which detects the antigens on red blood cells that determine a person's blood type; testing for unexpected antibodies against blood group antigens (antibody screening and identification); and, in the case of blood transfusions, mixing the recipient's plasma with the donor's red blood cells to detect incompatibilities (crossmatching). Routine blood typing involves determining the ABO and RhD (Rh factor) type, and involves both identification of ABO antigens on red blood cells (forward grouping) and identification of ABO antibodies in the plasma (reverse grouping). Other blood group antigens may be tested for in specific clinical situations. Blood compatibility testing makes use of reactions between blood group antigens and antibodies—specifically the ability of antibodies to cause red blood cells to clump together when they bind to antigens on the cell surface, a phenomenon called agglutination. Techniques that rely on antigen-antibody reactions are termed serologic methods, and several such methods are available, ranging from manual testing using test tubes or slides to fully automated systems. Blood types can also be determined through genetic testing, which is used when conditions that interfere with serologic testing are present or when a high degree of accuracy in antigen identification is required. Several conditions can cause false or inconclusive results in blood compatibility testing. When these issues affect ABO typing, they are called ABO discrepancies. ABO discrepancies must be investigated and resolved before the person's blood type is reported. Other sources of error include the "weak D" phenomenon, in whi Document 4::: Tissue typing is a procedure in which the tissues of a prospective donor and recipient are tested for compatibility prior to transplantation. Mismatched donor and recipient tissues can lead to rejection of the tissues. There are multiple methods of tissue typing. Overview During tissue typing, an individual's human leukocyte antigens (HLA) are identified. HLA molecules are presented on the surface of cells and facilitate interactions between immune cells (such as dendritic cells and T cells) that lead to adaptive immune responses. If HLA from the donor is recognized by the recipient's immune system as different from the recipient's own HLA, an immune response against the donor tissues can be triggered. More specifically, HLA mismatches between organ donors and recipients can lead to the development of anti-HLA donor-specific antibodies (DSAs). DSAs are strongly associated with the rejection of donor tissues in the recipient, and their presence is considered an indicator of antibody-mediated rejection. When donor and recipient HLA are matched, donor tissues are significantly more likely to be accepted by the recipient's immune system. During tissue typing, a number of HLA genes should be typed in both the donor and recipient, including HLA Class I A, B, and C genes, as well as HLA Class II DRB1, DRB3, DRB4, DRB5, DQA1, DQB1, DPA1, and DPB1 genes. HLA typing is made more difficult by the fact that the HLA region is the most genetically variable region in the human genome. Methods of tissue typing One of the first methods of tissue typing was through serological typing. In this technique, a donor's blood cells are HLA typed by mixing them with serum containing anti-HLA antibodies. If the antibodies recognize their epitope on the donor's HLA then complement activation occurs leads to cell lysis and death, allowing the cells to take up a dye (trypan blue). This allows for identification of the cells' HLA based indirectly on the specificity of the known antibodies i The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Matching donor and recipient blood types is important because different blood types have different types of what? A. charges B. antigens C. antibodies D. coagulants Answer:
sciq-11520
multiple_choice
Alkenes can react with what to form alcohols?
[ "sugars", "water", "air", "proteins" ]
B
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 2::: The European Algae Biomass Association (EABA), established on 2 June 2009, is the European association representing both research and industry in the field of algae technologies. EABA was founded during its inaugural conference on 1–2 June 2009 at Villa La Pietra in Florence. The association is headquartered in Florence, Italy. History The first EABA's President, Prof. Dr. Mario Tredici, served a 2-year term since his election on 2 June 2009. The EABA Vice-presidents were Mr. Claudio Rochietta, (Oxem, Italy), Prof. Patrick Sorgeloos (University of Ghent, Belgium) and Mr. Marc Van Aken (SBAE Industries, Belgium). The EABA Executive Director was Mr. Raffaello Garofalo. EABA had 58 founding members and the EABA reached 79 members in 2011. The last election occurred on 3 December 2018 in Amsterdam. The EABA's President is Mr. Jean-Paul Cadoret (Algama / France). The EABA Vice-presidents are Prof. Dr. Sammy Boussiba (Ben-Gurion University of the Negev / Israel), Prof. Dr. Gabriel Acien (University of Almeria / Spain) and Dr. Alexandra Mosch (Germany). The EABA General Manager is Dr. Vítor Verdelho (A4F AlgaFuel, S.A. / Portugal) and Prof. Dr. Mario Tredici (University of Florence / Italy) is elected as Honorary President. Cooperation with other organisations ART Fuels Forum European Society of Biochemical Engineering Sciences Algae Biomass Organization Document 3::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 4::: Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered in both undergraduate as well postgraduate with specializations. Academic courses Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism. Example universities with CSE majors and departments APJ Abdul Kalam Technological University American International University-B The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Alkenes can react with what to form alcohols? A. sugars B. water C. air D. proteins Answer:
sciq-3512
multiple_choice
Where is chlorine gas produced?
[ "in the nucleus", "carbon cycle", "epidermis", "at the anode" ]
D
Relavent Documents: Document 0::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 1::: The European Algae Biomass Association (EABA), established on 2 June 2009, is the European association representing both research and industry in the field of algae technologies. EABA was founded during its inaugural conference on 1–2 June 2009 at Villa La Pietra in Florence. The association is headquartered in Florence, Italy. History The first EABA's President, Prof. Dr. Mario Tredici, served a 2-year term since his election on 2 June 2009. The EABA Vice-presidents were Mr. Claudio Rochietta, (Oxem, Italy), Prof. Patrick Sorgeloos (University of Ghent, Belgium) and Mr. Marc Van Aken (SBAE Industries, Belgium). The EABA Executive Director was Mr. Raffaello Garofalo. EABA had 58 founding members and the EABA reached 79 members in 2011. The last election occurred on 3 December 2018 in Amsterdam. The EABA's President is Mr. Jean-Paul Cadoret (Algama / France). The EABA Vice-presidents are Prof. Dr. Sammy Boussiba (Ben-Gurion University of the Negev / Israel), Prof. Dr. Gabriel Acien (University of Almeria / Spain) and Dr. Alexandra Mosch (Germany). The EABA General Manager is Dr. Vítor Verdelho (A4F AlgaFuel, S.A. / Portugal) and Prof. Dr. Mario Tredici (University of Florence / Italy) is elected as Honorary President. Cooperation with other organisations ART Fuels Forum European Society of Biochemical Engineering Sciences Algae Biomass Organization Document 2::: Chloragogen cells, also called y cells, are star-shaped cells in annelids involved with excretory functions and intermediary metabolism. These cells function similarly to the liver found in vertebrates. Chloragogen tissue is most extensively studied in earthworms. Structure and location These cells are derived from the inner coelomic epithelium and are present in the coelomic fluid of some annelids. They have characteristic vesicular bulging due to their function in storing and transporting substances, and are yellow due to the presence of cytosolic granules known as chloragosomes. Function The most understood function of chloragogen tissue is its function in the excretory system. The cells accumulate and excrete nitrogenous wastes and silicates. They are involved in the deamination of amino acids, synthesis of urea, storage of glycogen and toxin neutralization. Document 3::: The Bionic Leaf is a biomimetic system that gathers solar energy via photovoltaic cells that can be stored or used in a number of different functions. Bionic leaves can be composed of both synthetic (metals, ceramics, polymers, etc.) and organic materials (bacteria), or solely made of synthetic materials. The Bionic Leaf has the potential to be implemented in communities, such as urbanized areas to provide clean air as well as providing needed clean energy. History In 2009 at MIT, Daniel Nocera's lab first developed the "artificial leaf", a device made from silicon and an anode electrocatalyst for the oxidation of water, capable of splitting water into hydrogen and oxygen gases. In 2012, Nocera came to Harvard and The Silver Lab of Harvard Medical School joined Nocera’s team. Together the teams expanded the existing technology to create the Bionic Leaf. It merged the concept of the artificial leaf with genetically engineered bacteria that feed on the hydrogen and convert CO2 in the air into alcohol fuels or chemicals. The first version of the teams Bionic Leaf was created in 2015 but the catalyst used was harmful to the bacteria. In 2016, a new catalyst was designed to solve this issue, named the "Bionic Leaf 2.0". Other versions of artificial leaves have been developed by the California Institute of Technology and the Joint Center for Artificial Photosynthesis, the University of Waterloo, and the University of Cambridge. Mechanics Photosynthesis In natural photosynthesis, photosynthetic organisms produce energy-rich organic molecules from water and carbon dioxide by using solar radiation. Therefore, the process of photosynthesis removes carbon dioxide, a greenhouse gas, from the air. Artificial photosynthesis, as performed by the Bionic Leaf, is approximately 10 times more efficient than natural photosynthesis. Using a catalyst, the Bionic Leaf can remove excess carbon dioxide in the air and convert that to useful alcohol fuels, like isopropanol and isobutan Document 4::: The Inverness Campus is an area in Inverness, Scotland. 5.5 hectares of the site have been designated as an enterprise area for life sciences by the Scottish Government. This designation is intended to encourage research and development in the field of life sciences, by providing incentives to locate at the site. The enterprise area is part of a larger site, over 200 acres, which will house Inverness College, Scotland's Rural College (SRUC), the University of the Highlands and Islands, a health science centre and sports and other community facilities. The purpose built research hub will provide space for up to 30 staff and researchers, allowing better collaboration. The Highland Science Academy will be located on the site, a collaboration formed by Highland Council, employers and public bodies. The academy will be aimed towards assisting young people to gain the necessary skills to work in the energy, engineering and life sciences sectors. History The site was identified in 2006. Work started to develop the infrastructure on the site in early 2012. A virtual tour was made available in October 2013 to help mark Doors Open Day. The construction had reached halfway stage in May 2014, meaning that it is on track to open doors to receive its first students in August 2015. In May 2014, work was due to commence on a building designed to provide office space and laboratories as part of the campus's "life science" sector. Morrison Construction have been appointed to undertake the building work. Scotland's Rural College (SRUC) will be able to relocate their Inverness-based activities to the Campus. SRUC's research centre for Comparative Epidemiology and Medicine, and Agricultural Business Consultancy services could co-locate with UHI where their activities have complementary themes. By the start of 2017, there were more than 600 people working at the site. In June 2021, a new bridge opened connecting Inverness Campus to Inverness Shopping Park. It crosses the Aberdeen The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Where is chlorine gas produced? A. in the nucleus B. carbon cycle C. epidermis D. at the anode Answer:
ai2_arc-830
multiple_choice
For a class project, students converted the daily temperatures for the past month from Fahrenheit to Celsius. What is the clearest way for the students to present the information?
[ "table", "formula", "pie chart", "line graph" ]
A
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: In mathematics education, a representation is a way of encoding an idea or a relationship, and can be both internal (e.g., mental construct) and external (e.g., graph). Thus multiple representations are ways to symbolize, to describe and to refer to the same mathematical entity. They are used to understand, to develop, and to communicate different mathematical features of the same object or operation, as well as connections between different properties. Multiple representations include graphs and diagrams, tables and grids, formulas, symbols, words, gestures, software code, videos, concrete models, physical and virtual manipulatives, pictures, and sounds. Representations are thinking tools for doing mathematics. Higher-order thinking The use of multiple representations supports and requires tasks that involve decision-making and other problem-solving skills. The choice of which representation to use, the task of making representations given other representations, and the understanding of how changes in one representation affect others are examples of such mathematically sophisticated activities. Estimation, another complex task, can strongly benefit from multiple representations Curricula that support starting from conceptual understanding, then developing procedural fluency, for example, AIMS Foundation Activities, frequently use multiple representations. Supporting student use of multiple representations may lead to more open-ended problems, or at least accepting multiple methods of solutions and forms of answers. Project-based learning units, such as WebQuests, typically call for several representations. Motivation Some representations, such as pictures, videos and manipulatives, can motivate because of their richness, possibilities of play, use of technologies, or connections with interesting areas of life. Tasks that involve multiple representations can sustain intrinsic motivation in mathematics, by supporting higher-order thinking and problem solving. Document 2::: The Texas Math and Science Coaches Association or TMSCA is an organization for coaches of academic University Interscholastic League teams in Texas middle schools and high schools, specifically those that compete in mathematics and science-related tests. Events There are four events in the TMSCA at both the middle and high school level: Number Sense, General Mathematics, Calculator Applications, and General Science. Number Sense is an 80-question exam that students are given only 10 minutes to solve. Additionally, no scratch work or paper calculations are allowed. These questions range from simple calculations such as 99+98 to more complicated operations such as 1001×1938. Each calculation is able to be done with a certain trick or shortcut that makes the calculations easier. The high school exam includes calculus and other difficult topics in the questions also with the same rules applied as to the middle school version. It is well known that the grading for this event is particularly stringent as errors such as writing over a line or crossing out potential answers are considered as incorrect answers. General Mathematics is a 50-question exam that students are given only 40 minutes to solve. These problems are usually more challenging than questions on the Number Sense test, and the General Mathematics word problems take more thinking to figure out. Every problem correct is worth 5 points, and for every problem incorrect, 2 points are deducted. Tiebreakers are determined by the person that misses the first problem and by percent accuracy. Calculator Applications is an 80-question exam that students are given only 30 minutes to solve. This test requires practice on the calculator, knowledge of a few crucial formulas, and much speed and intensity. Memorizing formulas, tips, and tricks will not be enough. In this event, plenty of practice is necessary in order to master the locations of the keys and develop the speed necessary. All correct questions are worth 5 Document 3::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 4::: Advanced Placement (AP) Calculus (also known as AP Calc, Calc AB / Calc BC or simply AB / BC) is a set of two distinct Advanced Placement calculus courses and exams offered by the American nonprofit organization College Board. AP Calculus AB covers basic introductions to limits, derivatives, and integrals. AP Calculus BC covers all AP Calculus AB topics plus additional topics (including integration by parts, Taylor series, parametric equations, vector calculus, and polar coordinate functions). AP Calculus AB AP Calculus AB is an Advanced Placement calculus course. It is traditionally taken after precalculus and is the first calculus course offered at most schools except for possibly a regular calculus class. The Pre-Advanced Placement pathway for math helps prepare students for further Advanced Placement classes and exams. Purpose According to the College Board: Topic outline The material includes the study and application of differentiation and integration, and graphical analysis including limits, asymptotes, and continuity. An AP Calculus AB course is typically equivalent to one semester of college calculus. Analysis of graphs (predicting and explaining behavior) Limits of functions (one and two sided) Asymptotic and unbounded behavior Continuity Derivatives Concept At a point As a function Applications Higher order derivatives Techniques Integrals Interpretations Properties Applications Techniques Numerical approximations Fundamental theorem of calculus Antidifferentiation L'Hôpital's rule Separable differential equations AP Calculus BC AP Calculus BC is equivalent to a full year regular college course, covering both Calculus I and II. After passing the exam, students may move on to Calculus III (Multivariable Calculus). Purpose According to the College Board, Topic outline AP Calculus BC includes all of the topics covered in AP Calculus AB, as well as the following: Convergence tests for series Taylor series Parametric equations Polar functions (inclu The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. For a class project, students converted the daily temperatures for the past month from Fahrenheit to Celsius. What is the clearest way for the students to present the information? A. table B. formula C. pie chart D. line graph Answer:
sciq-2995
multiple_choice
What encloses the genetic material of the virus?
[ "the capsid", "mitochondria", "the spindle", "nuclei" ]
A
Relavent Documents: Document 0::: A virus is a submicroscopic infectious agent that replicates only inside the living cells of an organism. Viruses infect all life forms, from animals and plants to microorganisms, including bacteria and archaea. Viruses are found in almost every ecosystem on Earth and are the most numerous type of biological entity. Since Dmitri Ivanovsky's 1892 article describing a non-bacterial pathogen infecting tobacco plants and the discovery of the tobacco mosaic virus by Martinus Beijerinck in 1898, more than 11,000 of the millions of virus species have been described in detail. The study of viruses is known as virology, a subspeciality of microbiology. When infected, a host cell is often forced to rapidly produce thousands of copies of the original virus. When not inside an infected cell or in the process of infecting a cell, viruses exist in the form of independent viral particles, or virions, consisting of (i) genetic material, i.e., long molecules of DNA or RNA that encode the structure of the proteins by which the virus acts; (ii) a protein coat, the capsid, which surrounds and protects the genetic material; and in some cases (iii) an outside envelope of lipids. The shapes of these virus particles range from simple helical and icosahedral forms to more complex structures. Most virus species have virions too small to be seen with an optical microscope and are one-hundredth the size of most bacteria. The origins of viruses in the evolutionary history of life are unclear: some may have evolved from plasmids—pieces of DNA that can move between cells—while others may have evolved from bacteria. In evolution, viruses are an important means of horizontal gene transfer, which increases genetic diversity in a way analogous to sexual reproduction. Viruses are considered by some biologists to be a life form, because they carry genetic material, reproduce, and evolve through natural selection, although they lack the key characteristics, such as cell structure, that are generally Document 1::: The capsomere is a subunit of the capsid, an outer covering of protein that protects the genetic material of a virus. Capsomeres self-assemble to form the capsid. Subunits called protomers aggregate to form capsomeres. Various arrangements of capsomeres are: 1) Icosahedral, 2) Helical, and 3) Complex. 1) Icosahedral- An icosahedron is a polyhedron with 12 vertices and 20 faces. Two types of capsomeres constitute the icosahedral capsid: pentagonal (pentons) at the vertices and hexagonal (hexons) at the faces. There are always twelve pentons, but the number of hexons varies among virus groups. In electron micrographs, capsomeres are recognized as regularly spaced rings with a central hole. 2) Helical- The protomers are not grouped in capsomeres, but are bound to each other so as to form a ribbon-like structure. This structure folds into a helix because the protomers are thicker at one end than at the other. The diameter of the helical capsid is determined by characteristics of its protomers, while its length is determined by the length of the nucleic acid it encloses. 3) Complex- e.g., that exhibited by poxvirus and rhabdovirus. This group comprises all those viruses which do not fit into either of the above two groups. When the viral particle has entered a host cell, the host cellular enzymes digest the capsid and its constituent capsomeres, thereby exposing the naked genetic material (DNA/RNA) of the virus, which subsequently enters the replication cycle. The capsomeres protect against physical, chemical, and enzymatic damage and are multiply redundant; having a few protein subunits that are repeated. This is because the viral genome is being as economic as possible by only needing a few protein codons to make a large structure. One of the major functions of a capsid is to introduce the enclosed viral genome into host cells by adsorbing readily to host cell surfaces. Document 2::: The term viral protein refers to both the products of the genome of a virus and any host proteins incorporated into the viral particle. Viral proteins are grouped according to their functions, and groups of viral proteins include structural proteins, nonstructural proteins, regulatory proteins, and accessory proteins. Viruses are non-living and do not have the means to reproduce on their own, instead depending on their host cell's machinery to do this. Thus, viruses do not code for most of the proteins required for their replication and the translation of their mRNA into viral proteins, but use proteins encoded by the host cell for this purpose. Viral structural proteins Most viral structural proteins are components for the capsid and the envelope of the virus. Capsid The genetic material of a virus is stored within a viral protein structure called the capsid. The capsid is a "shield" that protects the viral nucleic acids from getting degraded by host enzymes or other types of pesticides or pestilences. It also functions to attach the virion to its host, and enable the virion to penetrate the host cell membrane. Many copies of a single viral protein or a number of different viral proteins make up the capsid, and each of these viral proteins are coded for by one gene from the viral genome. The structure of the capsid allows the virus to use a small number of viral genes to make a large capsid. Several protomers, oligomeric (viral) protein subunits, combine to form capsomeres, and capsomeres come together to form the capsid. Capsomeres can arrange into an icosahedral, helical, or complex capsid, but in many viruses, such as the herpes simplex virus, an icosahedral capsid is assembled. Three asymmetric and nonidentical viral protein units make up each of the twenty identical triangular faces in the icosahedral capsid. Viral envelope The capsid of some viruses are enclosed in a membrane called the viral envelope. In most cases, the viral envelope is obtained by Document 3::: Virophysics is a branch of biophysics in which the theoretical concepts and experimental techniques of physics are applied to study the mechanics and dynamics driving the interactions between virions and cells. Overview Research in virophysics typically focuses on resolving the physical structure and structural properties of viruses, the dynamics of their assembly and disassembly, their population kinetics over the course of an infection, and the emergence and evolution of various strains. The common aim of these efforts is to establish a set of models (expressions or laws) that quantitatively describe the details of all processes involved in viral infections with reliable predictive power. Having such a quantitative understanding of viruses would not only rationalize the development of strategies to prevent, guide, or control the course of viral infections, but could also be used to exploit virus processes and put virus to work in areas such as nanosciences, materials, and biotechnologies. Traditionally, in vivo and in vitro experimentation has been the only way to study viral infections. This approach for deriving knowledge based solely on experimental observations relies on common-sense assumptions (e.g., a higher virus count means a fitter virus). These assumptions often go untested due to difficulties controlling individual components of these complex systems without affecting others. The use of mathematical models and computer simulations to describe such systems, however, makes it possible to deconstruct an experimental system into individual components and determine how the pieces combine to create the infection we observe. Virophysics has large overlaps with other fields. For example, the modelling of infectious disease dynamics is a popular research topic in mathematics, notably in applied mathematics or mathematical biology. While most modelling efforts in mathematics have focused on elucidating the dynamics of spread of infectious diseases at an epid Document 4::: A genetically modified virus is a virus that has been altered or generated using biotechnology methods, and remains capable of infection. Genetic modification involves the directed insertion, deletion, artificial synthesis or change of nucleotide bases in viral genomes. Genetically modified viruses are mostly generated by the insertion of foreign genes intro viral genomes for the purposes of biomedical, agricultural, bio-control, or technological objectives. The terms genetically modified virus and genetically engineered virus are used synonymously. General usage Genetically modified viruses are generated through genetic modification, which involves the directed insertion, deletion, artificial synthesis, or change of nucleotide sequences in viral genomes using biotechnological methods. While most dsDNA viruses have single monopartite genomes, many RNA viruses have multipartite genomes, it is not necessary for all parts of a viral genome to be genetically modified for the virus to be considered a genetically modified virus. Infectious viruses capable of infection that are generated through artificial gene synthesis of all, or part of their genomes (for example based on inferred historical sequences) may also be considered as genetically modified viruses. Viruses that are changed solely through the action of spontaneous mutations, recombination or reassortment events (even in experimental settings), are not generally considered to be genetically modified viruses. Viruses are generally modified so they can be used as vectors for inserting new genetic information into a host organism or altering its preexisting genetic material. This can be achieved in at least three processes : Integration of all, or parts, of a viral genome into the host's genome (e.g. into its chromosomes). When the whole genetically modified viral genome is integrated it is then referred to as a genetically modified provirus. Where DNA or RNA which that has been packaged as part of a virus part The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What encloses the genetic material of the virus? A. the capsid B. mitochondria C. the spindle D. nuclei Answer:
sciq-5169
multiple_choice
Higher pressures increase the solubility of what?
[ "molecules", "bases", "fuels", "gases" ]
D
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 2::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 3::: In chemistry, solubility is the ability of a substance, the solute, to form a solution with another substance, the solvent. Insolubility is the opposite property, the inability of the solute to form such a solution. The extent of the solubility of a substance in a specific solvent is generally measured as the concentration of the solute in a saturated solution, one in which no more solute can be dissolved. At this point, the two substances are said to be at the solubility equilibrium. For some solutes and solvents, there may be no such limit, in which case the two substances are said to be "miscible in all proportions" (or just "miscible"). The solute can be a solid, a liquid, or a gas, while the solvent is usually solid or liquid. Both may be pure substances, or may themselves be solutions. Gases are always miscible in all proportions, except in very extreme situations, and a solid or liquid can be "dissolved" in a gas only by passing into the gaseous state first. The solubility mainly depends on the composition of solute and solvent (including their pH and the presence of other dissolved substances) as well as on temperature and pressure. The dependency can often be explained in terms of interactions between the particles (atoms, molecules, or ions) of the two substances, and of thermodynamic concepts such as enthalpy and entropy. Under certain conditions, the concentration of the solute can exceed its usual solubility limit. The result is a supersaturated solution, which is metastable and will rapidly exclude the excess solute if a suitable nucleation site appears. The concept of solubility does not apply when there is an irreversible chemical reaction between the two substances, such as the reaction of calcium hydroxide with hydrochloric acid; even though one might say, informally, that one "dissolved" the other. The solubility is also not the same as the rate of solution, which is how fast a solid solute dissolves in a liquid solvent. This property de Document 4::: At equilibrium, the relationship between water content and equilibrium relative humidity of a material can be displayed graphically by a curve, the so-called moisture sorption isotherm. For each humidity value, a sorption isotherm indicates the corresponding water content value at a given, constant temperature. If the composition or quality of the material changes, then its sorption behaviour also changes. Because of the complexity of sorption process the isotherms cannot be determined explicitly by calculation, but must be recorded experimentally for each product. The relationship between water content and water activity (aw) is complex. An increase in aw is usually accompanied by an increase in water content, but in a non-linear fashion. This relationship between water activity and moisture content at a given temperature is called the moisture sorption isotherm. These curves are determined experimentally and constitute the fingerprint of a food system. BET theory (Brunauer-Emmett-Teller) provides a calculation to describe the physical adsorption of gas molecules on a solid surface. Because of the complexity of the process, these calculations are only moderately successful; however, Stephen Brunauer was able to classify sorption isotherms into five generalized shapes as shown in Figure 2. He found that Type II and Type III isotherms require highly porous materials or desiccants, with first monolayer adsorption, followed by multilayer adsorption and finally leading to capillary condensation, explaining these materials high moisture capacity at high relative humidity. Care must be used in extracting data from isotherms, as the representation for each axis may vary in its designation. Brunauer provided the vertical axis as moles of gas adsorbed divided by the moles of the dry material, and on the horizontal axis he used the ratio of partial pressure of the gas just over the sample, divided by its partial pressure at saturation. More modern isotherms showing the The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Higher pressures increase the solubility of what? A. molecules B. bases C. fuels D. gases Answer:
sciq-864
multiple_choice
What are created based upon the loss or gain of electrons?
[ "crystals", "atoms", "hydrogens", "ions" ]
D
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Secondary electrons are electrons generated as ionization products. They are called 'secondary' because they are generated by other radiation (the primary radiation). This radiation can be in the form of ions, electrons, or photons with sufficiently high energy, i.e. exceeding the ionization potential. Photoelectrons can be considered an example of secondary electrons where the primary radiation are photons; in some discussions photoelectrons with higher energy (>50 eV) are still considered "primary" while the electrons freed by the photoelectrons are "secondary". Applications Secondary electrons are also the main means of viewing images in the scanning electron microscope (SEM). The range of secondary electrons depends on the energy. Plotting the inelastic mean free path as a function of energy often shows characteristics of the "universal curve" familiar to electron spectroscopists and surface analysts. This distance is on the order of a few nanometers in metals and tens of nanometers in insulators. This small distance allows such fine resolution to be achieved in the SEM. For SiO2, for a primary electron energy of 100 eV, the secondary electron range is up to 20 nm from the point of incidence. See also Delta ray Everhart-Thornley detector Document 2::: The electric dipole moment is a measure of the separation of positive and negative electrical charges within a system, that is, a measure of the system's overall polarity. The SI unit for electric dipole moment is the coulomb-meter (C⋅m). The debye (D) is another unit of measurement used in atomic physics and chemistry. Theoretically, an electric dipole is defined by the first-order term of the multipole expansion; it consists of two equal and opposite charges that are infinitesimally close together, although real dipoles have separated charge. Elementary definition Often in physics the dimensions of a massive object can be ignored and can be treated as a pointlike object, i.e. a point particle. Point particles with electric charge are referred to as point charges. Two point charges, one with charge and the other one with charge separated by a distance , constitute an electric dipole (a simple case of an electric multipole). For this case, the electric dipole moment has a magnitude and is directed from the negative charge to the positive one. Some authors may split in half and use since this quantity is the distance between either charge and the center of the dipole, leading to a factor of two in the definition. A stronger mathematical definition is to use vector algebra, since a quantity with magnitude and direction, like the dipole moment of two point charges, can be expressed in vector form where is the displacement vector pointing from the negative charge to the positive charge. The electric dipole moment vector also points from the negative charge to the positive charge. With this definition the dipole direction tends to align itself with an external electric field (and note that the electric flux lines produced by the charges of the dipole itself, which point from positive charge to negative charge then tend to oppose the flux lines of the external field). Note that this sign convention is used in physics, while the opposite sign convention for th Document 3::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with Document 4::: In physics, a charge carrier is a particle or quasiparticle that is free to move, carrying an electric charge, especially the particles that carry electric charges in electrical conductors. Examples are electrons, ions and holes. The term is used most commonly in solid state physics. In a conducting medium, an electric field can exert force on these free particles, causing a net motion of the particles through the medium; this is what constitutes an electric current. The electron and the proton are the elementary charge carriers, each carrying one elementary charge (e), of the same magnitude and opposite sign. In conductors In conducting media, particles serve to carry charge: In many metals, the charge carriers are electrons. One or two of the valence electrons from each atom are able to move about freely within the crystal structure of the metal. The free electrons are referred to as conduction electrons, and the cloud of free electrons is called a Fermi gas. Many metals have electron and hole bands. In some, the majority carriers are holes. In electrolytes, such as salt water, the charge carriers are ions, which are atoms or molecules that have gained or lost electrons so they are electrically charged. Atoms that have gained electrons so they are negatively charged are called anions, atoms that have lost electrons so they are positively charged are called cations. Cations and anions of the dissociated liquid also serve as charge carriers in melted ionic solids (see e.g. the Hall–Héroult process for an example of electrolysis of a melted ionic solid). Proton conductors are electrolytic conductors employing positive hydrogen ions as carriers. In a plasma, an electrically charged gas which is found in electric arcs through air, neon signs, and the sun and stars, the electrons and cations of ionized gas act as charge carriers. In a vacuum, free electrons can act as charge carriers. In the electronic component known as the vacuum tube (also called valve), the mobil The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are created based upon the loss or gain of electrons? A. crystals B. atoms C. hydrogens D. ions Answer:
sciq-6450
multiple_choice
How are layers of the atmosphere divided?
[ "temperature gradients", "color gradients", "density gradients", "air gradients" ]
A
Relavent Documents: Document 0::: Atmospheric temperature is a measure of temperature at different levels of the Earth's atmosphere. It is governed by many factors, including incoming solar radiation, humidity and altitude. When discussing surface air temperature, the annual atmospheric temperature range at any geographical location depends largely upon the type of biome, as measured by the Köppen climate classification Temperature versus altitude Temperature varies greatly at different heights relative to Earth's surface and this variation in temperature characterizes the four layers that exist in the atmosphere. These layers include the troposphere, stratosphere, mesosphere, and thermosphere. The troposphere is the lowest of the four layers, extending from the surface of the Earth to about into the atmosphere where the tropopause (the boundary between the troposphere stratosphere) is located. The width of the troposphere can vary depending on latitude, for example, the troposphere is thicker in the tropics (about ) because the tropics are generally warmer, and thinner at the poles (about ) because the poles are colder. Temperatures in the atmosphere decrease with height at an average rate of 6.5°C (11.7°F) per kilometer. Because the troposphere experiences its warmest temperatures closer to Earth's surface, there is great vertical movement of heat and water vapour, causing turbulence. This turbulence, in conjunction with the presence of water vapour, is the reason that weather occurs within the troposphere. Following the tropopause is the stratosphere. This layer extends from the tropopause to the stratopause which is located at an altitude of about . Temperatures remain constant with height from the tropopause to an altitude of , after which they start to increase with height. This happening is referred to as an inversion and It is because of this inversion that the stratosphere is not characterised as turbulent. The stratosphere receives its warmth from the sun and the ozone layer which ab Document 1::: Aeronomy is the scientific study of the upper atmosphere of the Earth and corresponding regions of the atmospheres of other planets. It is a branch of both atmospheric chemistry and atmospheric physics. Scientists specializing in aeronomy, known as aeronomers, study the motions and chemical composition and properties of the Earth's upper atmosphere and regions of the atmospheres of other planets that correspond to it, as well as the interaction between upper atmospheres and the space environment. In atmospheric regions aeronomers study, chemical dissociation and ionization are important phenomena. History The mathematician Sydney Chapman introduced the term aeronomy to describe the study of the Earth's upper atmosphere in 1946 in a letter to the editor of Nature entitled "Some Thoughts on Nomenclature." The term became official in 1954 when the International Union of Geodesy and Geophysics adopted it. "Aeronomy" later also began to refer to the study of the corresponding regions of the atmospheres of other planets. Branches Aeronomy can be divided into three main branches: terrestrial aeronomy, planetary aeronomy, and comparative aeronomy. Terrestrial aeronomy Terrestrial aeronomy focuses on the Earth's upper atmosphere, which extends from the stratopause to the atmosphere's boundary with outer space and is defined as consisting of the mesosphere, thermosphere, and exosphere and their ionized component, the ionosphere. Terrestrial aeronomy contrasts with meteorology, which is the scientific study of the Earth's lower atmosphere, defined as the troposphere and stratosphere. Although terrestrial aeronomy and meteorology once were completely separate fields of scientific study, cooperation between terrestrial aeronomers and meteorologists has grown as discoveries made since the early 1990s have demonstrated that the upper and lower atmospheres have an impact on one another's physics, chemistry, and biology. Terrestrial aeronomers study atmospheric tides and upper- Document 2::: Biometeorology is the interdisciplinary field of science that studies the interactions between the biosphere and the Earth's atmosphere on time scales of the order of seasons or shorter (in contrast with bioclimatology). Examples of relevant processes Weather events influence biological processes on short time scales. For instance, as the Sun rises above the horizon in the morning, light levels become sufficient for the process of photosynthesis to take place in plant leaves. Later on, during the day, air temperature and humidity may induce the partial or total closure of the stomata, a typical response of many plants to limit the loss of water through transpiration. More generally, the daily evolution of meteorological variables controls the circadian rhythm of plants and animals alike. Living organisms, for their part, can collectively affect weather patterns. The rate of evapotranspiration of forests, or of any large vegetated area for that matter, contributes to the release of water vapor in the atmosphere. This local, relatively fast and continuous process may contribute significantly to the persistence of precipitations in a given area. As another example, the wilting of plants results in definite changes in leaf angle distribution and therefore modifies the rates of reflection, transmission and absorption of solar light in these plants. That, in turn, changes the albedo of the ecosystem as well as the relative importance of the sensible and latent heat fluxes from the surface to the atmosphere. For an example in oceanography, consider the release of dimethyl sulfide by biological activity in sea water and its impact on atmospheric aerosols. Human biometeorology The methods and measurements traditionally used in biometeorology are not different when applied to study the interactions between human bodies and the atmosphere, but some aspects or applications may have been explored more extensively. For instance, wind chill has been investigated to determine th Document 3::: Atmospheric optical phenomena include: Afterglow Airglow Alexander's band, the dark region between the two bows of a double rainbow. Alpenglow Anthelion Anticrepuscular rays Aurora Auroral light (northern and southern lights, aurora borealis and aurora australis) Belt of Venus Brocken Spectre Circumhorizontal arc Circumzenithal arc Cloud iridescence Crepuscular rays Earth's shadow Earthquake lights Glories Green flash Halos, of Sun or Moon, including sun dogs Haze Heiligenschein or halo effect, partly caused by the opposition effect Ice blink Light pillar Lightning Mirages (including Fata Morgana) Monochrome Rainbow Moon dog Moonbow Nacreous cloud/Polar stratospheric cloud Rainbow Subsun Sun dog Tangent arc Tyndall effect Upper-atmospheric lightning, including red sprites, Blue jets, and ELVES Water sky See also Document 4::: ARTS (Atmospheric Radiative Transfer Simulator) is a widely used<ref name="garlic"></ref> atmospheric radiative transfer simulator for infrared, microwave, and sub-millimeter wavelengths.<ref name="paper"></ref> While the model is developed by a community, core development is done by the University of Hamburg and Chalmers University, with previous participation from Luleå University of Technology and University of Bremen. Whereas most radiative transfer models are developed for a specific instrument, ARTS is one of few models that aims to be generically applicable.<ref name="burrows"></ref> It is designed from basic physical principles and has been used in a wide range of situations. It supports fully polarised radiative transfer calculations in clear-sky or cloudy conditions in 1-D, 2-D, or 3-D geometries,<ref name="herbin"></ref> including the calculations of Jacobians. Cloudy simulations support liquid and ice clouds with particles of varying sizes and shapes<ref name="esa"></ref> and supports multiple-scattering simulations.<ref name="griessbach"></ref> Absorption is calculated line-by-line, with continua<ref name="matz"></ref> or using a lookup table.<ref name="lut"></ref> The user programs ARTS by the means of a simple scripting language.<ref name="paper" /> ARTS is a physics-based model and therefore much slower than many radiative transfer models that are used operationally and is currently unable to simulate solar, visible, or shortwave radiation. ARTS has been used at the University of Maryland to assess radiosonde humidity measurements,<ref name="moradi"></ref> by the University of Bern for water vapour retrievals,<ref name="tschanz"></ref> by the Norwegian University of Science and Technology for Carbon monoxide retrievals above Antarctica,<ref name="co"></ref> and by the Japanese space agency JAXA to aid the development of retrievals from JEM/SMILES,<ref name="jaxa"></ref> among others. According to the ARTS website ARTS has been used in at least 1 The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. How are layers of the atmosphere divided? A. temperature gradients B. color gradients C. density gradients D. air gradients Answer:
sciq-2672
multiple_choice
What do you call the angle of the earth's axis of rotation?
[ "vertical tilt", "dynamic tilt", "horizontal tilt", "axial tilt" ]
D
Relavent Documents: Document 0::: Polar motion of the Earth is the motion of the Earth's rotational axis relative to its crust. This is measured with respect to a reference frame in which the solid Earth is fixed (a so-called Earth-centered, Earth-fixed or ECEF reference frame). This variation is a few meters on the surface of the Earth. Analysis Polar motion is defined relative to a conventionally defined reference axis, the CIO (Conventional International Origin), being the pole's average location over the year 1900. It consists of three major components: a free oscillation called Chandler wobble with a period of about 435 days, an annual oscillation, and an irregular drift in the direction of the 80th meridian west, which has lately been less extremely west. Causes The slow drift, about 20 m since 1900, is partly due to motions in the Earth's core and mantle, and partly to the redistribution of water mass as the Greenland ice sheet melts, and to isostatic rebound, i.e. the slow rise of land that was formerly burdened with ice sheets or glaciers. The drift is roughly along the 80th meridian west. Since about 2000, the pole has found a less extreme drift, which is roughly along the central meridian. This less dramatically westward drift of motion is attributed to the global scale mass transport between the oceans and the continents. Major earthquakes cause abrupt polar motion by altering the volume distribution of the Earth's solid mass. These shifts are quite small in magnitude relative to the long-term core/mantle and isostatic rebound components of polar motion. Principle In the absence of external torques, the vector of the angular momentum M of a rotating system remains constant and is directed toward a fixed point in space. If the earth were perfectly symmetrical and rigid, M would remain aligned with its axis of symmetry, which would also be its axis of rotation. In the case of the Earth, it is almost identical with its axis of rotation, with the discrepancy due to shifts of mass on the Document 1::: Solar rotation varies with latitude. The Sun is not a solid body, but is composed of a gaseous plasma. Different latitudes rotate at different periods. The source of this differential rotation is an area of current research in solar astronomy. The rate of surface rotation is observed to be the fastest at the equator (latitude ) and to decrease as latitude increases. The solar rotation period is 24.47 days at the equator and almost 38 days at the poles. The average rotation is 28 days. Current Carrington Rotation: CR [] Surface rotation as an equation The differential rotation rate is usually described by the equation: where is the angular velocity in degrees per day, is the solar latitude, A is angular velocity at the equator, and B, C are constants controlling the decrease in velocity with increasing latitude. The values of A, B, and C differ depending on the techniques used to make the measurement, as well as the time period studied. A current set of accepted average values is: A= 14.713 ± 0.0491 °/day B= −2.396 ± 0.188 °/day C= −1.787 ± 0.253 °/day Sidereal rotation At the equator, the solar rotation period is 24.47 days. This is called the sidereal rotation period, and should not be confused with the synodic rotation period of 26.24 days, which is the time for a fixed feature on the Sun to rotate to the same apparent position as viewed from Earth (the earth's orbital rotation is in the same direction as the sun's rotation). The synodic period is longer because the Sun must rotate for a sidereal period plus an extra amount due to the orbital motion of Earth around the Sun. Note that astrophysical literature does not typically use the equatorial rotation period, but instead often uses the definition of a Carrington rotation: a synodic rotation period of 27.2753 days or a sidereal period of 25.38 days. This chosen period roughly corresponds to the prograde rotation at a latitude of 26° north or south, which is consistent with the typical latitude of sunspot Document 2::: The Euler angles are three angles introduced by Leonhard Euler to describe the orientation of a rigid body with respect to a fixed coordinate system. They can also represent the orientation of a mobile frame of reference in physics or the orientation of a general basis in 3-dimensional linear algebra. Classic Euler angles usually take the inclination angle in such a way that zero degrees represent the vertical orientation. Alternative forms were later introduced by Peter Guthrie Tait and George H. Bryan intended for use in aeronautics and engineering in which zero degrees represent the horizontal position. Chained rotations equivalence Euler angles can be defined by elemental geometry or by composition of rotations. The geometrical definition demonstrates that three composed elemental rotations (rotations about the axes of a coordinate system) are always sufficient to reach any target frame. The three elemental rotations may be extrinsic (rotations about the axes xyz of the original coordinate system, which is assumed to remain motionless), or intrinsic (rotations about the axes of the rotating coordinate system XYZ, solidary with the moving body, which changes its orientation with respect to the extrinsic frame after each elemental rotation). In the sections below, an axis designation with a prime mark superscript (e.g., z″) denotes the new axis after an elemental rotation. Euler angles are typically denoted as α, β, γ, or ψ, θ, φ. Different authors may use different sets of rotation axes to define Euler angles, or different names for the same angles. Therefore, any discussion employing Euler angles should always be preceded by their definition. Without considering the possibility of using two different conventions for the definition of the rotation axes (intrinsic or extrinsic), there exist twelve possible sequences of rotation axes, divided in two groups: Proper Euler angles Tait–Bryan angles . Tait–Bryan angles are also called Cardan angles; nautica Document 3::: The orientation of a building refers to the direction in which it is constructed and laid out, taking account of its planned purpose and ease of use for its occupants, its relation to the path of the sun and other aspects of its environment. Within church architecture, orientation is an arrangement by which the point of main interest in the interior is towards the east (). The east end is where the altar is placed, often within an apse. The façade and main entrance are accordingly at the west end. The opposite arrangement, in which the church is entered from the east and the sanctuary is at the other end, is called occidentation. Since the eighth century most churches are oriented. Hence, even in the many churches where the altar end is not actually to the east, terms such as "east end", "west door", "north aisle" are commonly used as if the church were oriented, treating the altar end as the liturgical east. History The first Christians faced east when praying, likely an outgrowth of the ancient Jewish custom of praying in the direction of the Holy Temple in Jerusalem. Due to this established custom, Tertullian says some non-Christians thought they worshipped the sun. Origen says: "The fact that [...] of all the quarters of the heavens, the east is the only direction we turn to when we pour out prayer, the reasons for this, I think, are not easily discovered by anyone." Later on, various Church Fathers advanced mystical reasons for the custom. One such explanation is that Christ's Second Coming was expected to be from the east: "For as the lightning comes from the east and shines as far as the west, so will be the coming of the Son of Man". At first, the orientation of the building in which Christians met was unimportant, but after the legalization of the religion in the fourth century, customs developed in this regard. These differed in Eastern and Western Christianity. The Apostolic Constitutions, a work of Eastern Christianity written between 375 and 380 Document 4::: Earth's rotation or Earth's spin is the rotation of planet Earth around its own axis, as well as changes in the orientation of the rotation axis in space. Earth rotates eastward, in prograde motion. As viewed from the northern polar star Polaris, Earth turns counterclockwise. The North Pole, also known as the Geographic North Pole or Terrestrial North Pole, is the point in the Northern Hemisphere where Earth's axis of rotation meets its surface. This point is distinct from Earth's North Magnetic Pole. The South Pole is the other point where Earth's axis of rotation intersects its surface, in Antarctica. Earth rotates once in about 24 hours with respect to the Sun, but once every 23 hours, 56 minutes and 4 seconds with respect to other distant stars (see below). Earth's rotation is slowing slightly with time; thus, a day was shorter in the past. This is due to the tidal effects the Moon has on Earth's rotation. Atomic clocks show that the modern day is longer by about 1.7 milliseconds than a century ago, slowly increasing the rate at which UTC is adjusted by leap seconds. Analysis of historical astronomical records shows a slowing trend; the length of a day increased by about 2.3 milliseconds per century since the 8th century BCE. Scientists reported that in 2020 Earth had started spinning faster, after consistently spinning slower than 86,400 seconds per day in the decades before. On June 29, 2022, Earth's spin was completed in 1.59 milliseconds under 24 hours, setting a new record. Because of that trend, engineers worldwide are discussing a 'negative leap second' and other possible timekeeping measures. This increase in speed is thought to be due to various factors, including the complex motion of its molten core, oceans, and atmosphere, the effect of celestial bodies such as the Moon, and possibly climate change, which is causing the ice at Earth's poles to melt. The masses of ice account for the Earth's shape being that of an oblate spheroid, bulging around t The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do you call the angle of the earth's axis of rotation? A. vertical tilt B. dynamic tilt C. horizontal tilt D. axial tilt Answer:
sciq-9234
multiple_choice
How does burning matter affect its mass?
[ "the mass increases", "the mass remains the same", "the mass decreases", "the mass quadruples" ]
B
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: A combustible material is a material that can burn (i.e., sustain a flame) in air under certain conditions. A material is flammable if it ignites easily at ambient temperatures. In other words, a combustible material ignites with some effort and a flammable material catches fire immediately on exposure to flame. The degree of flammability in air depends largely upon the volatility of the material - this is related to its composition-specific vapour pressure, which is temperature dependent. The quantity of vapour produced can be enhanced by increasing the surface area of the material forming a mist or dust. Take wood as an example. Finely divided wood dust can undergo explosive flames and produce a blast wave. A piece of paper (made from wood) catches on fire quite easily. A heavy oak desk is much harder to ignite, even though the wood fibre is the same in all three materials. Common sense (and indeed scientific consensus until the mid-1700s) would seem to suggest that material "disappears" when burned, as only the ash is left. In fact, there is an increase in weight because the flammable material reacts (or combines) chemically with oxygen, which also has mass. The original mass of flammable material and the mass of the oxygen required for flames equals the mass of the flame products (ash, water, carbon dioxide, and other gases). Antoine Lavoisier, one of the pioneers in these early insights, stated that Nothing is lost, nothing is created, everything is transformed, which would later be known as the law of conservation of mass. Lavoisier used the experimental fact that some metals gained mass when they burned to support his ideas. Definitions Historically, flammable, inflammable and combustible meant capable of burning. The word "inflammable" came through French from the Latin inflammāre = "to set fire to", where the Latin preposition "in-" means "in" as in "indoctrinate", rather than "not" as in "invisible" and "ineligible". The word "inflammable" may be er Document 2::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a Document 3::: The SAT Subject Test in Physics, Physics SAT II, or simply the Physics SAT, was a one-hour multiple choice test on physics administered by the College Board in the United States. A high school student generally chose to take the test to fulfill college entrance requirements for the schools at which the student was planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; until January 2005, they were known as SAT IIs; they are still well known by this name. The material tested on the Physics SAT was supposed to be equivalent to that taught in a junior- or senior-level high school physics class. It required critical thinking and test-taking strategies, at which high school freshmen or sophomores may have been inexperienced. The Physics SAT tested more than what normal state requirements were; therefore, many students prepared for the Physics SAT using a preparatory book or by taking an AP course in physics. On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Physics. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format The SAT Subject Test in Physics had 75 questions and consisted of two parts: Part A and Part B. Part A: First 12 or 13 questions 4 groups of two to four questions each The questions within any one group all relate to a single situation. Five possible answer choices are given before the question. An answer choice can be used once, more than once, or not at all in each group. Part B: Last 62 or 63 questions Each question has five possible answer choice with one correct answer. Some questions may be in groups of two or three. Topics Scoring The test had 75 multiple choice questions that were to be answered in one hour. All questions had f Document 4::: The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered. The 1995 version has 30 five-way multiple choice questions. Example question (question 4): Gender differences The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. How does burning matter affect its mass? A. the mass increases B. the mass remains the same C. the mass decreases D. the mass quadruples Answer:
sciq-6886
multiple_choice
What is a measure of the force of gravity pulling on an object of a given mass?
[ "solidity", "scale", "effect", "weight" ]
D
Relavent Documents: Document 0::: The gravity of Earth, denoted by , is the net acceleration that is imparted to objects due to the combined effect of gravitation (from mass distribution within Earth) and the centrifugal force (from the Earth's rotation). It is a vector quantity, whose direction coincides with a plumb bob and strength or magnitude is given by the norm . In SI units this acceleration is expressed in metres per second squared (in symbols, m/s2 or m·s−2) or equivalently in newtons per kilogram (N/kg or N·kg−1). Near Earth's surface, the acceleration due to gravity, accurate to 2 significant figures, is . This means that, ignoring the effects of air resistance, the speed of an object falling freely will increase by about per second every second. This quantity is sometimes referred to informally as little (in contrast, the gravitational constant is referred to as big ). The precise strength of Earth's gravity varies with location. The agreed upon value for is by definition. This quantity is denoted variously as , (though this sometimes means the normal gravity at the equator, ), , or simply (which is also used for the variable local value). The weight of an object on Earth's surface is the downwards force on that object, given by Newton's second law of motion, or (). Gravitational acceleration contributes to the total gravity acceleration, but other factors, such as the rotation of Earth, also contribute, and, therefore, affect the weight of the object. Gravity does not normally include the gravitational pull of the Moon and Sun, which are accounted for in terms of tidal effects. Variation in magnitude A non-rotating perfect sphere of uniform mass density, or whose density varies solely with distance from the centre (spherical symmetry), would produce a gravitational field of uniform magnitude at all points on its surface. The Earth is rotating and is also not spherically symmetric; rather, it is slightly flatter at the poles while bulging at the Equator: an oblate spheroid. Document 1::: In common usage, the mass of an object is often referred to as its weight, though these are in fact different concepts and quantities. Nevertheless, one object will always weigh more than another with less mass if both are subject to the same gravity (i.e. the same gravitational field strength). In scientific contexts, mass is the amount of "matter" in an object (though "matter" may be difficult to define), but weight is the force exerted on an object's matter by gravity. At the Earth's surface, an object whose mass is exactly one kilogram weighs approximately 9.81 newtons, the product of its mass and the gravitational field strength there. The object's weight is less on Mars, where gravity is weaker; more on Saturn, where gravity is stronger; and very small in space, far from significant sources of gravity, but it always has the same mass. Material objects at the surface of the Earth have weight despite such sometimes being difficult to measure. An object floating freely on water, for example, does not appear to have weight since it is buoyed by the water. But its weight can be measured if it is added to water in a container which is entirely supported by and weighed on a scale. Thus, the "weightless object" floating in water actually transfers its weight to the bottom of the container (where the pressure increases). Similarly, a balloon has mass but may appear to have no weight or even negative weight, due to buoyancy in air. However the weight of the balloon and the gas inside it has merely been transferred to a large area of the Earth's surface, making the weight difficult to measure. The weight of a flying airplane is similarly distributed to the ground, but does not disappear. If the airplane is in level flight, the same weight-force is distributed to the surface of the Earth as when the plane was on the runway, but spread over a larger area. A better scientific definition of mass is its description as being a measure of inertia, which is the tendency of an Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 3::: The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered. The 1995 version has 30 five-way multiple choice questions. Example question (question 4): Gender differences The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher. Document 4::: Gravimetry is the measurement of the strength of a gravitational field. Gravimetry may be used when either the magnitude of a gravitational field or the properties of matter responsible for its creation are of interest. Units of measurement Gravity is usually measured in units of acceleration. In the SI system of units, the standard unit of acceleration is 1 metre per second squared (abbreviated as m/s2). Other units include the cgs gal (sometimes known as a galileo, in either case with symbol Gal), which equals 1 centimetre per second squared, and the g (gn), equal to 9.80665 m/s2. The value of the gn is defined approximately equal to the acceleration due to gravity at the Earth's surface (although the value of g varies by location). Gravimeters An instrument used to measure gravity is known as a gravimeter. For a small body, general relativity predicts gravitational effects indistinguishable from the effects of acceleration by the equivalence principle. Thus, gravimeters can be regarded as special-purpose accelerometers. Many weighing scales may be regarded as simple gravimeters. In one common form, a spring is used to counteract the force of gravity pulling on an object. The change in length of the spring may be calibrated to the force required to balance the gravitational pull. The resulting measurement may be made in units of force (such as the newton), but is more commonly made in units of gals or cm/s2. Researchers use more sophisticated gravimeters when precise measurements are needed. When measuring the Earth's gravitational field, measurements are made to the precision of microgals to find density variations in the rocks making up the Earth. Several types of gravimeters exist for making these measurements, including some that are essentially refined versions of the spring scale described above. These measurements are used to define gravity anomalies. Besides precision, stability is also an important property of a gravimeter, as it allows the monitor The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is a measure of the force of gravity pulling on an object of a given mass? A. solidity B. scale C. effect D. weight Answer:
sciq-1829
multiple_choice
What was the first amino acid to be isolated?
[ "histon", "histamine", "asparagine", "glutathione" ]
C
Relavent Documents: Document 0::: This is a list of topics in molecular biology. See also index of biochemistry articles. Document 1::: Lafayette Benedict Mendel (February 5, 1872 – December 9, 1935) was an American biochemist known for his work in nutrition, with longtime collaborator Thomas B. Osborne, including the study of Vitamin A, Vitamin B, lysine and tryptophan. Life Mendel was born in Delhi, New York, son of Benedict Mendel, a merchant born in Aufhausen, Germany in 1833, and Pauline Ullman, born in Eschenau, Germany. His father immigrated to the United States from Germany in 1851, his mother in 1870. At 15, he won a New York State scholarship. Mendel studied classics, economics and the humanities, as well as biology and chemistry at Yale University and graduated with honors in 1891. He then began graduate work at the Sheffield Scientific School on a fellowship and studied physiological chemistry under Russell Henry Chittenden. He finished his Ph.D. 1893 after only two years; his thesis topic was the study of the seed storage protein edestin extracted from hemp seed. Upon graduation, he began as an assistant at the Sheffield School in Physiological chemistry. He also studied in Germany and was made an assistant professor on his return in 1896. He became a full professor in 1903 with appointments in the Yale School of Medicine and the Yale Graduate School as well as Sheffield. With Chittenden, Mendel became one of the founders of the science of nutrition. Together with longtime collaborator Thomas B. Osborne he established essential amino acids. As early as 1910 he found an important growth factor...later to be known as vitamin B. In 1903, at age 31, he was appointed full professor of physiological chemistry. In promoting Mendel, Yale made him one of the first high-ranking Jewish professors in the United States. Capping his illustrious career Mendel was appointed Sterling Professor of Physiological Chemistry in 1921. Of the twenty professors to be designated Sterling professors in the decade following their inception in 1920, only two were selected before Mendel. Of the twenty, M Document 2::: The Rowett Institute is a research centre for studies into food and nutrition, located in Aberdeen, Scotland. History The institute was founded in 1913 when the University of Aberdeen and the North of Scotland College of Agriculture agreed that an "Institute for Research into Animal Nutrition" should be established in Scotland. The first director was John Boyd Orr, later to become Lord Boyd Orr, who moved from Glasgow to "the wilds of Aberdeenshire" in 1914. Orr drew up some plans for a nutrition research institute. Orr also donated £5000 for the building of a granite laboratory building at Craibstone, not far from the Bucksburn site of the Rowett. At the breakout of the Great War, Orr left the institute, but returned in 1919 with a staff of four to begin work in the new laboratory. Orr continued to push for a new research institute and finally the Government agreed to pay half the costs but stipulated that the other half was to be found from other sources. The extra money was donated by Dr John Quiller Rowett, a businessman and director of a wine and spirits merchants in London. Rowett's donation allowed the purchase of 41 acres of land for the institute to be built on. Rowett also contributed £10,000 towards the cost of the buildings. The money was donated with one very important stipulation from Rowett—"if any work done at the institute on animal nutrition were found to have a bearing on human nutrition, the institute would be allowed to follow up this work." The institute was formally opened in 1922 by Queen Mary. In 1927, the Rowett was given £5000 to carry out an investigation to test whether health could be improved by the consumption of milk. After some further tests on other groups, a bill was passed in the House of Commons enabling local authorities in Scotland to provide cheap or free milk to all school children. It was soon applied in England too. This helped reduce the surplus of milk at the time and also helped rescue the milk industry which was i Document 3::: Johannes Friedrich Miescher (13 August 1844 – 26 August 1895) was a Swiss physician and biologist. He was the first scientist to isolate nucleic acid in 1869. He also identified protamine and made a number of other discoveries. Miescher had isolated various phosphate-rich chemicals, which he called nuclein (now nucleic acids), from the nuclei of white blood cells in Felix Hoppe-Seyler's laboratory at the University of Tübingen, Germany, paving the way for the identification of DNA as the carrier of inheritance. The significance of the discovery, first published in 1871, was not at first apparent, and Albrecht Kossel made the initial inquiries into its chemical structure. Later, Miescher raised the idea that the nucleic acids could be involved in heredity and even posited that there might be something akin to an alphabet that might explain how variation is produced. Early life and education Friedrich Miescher came from a scientific family; his father and his uncle held the chair of anatomy at the University of Basel. As a boy, he was shy but intelligent. He had an interest in music as his father performed publicly. Miescher studied medicine at Basel. In the summer of 1865, he worked for the organic chemist Adolf Stecker at the University of Göttingen, but his studies were interrupted for the year when he became ill with typhoid fever, which left him hearing-impaired. He received his MD in 1868. Career Miescher felt that his partial deafness would be a disadvantage as a doctor, so he turned to physiological chemistry. He originally wanted to study lymphocytes, but was encouraged by Felix Hoppe-Seyler to study neutrophils. He was interested in studying the chemistry of the nucleus. Lymphocytes were difficult to obtain in sufficient numbers to study, while neutrophils were known to be one of the main and first components in pus and could be obtained from bandages at the nearby hospital. The problem was, however, washing the cells off the bandages without damaging th Document 4::: The Bayliss and Starling Society was founded in 1979 as a forum for research scientists with specific interests in the chemistry, physiology and function of central and autonomic peptides. The society was named in honour of William Bayliss and Ernest Starling, who discovered the gastrointestinal peptide secretin in 1902 and coined the term hormone in 1905. The society's main objective was to "advance education and science by the promotion, for the benefit of the public, the study of the chemistry, physiology and disorders of central and peripheral regulating peptides and by the dissemination of the results of such study and research." In doing so, the Society promoted research into peptides and facilitated scientists with research interests in peptides by aiding in the organisation of symposia and relevant conferences. Additionally the Society offered the John Calam Travelling Fellowship Award for members who wanted to attend national and international academic conferences or visit laboratories to gain experience in new techniques to facilitate their research. The Bayliss and Starling Society merged with The Physiological Society in 2014. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What was the first amino acid to be isolated? A. histon B. histamine C. asparagine D. glutathione Answer:
sciq-7157
multiple_choice
The genome of an organism consists of one or more what?
[ "hemoglobin molecules", "require molecules", "rna molecules", "dna molecules" ]
D
Relavent Documents: Document 0::: Semantides (or semantophoretic molecules) are biological macromolecules that carry genetic information or a transcript thereof. Three different categories or semantides are distinguished: primary, secondary and tertiary. Primary Semantides are genes, which consist of DNA. Secondary semantides are chains of messenger RNA, which are transcribed from DNA. Tertiary semantides are polypeptides, which are translated from messenger RNA. In eukaryotic organisms, primary semantides may consist of nuclear, mitochondrial or plastid DNA. Not all primary semantides ultimately form tertiary semantides. Some primary semantides are not transcribed into mRNA (non-coding DNA) and some secondary semantides are not translated into polypeptides (non-coding RNA). The complexity of semantides varies greatly. For tertiary semantides, large globular polypeptide chains are most complex while structural proteins, consisting of repeating simple sequences, are least complex. The term semantide and related terms were coined by Linus Pauling and Emile Zuckerkandl. Although semantides are the major type of data used in modern phylogenetics, the term itself is not commonly used. Related terms Isosemantic DNA or RNA that differs in base sequence, but translate into identical polypeptide chains are referred to as being isosemantic. Episemantic Molecules that are synthesized by enzymes (tertiary semantides) are referred to as episemantic molecules. Episemantic molecules have a larger variety in types than semantides, which only consist of three types (DNA, RNA or polypeptides). Not all polypeptides are tertiary semantides. Some, mainly small polypeptides, can also be episemantic molecules. Asemantic Molecules that are not produced by an organism are referred to as asemantic molecules, because they do not contain any genetic information. Asementic molecules may be changed into episemantic molecules by anabolic processes. Asemantic molecules may also become semantic molecules when they integrate Document 1::: This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines. Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry) other on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mindneuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, it has provided information on certain diseases which has overall aided in the understanding of human health. Basic life science branches Biology – scientific study of life Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans Astrobiology – the study of the formation and presence of life in the universe Bacteriology – study of bacteria Biotechnology – study of combination of both the living organism and technology Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge Biolinguistics – the study of the biology and evolution of language. Biological anthropology – the study of humans, non-hum Document 2::: MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States. Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students, the remainder require a subscription. the service is suspended with the message to: "Please check back with us in 2017". External links MicrobeLibrary Microbiology Document 3::: A compositional domain in genetics is a region of DNA with a distinct guanine (G) and cytosine (C) G-C and C-G content (collectively GC content). The homogeneity of compositional domains is compared to that of the chromosome on which they reside. As such, compositional domains can be homogeneous or nonhomogeneous domains. Compositionally homogeneous domains that are sufficiently long (= 300 kb) are termed isochores or isochoric domains. The compositional domain model was proposed as an alternative to the isochoric model. The isochore model was proposed by Bernardi and colleagues to explain the observed non-uniformity of genomic fragments in the genome. However, recent sequencing of complete genomic data refuted the isochoric model. Its main predictions were: GC content of the third codon position (GC3) of protein coding genes is correlated with the GC content of the isochores embedding the corresponding genes. This prediction was found to be incorrect. GC3 could not predict the GC content of nearby sequences. The genome organization of warm-blooded vertebrates is a mosaic of isochores. This prediction was rejected by many studies that used the complete human genome data. The genome organization of cold-blooded vertebrates is characterized by low GC content levels and lower compositional heterogeneity. This prediction was disproved by finding high and low GC content domains in fish genomes. The compositional domain model describes the genome as a mosaic of short and long homogeneous and nonhomogeneous domains. The composition and organization of the domains were shaped by different evolutionary processes that either fused or broke down the domains. This genomic organization model was confirmed in many new genomic studies of cow, honeybee, sea urchin, body louse, Nasonia, beetle, and ant genomes. The human genome was described as consisting of a mixture of compositionally nonhomogeneous domains with numerous short compositionally homogeneous domains and relativ Document 4::: In molecular biology, a library is a collection of DNA fragments that is stored and propagated in a population of micro-organisms through the process of molecular cloning. There are different types of DNA libraries, including cDNA libraries (formed from reverse-transcribed RNA), genomic libraries (formed from genomic DNA) and randomized mutant libraries (formed by de novo gene synthesis where alternative nucleotides or codons are incorporated). DNA library technology is a mainstay of current molecular biology, genetic engineering, and protein engineering, and the applications of these libraries depend on the source of the original DNA fragments. There are differences in the cloning vectors and techniques used in library preparation, but in general each DNA fragment is uniquely inserted into a cloning vector and the pool of recombinant DNA molecules is then transferred into a population of bacteria (a Bacterial Artificial Chromosome or BAC library) or yeast such that each organism contains on average one construct (vector + insert). As the population of organisms is grown in culture, the DNA molecules contained within them are copied and propagated (thus, "cloned"). Terminology The term "library" can refer to a population of organisms, each of which carries a DNA molecule inserted into a cloning vector, or alternatively to the collection of all of the cloned vector molecules. cDNA libraries A cDNA library represents a sample of the mRNA purified from a particular source (either a collection of cells, a particular tissue, or an entire organism), which has been converted back to a DNA template by the use of the enzyme reverse transcriptase. It thus represents the genes that were being actively transcribed in that particular source under the physiological, developmental, or environmental conditions that existed when the mRNA was purified. cDNA libraries can be generated using techniques that promote "full-length" clones or under conditions that generate shorter f The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The genome of an organism consists of one or more what? A. hemoglobin molecules B. require molecules C. rna molecules D. dna molecules Answer:
sciq-695
multiple_choice
Water has the properties of cohesion and what else?
[ "degradation", "absorption", "adhesion", "diffusion" ]
C
Relavent Documents: Document 0::: At equilibrium, the relationship between water content and equilibrium relative humidity of a material can be displayed graphically by a curve, the so-called moisture sorption isotherm. For each humidity value, a sorption isotherm indicates the corresponding water content value at a given, constant temperature. If the composition or quality of the material changes, then its sorption behaviour also changes. Because of the complexity of sorption process the isotherms cannot be determined explicitly by calculation, but must be recorded experimentally for each product. The relationship between water content and water activity (aw) is complex. An increase in aw is usually accompanied by an increase in water content, but in a non-linear fashion. This relationship between water activity and moisture content at a given temperature is called the moisture sorption isotherm. These curves are determined experimentally and constitute the fingerprint of a food system. BET theory (Brunauer-Emmett-Teller) provides a calculation to describe the physical adsorption of gas molecules on a solid surface. Because of the complexity of the process, these calculations are only moderately successful; however, Stephen Brunauer was able to classify sorption isotherms into five generalized shapes as shown in Figure 2. He found that Type II and Type III isotherms require highly porous materials or desiccants, with first monolayer adsorption, followed by multilayer adsorption and finally leading to capillary condensation, explaining these materials high moisture capacity at high relative humidity. Care must be used in extracting data from isotherms, as the representation for each axis may vary in its designation. Brunauer provided the vertical axis as moles of gas adsorbed divided by the moles of the dry material, and on the horizontal axis he used the ratio of partial pressure of the gas just over the sample, divided by its partial pressure at saturation. More modern isotherms showing the Document 1::: Wet Processing Engineering is one of the major streams in Textile Engineering or Textile manufacturing which refers to the engineering of textile chemical processes and associated applied science. The other three streams in textile engineering are yarn engineering, fabric engineering, and apparel engineering. The processes of this stream are involved or carried out in an aqueous stage. Hence, it is called a wet process which usually covers pre-treatment, dyeing, printing, and finishing. The wet process is usually done in the manufactured assembly of interlacing fibers, filaments and yarns, having a substantial surface (planar) area in relation to its thickness, and adequate mechanical strength to give it a cohesive structure. In other words, the wet process is done on manufactured fiber, yarn and fabric. All of these stages require an aqueous medium which is created by water. A massive amount of water is required in these processes per day. It is estimated that, on an average, almost 50–100 liters of water is used to process only 1 kilogram of textile goods, depending on the process engineering and applications. Water can be of various qualities and attributes. Not all water can be used in the textile processes; it must have some certain properties, quality, color and attributes of being used. This is the reason why water is a prime concern in wet processing engineering. Water Water consumption and discharge of wastewater are the two major concerns. The textile industry uses a large amount of water in its varied processes especially in wet operations such as pre-treatment, dyeing, and printing. Water is required as a solvent of various dyes and chemicals and it is used in washing or rinsing baths in different steps. Water consumption depends upon the application methods, processes, dyestuffs, equipment/machines and technology which may vary mill to mill and material composition. Longer processing sequences, processing of extra dark colors and reprocessing lead Document 2::: Mucoadhesion describes the attractive forces between a biological material and mucus or mucous membrane. Mucous membranes adhere to epithelial surfaces such as the gastrointestinal tract (GI-tract), the vagina, the lung, the eye, etc. They are generally hydrophilic as they contain many hydrogen macromolecules due to the large amount of water (approximately 95%) within its composition. However, mucin also contains glycoproteins that enable the formation of a gel-like substance. Understanding the hydrophilic bonding and adhesion mechanisms of mucus to biological material is of utmost importance in order to produce the most efficient applications. For example, in drug delivery systems, the mucus layer must be penetrated in order to effectively transport micro- or nanosized drug particles into the body. Bioadhesion is the mechanism by which two biological materials are held together by interfacial forces. The mucoadhesive properties of polymers can be evaluated via rheological synergism studies with freshly isolated mucus, tensile studies and mucosal residence time studies. Results obtained with these in vitro methods show a high correlation with results obtained in humans. Mucoadhesive bondings Mucoadhesion involves several types of bonding mechanisms, and it is the interaction between each process that allows for the adhesive process. The major categories are wetting theory, adsorption theory, diffusion theory, electrostatic theory, and fracture theory. Specific processes include mechanical interlocking, electrostatic, diffusion interpenetration, adsorption and fracture processes. Bonding mechanisms Wetting theory: Wetting is the oldest and most prevalent theory of adhesion. The adhesive components in a liquid solution anchor themselves in irregularities on the substrate and eventually harden, providing sites on which to adhere. Surface tension effects restrict the movement of the adhesive along the surface of the substrate, and is related to the thermodynamic wor Document 3::: Interface and colloid science is an interdisciplinary intersection of branches of chemistry, physics, nanoscience and other fields dealing with colloids, heterogeneous systems consisting of a mechanical mixture of particles between 1 nm and 1000 nm dispersed in a continuous medium. A colloidal solution is a heterogeneous mixture in which the particle size of the substance is intermediate between a true solution and a suspension, i.e. between 1–1000 nm. Smoke from a fire is an example of a colloidal system in which tiny particles of solid float in air. Just like true solutions, colloidal particles are small and cannot be seen by the naked eye. They easily pass through filter paper. But colloidal particles are big enough to be blocked by parchment paper or animal membrane. Interface and colloid science has applications and ramifications in the chemical industry, pharmaceuticals, biotechnology, ceramics, minerals, nanotechnology, and microfluidics, among others. There are many books dedicated to this scientific discipline, and there is a glossary of terms, Nomenclature in Dispersion Science and Technology, published by the US National Institute of Standards and Technology. See also Interface (matter) Electrokinetic phenomena Surface science Document 4::: The hydrophobic effect is the observed tendency of nonpolar substances to aggregate in an aqueous solution and exclude water molecules. The word hydrophobic literally means "water-fearing", and it describes the segregation of water and nonpolar substances, which maximizes hydrogen bonding between molecules of water and minimizes the area of contact between water and nonpolar molecules. In terms of thermodynamics, the hydrophobic effect is the free energy change of water surrounding a solute. A positive free energy change of the surrounding solvent indicates hydrophobicity, whereas a negative free energy change implies hydrophilicity. The hydrophobic effect is responsible for the separation of a mixture of oil and water into its two components. It is also responsible for effects related to biology, including: cell membrane and vesicle formation, protein folding, insertion of membrane proteins into the nonpolar lipid environment and protein-small molecule associations. Hence the hydrophobic effect is essential to life. Substances for which this effect is observed are known as hydrophobes. Amphiphiles Amphiphiles are molecules that have both hydrophobic and hydrophilic domains. Detergents are composed of amphiphiles that allow hydrophobic molecules to be solubilized in water by forming micelles and bilayers (as in soap bubbles). They are also important to cell membranes composed of amphiphilic phospholipids that prevent the internal aqueous environment of a cell from mixing with external water. Folding of macromolecules In the case of protein folding, the hydrophobic effect is important to understanding the structure of proteins that have hydrophobic amino acids (such as glycine, alanine, valine, leucine, isoleucine, phenylalanine, tryptophan and methionine) clustered together within the protein. Structures of water-soluble proteins have a hydrophobic core in which side chains are buried from water, which stabilizes the folded state. Charged and polar side ch The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Water has the properties of cohesion and what else? A. degradation B. absorption C. adhesion D. diffusion Answer:
sciq-11446
multiple_choice
Oxygen reaches what veinless part of the eye by diffusing through its tear layer?
[ "pupil", "retina", "cornea", "membranes" ]
C
Relavent Documents: Document 0::: The stroma of the iris is a fibrovascular layer of tissue. It is the upper layer of two in the iris. Structure The stroma is a delicate interlacement of fibres. Some circle the circumference of the iris and the majority radiate toward the pupil. Blood vessels and nerves intersperse this mesh. In dark eyes, the stroma often contains pigment granules. Blue eyes and the eyes of albinos, however, lack pigment. The stroma connects to a sphincter muscle (sphincter pupillae), which contracts the pupil in a circular motion, and a set of dilator muscles (dilator pupillae) which pull the iris radially to enlarge the pupil, pulling it in folds. The back surface is covered by a commonly, heavily pigmented epithelial layer that is two cells thick (the iris pigment epithelium), but the front surface has no epithelium. This anterior surface projects as the muscles dilate. Document 1::: Sattler's layer, named after Hubert Sattler, an Austrian ophthalmologist, is one of five (or six) layers of medium-diameter blood vessels of the choroid, and a layer of the eye. It is situated between the Bruch's membrane, choriocapillaris below, and the Haller's layer and suprachoroidea above, respectively. The origin seems to be related to a continuous differentiation throughout the growth of the tissue and even further differentiation during adulthood. Measurement methods and clinical impact After excision the choroid collapses partially, histologic preparations also alter the local pressure and fluid content of different sections in the tissue, thus requiring preparations with rubber solution or others that can conserve the vascular status of living tissue. Novel diagnostic methods, especially optical coherence tomography have widened the understanding of the real-time, in vivo status of the different layers. Several papers have shown the relationship between the thickness of the choroidal, Sattler's and Haller's layer between healthy individuals and in people with age-related macular degeneration (AMD). The studies showed significant reduction of layer thickness in relation to the progression of AMD, which may be important in the understanding of choriopathy in the pathophysiology of AMD. However, also strong variations even throughout the diurnal cycle, as well as the influence of optical stimuli during eye-growth, indicate that the complex function of this tissue is not entirely understood and might be one of the reasons for the frequently found separation in vascular size between Haller's and Sattler's layer. Notes Document 2::: The pigmented layer of retina or retinal pigment epithelium (RPE) is the pigmented cell layer just outside the neurosensory retina that nourishes retinal visual cells, and is firmly attached to the underlying choroid and overlying retinal visual cells. History The RPE was known in the 18th and 19th centuries as the pigmentum nigrum, referring to the observation that the RPE is dark (black in many animals, brown in humans); and as the tapetum nigrum, referring to the observation that in animals with a tapetum lucidum, in the region of the tapetum lucidum the RPE is not pigmented. Anatomy The RPE is composed of a single layer of hexagonal cells that are densely packed with pigment granules. When viewed from the outer surface, these cells are smooth and hexagonal in shape. When seen in section, each cell consists of an outer non-pigmented part containing a large oval nucleus and an inner pigmented portion which extends as a series of straight thread-like processes between the rods, this being especially the case when the eye is exposed to light. Function The RPE has several functions, namely, light absorption, epithelial transport, spatial ion buffering, visual cycle, phagocytosis, secretion and immune modulation. Light absorption: RPE are responsible for absorbing scattered light. This role is very important for two main reasons, first, to improve the quality of the optical system, second, light is radiation, and it is concentrated by a lens onto the cells of the macula, resulting in a strong concentration of photo-oxidative energy. Melanosomes absorb the scattered light and thus diminish the photo-oxidative stress. The high perfusion of retina brings a high oxygen tension environment. The combination of light and oxygen brings oxidative stress, and RPE has many mechanisms to cope with it. Epithelial transport: As mentioned above, RPE compose the outer blood–retinal barrier, the epithelia has tight junctions between the lateral surfaces and implies an isolation Document 3::: Stem cell therapy for macular degeneration is the use of stem cells to heal, replace dead or damaged cells of the macula in the retina. Stem cell based therapies using bone marrow stem cells as well as retinal pigment epithelial transplantation are being studied. A number of trials have occurred in humans with encouraging results. Historical background In 1959, the first fetal retinal transplant into the anterior chamber of the eyes of animals was reported. Cell culture experiments on RPE were carried out in 1980. Cultured human RPE cells were transplanted into the eyes of animals, first with open techniques and methods and later with closed cavity vitrectomy techniques. In 1991, Gholam Peyman transplanted RPE (Retinal Pigment Epithelium) in humans but with limited success rate. Later, allogenic fetal RPE cell transplantation was tried in which immune rejection of the graft was a major problem. It has also been observed that the rejection rates were lower in dry AMD than that in wet AMD. Autologous RPE transplantation is conventionally done employing two techniques, namely, RPE suspension and autologous full-thickness RPE-choroid transplantation. Encouraging clinical outcomes has already been reported with the transplantation of the autologous RPE choroid from the periphery of the eye to a disease affected portion. Since 2003, researchers have successfully transplanted corneal stem cells into damaged eyes to restore vision. "Sheets of retinal cells used by the team are harvested from aborted fetuses, which some people find objectionable." When these sheets are transplanted over the damaged cornea, the stem cells stimulate renewed repair, eventually restore vision. The such development was in June 2005, when researchers at the Queen Victoria Hospital of Sussex, England were able to restore the sight of forty people using the same technique. The group, led by Sheraz Daya, was able to successfully use adult stem cells obtained from the patient, a relative, or even Document 4::: The pupil is a hole located in the center of the iris of the eye that allows light to strike the retina. It appears black because light rays entering the pupil are either absorbed by the tissues inside the eye directly, or absorbed after diffuse reflections within the eye that mostly miss exiting the narrow pupil. The size of the pupil is controlled by the iris, and varies depending on many factors, the most significant being the amount of light in the environment. The term "pupil" was coined by Gerard of Cremona. In humans, the pupil is circular, but its shape varies between species; some cats, reptiles, and foxes have vertical slit pupils, goats have horizontally oriented pupils, and some catfish have annular types. In optical terms, the anatomical pupil is the eye's aperture and the iris is the aperture stop. The image of the pupil as seen from outside the eye is the entrance pupil, which does not exactly correspond to the location and size of the physical pupil because it is magnified by the cornea. On the inner edge lies a prominent structure, the collarette, marking the junction of the embryonic pupillary membrane covering the embryonic pupil. Function The iris is a contractile structure, consisting mainly of smooth muscle, surrounding the pupil. Light enters the eye through the pupil, and the iris regulates the amount of light by controlling the size of the pupil. This is known as the pupillary light reflex. The iris contains two groups of smooth muscles; a circular group called the sphincter pupillae, and a radial group called the dilator pupillae. When the sphincter pupillae contract, the iris decreases or constricts the size of the pupil. The dilator pupillae, innervated by sympathetic nerves from the superior cervical ganglion, cause the pupil to dilate when they contract. These muscles are sometimes referred to as intrinsic eye muscles. The sensory pathway (rod or cone, bipolar, ganglion) is linked with its counterpart in the other eye by a partial The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Oxygen reaches what veinless part of the eye by diffusing through its tear layer? A. pupil B. retina C. cornea D. membranes Answer:
ai2_arc-131
multiple_choice
Which question can most likely be determined through a scientific investigation?
[ "Who will be the winner of the next lottery?", "What football team will win the next game?", "What is the amount of light needed to grow tomatoes?", "Which four types of bird feathers have the prettiest colors?" ]
C
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 2::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 3::: The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields. Description The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions. The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.” Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers. Current efforts The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo Document 4::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which question can most likely be determined through a scientific investigation? A. Who will be the winner of the next lottery? B. What football team will win the next game? C. What is the amount of light needed to grow tomatoes? D. Which four types of bird feathers have the prettiest colors? Answer:
sciq-428
multiple_choice
What is the base of nearly all food chains on earth?
[ "atherosclerosis", "synthesis", "photosynthesis", "glycolysis" ]
C
Relavent Documents: Document 0::: Food biodiversity is defined as "the diversity of plants, animals and other organisms used for food, covering the genetic resources within species, between species and provided by ecosystems." Food biodiversity can be considered from two main perspectives: production and consumption. From a consumption perspective, food biodiversity describes the diversity of foods in human diets and their contribution to dietary diversity, cultural identity and good nutrition. Production of food biodiversity looks at the thousands of food products, such as fruits, nuts, vegetables, meat and condiments sourced from agriculture and from the wild (e.g. forests, uncultivated fields, water bodies). Food biodiversity covers the diversity between species, for example different animal and crop species, including those considered neglected and underutilized species. Food biodiversity also comprises the diversity within species, for example different varieties of fruit and vegetables, or different breeds of animals. Food diversity, diet diversity nutritional diversity, are also terms used in the new diet culture spawned by Brandon Eisler, in the study known as Nutritional Diversity. Consumption of food biodiversity Food biodiversity, nutrition, and health Promoting diversity of foods and species consumed in human diets in particular has potential co-benefits for public health as well as sustainable food systems perspective. Food biodiversity provides necessary nutrients for quality diets and is an essential part of local food systems, cultures and food security. Promoting diversity of foods and species consumed in human diets in particular has potential co-benefits for sustainable food systems. Nutritionally, diversity in food is associated with higher micronutrient adequacy of diets. On average, per additional species consumed, mean adequacy of vitamin A, vitamin C, folate, calcium, iron, and zinc increased by 3%. From a conservation point of view, diets based on a wide variety of Document 1::: The Vegetarian Myth: Food, Justice, and Sustainability is a 2009 book by Lierre Keith published by PM Press. Keith is an ex-vegan who believes that "veganism has damaged her health and others". Keith argues that agriculture is destroying not only human health but entire ecosystems, such as the North American prairie, and destroying topsoil. Keith also considers modern agriculture to be the root cause of slavery, imperialism, militarism, chronic hunger and disease. Keith argues humans should accept death as a necessary precursor to food in that "everyone will get eaten, sooner or later". In sum, her proposition is that herbivores eat grass, humans eat herbivores, and then (eventually) worms/bacteria/etc. eat humans. Reception The book resulted in extreme controversy, going as far as Keith being physically assaulted at a book reading. Aric McBay says that Keith is not being provocative for the sake of it, rather she believes vegetarians have the right impulse but are misinformed about the facts. Ian Fitzpatrick wrote that the book is at the core about the unsustainable nature of modern agriculture, but is "disguised" as a treatise against vegetarianism. Susan Schenck said the book was "full of hard core indisputable research". She agrees with the book's discounting of the contested scientific hypothesis on the effects of cholesterol on coronary heart disease. She further agrees with the book's claims linking soy with several ailments. However Schenck disagrees that vegetarians are necessarily unhealthy, believing each individual has different abilities to synthesize the needed nutrients from different foods. Patrick Nicholson writes that the book misinterprets scientific articles, cherry-picks facts, uses strawman arguments and relies heavily on anecdotes and faulty generalisations. John Sanbonmatsu interprets Keith's rhetoric as apocalyptic and millenarian. He argues that Keith's nutritional arguments are grounded on anecdotes and lack proper scientific backing. A Document 2::: Food science is the basic science and applied science of food; its scope starts at overlap with agricultural science and nutritional science and leads through the scientific aspects of food safety and food processing, informing the development of food technology. Food science brings together multiple scientific disciplines. It incorporates concepts from fields such as chemistry, physics, physiology, microbiology, and biochemistry. Food technology incorporates concepts from chemical engineering, for example. Activities of food scientists include the development of new food products, design of processes to produce these foods, choice of packaging materials, shelf-life studies, sensory evaluation of products using survey panels or potential consumers, as well as microbiological and chemical testing. Food scientists may study more fundamental phenomena that are directly linked to the production of food products and its properties. Definition The Institute of Food Technologists defines food science as "the discipline in which the engineering, biological, and physical sciences are used to study the nature of foods, the causes of deterioration, the principles underlying food processing, and the improvement of foods for the consuming public". The textbook Food Science defines food science in simpler terms as "the application of basic sciences and engineering to study the physical, chemical, and biochemical nature of foods and the principles of food processing". Disciplines Some of the subdisciplines of food science are described below. Food chemistry Food chemistry is the study of chemical processes and interactions of all biological and non-biological components of foods. The biological substances include such items as meat, poultry, lettuce, beer, and milk. It is similar to biochemistry in its main components such as carbohydrates, lipids, and protein, but it also includes areas such as water, vitamins, minerals, enzymes, food additives, flavors, and colors. This Document 3::: Animal nutrition focuses on the dietary nutrients needs of animals, primarily those in agriculture and food production, but also in zoos, aquariums, and wildlife management. Constituents of diet Macronutrients (excluding fiber and water) provide structural material (amino acids from which proteins are built, and lipids from which cell membranes and some signaling molecules are built) and energy. Some of the structural material can be used to generate energy internally, though the net energy depends on such factors as absorption and digestive effort, which vary substantially from instance to instance. Vitamins, minerals, fiber, and water do not provide energy, but are required for other reasons. A third class dietary material, fiber (i.e., non-digestible material such as cellulose), seems also to be required, for both mechanical and biochemical reasons, though the exact reasons remain unclear. Molecules of carbohydrates and fats consist of carbon, hydrogen, and oxygen atoms. Carbohydrates range from simple monosaccharides (glucose, fructose, galactose) to complex polysaccharides (starch). Fats are triglycerides, made of assorted fatty acid monomers bound to glycerol backbone. Some fatty acids, but not all, are essential in the diet: they cannot be synthesized in the body. Protein molecules contain nitrogen atoms in addition to carbon, oxygen, and hydrogen. The fundamental components of protein are nitrogen-containing amino acids. Essential amino acids cannot be made by the animal. Some of the amino acids are convertible (with the expenditure of energy) to glucose and can be used for energy production just as ordinary glucose. By breaking down existing protein, some glucose can be produced internally; the remaining amino acids are discarded, primarily as urea in urine. This occurs normally only during prolonged starvation. Other dietary substances found in plant foods (phytochemicals, polyphenols) are not identified as essential nutrients but appear to impact healt Document 4::: Relatively speaking, the brain consumes an immense amount of energy in comparison to the rest of the body. The mechanisms involved in the transfer of energy from foods to neurons are likely to be fundamental to the control of brain function. Human bodily processes, including the brain, all require both macronutrients, as well as micronutrients. Insufficient intake of selected vitamins, or certain metabolic disorders, may affect cognitive processes by disrupting the nutrient-dependent processes within the body that are associated with the management of energy in neurons, which can subsequently affect synaptic plasticity, or the ability to encode new memories. Macronutrients The human brain requires nutrients obtained from the diet to develop and sustain its physical structure and cognitive functions. Additionally, the brain requires caloric energy predominately derived from the primary macronutrients to operate. The three primary macronutrients include carbohydrates, proteins, and fats. Each macronutrient can impact cognition through multiple mechanisms, including glucose and insulin metabolism, neurotransmitter actions, oxidative stress and inflammation, and the gut-brain axis. Inadequate macronutrient consumption or proportion could impair optimal cognitive functioning and have long-term health implications. Carbohydrates Through digestion, dietary carbohydrates are broken down and converted into glucose, which is the sole energy source for the brain. Optimal brain function relies on adequate carbohydrate consumption, as carbohydrates provide the quickest source of glucose for the brain. Glucose deficiencies such as hypoglycaemia reduce available energy for the brain and impair all cognitive processes and performance. Additionally, situations with high cognitive demand, such as learning a new task, increase brain glucose utilization, depleting blood glucose stores and initiating the need for supplementation. Complex carbohydrates, especially those with high d The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the base of nearly all food chains on earth? A. atherosclerosis B. synthesis C. photosynthesis D. glycolysis Answer:
sciq-9620
multiple_choice
What term is used to describe the conditions in the sky on any particular day?
[ "forecast", "humidity", "weather", "temperature" ]
C
Relavent Documents: Document 0::: This is a list of meteorology topics. The terms relate to meteorology, the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting. (see also: List of meteorological phenomena) A advection aeroacoustics aerobiology aerography (meteorology) aerology air parcel (in meteorology) air quality index (AQI) airshed (in meteorology) American Geophysical Union (AGU) American Meteorological Society (AMS) anabatic wind anemometer annular hurricane anticyclone (in meteorology) apparent wind Atlantic Oceanographic and Meteorological Laboratory (AOML) Atlantic hurricane season atmometer atmosphere Atmospheric Model Intercomparison Project (AMIP) Atmospheric Radiation Measurement (ARM) (atmospheric boundary layer [ABL]) planetary boundary layer (PBL) atmospheric chemistry atmospheric circulation atmospheric convection atmospheric dispersion modeling atmospheric electricity atmospheric icing atmospheric physics atmospheric pressure atmospheric sciences atmospheric stratification atmospheric thermodynamics atmospheric window (see under Threats) B ball lightning balloon (aircraft) baroclinity barotropity barometer ("to measure atmospheric pressure") berg wind biometeorology blizzard bomb (meteorology) buoyancy Bureau of Meteorology (in Australia) C Canada Weather Extremes Canadian Hurricane Centre (CHC) Cape Verde-type hurricane capping inversion (in meteorology) (see "severe thunderstorms" in paragraph 5) carbon cycle carbon fixation carbon flux carbon monoxide (see under Atmospheric presence) ceiling balloon ("to determine the height of the base of clouds above ground level") ceilometer ("to determine the height of a cloud base") celestial coordinate system celestial equator celestial horizon (rational horizon) celestial navigation (astronavigation) celestial pole Celsius Center for Analysis and Prediction of Storms (CAPS) (in Oklahoma in the US) Center for the Study o Document 1::: The following outline is provided as an overview of and topical guide to the field of Meteorology. Meteorology The interdisciplinary, scientific study of the Earth's atmosphere with the primary focus being to understand, explain, and forecast weather events. Meteorology, is applied to and employed by a wide variety of diverse fields, including the military, energy production, transport, agriculture, and construction. Essence of meteorology Meteorology Climate – the average and variations of weather in a region over long periods of time. Meteorology – the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting (in contrast with climatology). Weather – the set of all the phenomena in a given atmosphere at a given time. Branches of meteorology Microscale meteorology – the study of atmospheric phenomena about 1 km or less, smaller than mesoscale, including small and generally fleeting cloud "puffs" and other small cloud features Mesoscale meteorology – the study of weather systems about 5 kilometers to several hundred kilometers, smaller than synoptic scale systems but larger than microscale and storm-scale cumulus systems, skjjoch as sea breezes, squall lines, and mesoscale convective complexes Synoptic scale meteorology – is a horizontal length scale of the order of 1000 kilometres (about 620 miles) or more Methods in meteorology Surface weather analysis – a special type of weather map that provides a view of weather elements over a geographical area at a specified time based on information from ground-based weather stations Weather forecasting Weather forecasting – the application of science and technology to predict the state of the atmosphere for a future time and a given location Data collection Pilot Reports Weather maps Weather map Surface weather analysis Forecasts and reporting of Atmospheric pressure Dew point High-pressure area Ice Black ice Frost Low-pressure area Precipitation Document 2::: In aviation, ceiling is a measurement of the height of the base of the lowest clouds (not to be confused with cloud base which has a specific definition) that cover more than half of the sky (more than 4 oktas) relative to the ground. Ceiling is not specifically reported as part of the METAR (METeorological Aviation Report) used for flight planning by pilots worldwide, but can be deduced from the lowest height with broken (BKN) or overcast (OVC) reported. A ceiling listed as "unlimited" means either that the sky is mostly free of cloud cover, or that the cloud is high enough not to impede Visual Flight Rules (VFR) operation. Definitions ICAO The height above the ground or water of the base of the lowest layer of cloud below 6000 meters (20,000 feet) covering more than half the sky. United Kingdom The vertical distance from the elevation of an aerodrome to the lowest part of any cloud visible from the aerodrome which is sufficient to obscure more than half of the sky. United States The height above the Earth's surface of the lowest layer of clouds or obscuring phenomena that is reported as broken, overcast, or obscuration, and not classified as thin or partial. See also Cloud base Document 3::: A cloud base (or the base of the cloud) is the lowest altitude of the visible portion of a cloud. It is traditionally expressed either in metres or feet above mean sea level or above a planetary surface, or as the pressure level corresponding to this altitude in hectopascals (hPa, equivalent to the millibar). Measurement The height of the cloud base can be measured using a ceilometer. This device reflects a beam of light off the cloud base and then calculates its distance using either triangulation or travel time. Alternatively, the cloud base can be estimated from surface measurements of air temperature and humidity by calculating the lifted condensation level. One method for doing this, used by the U.S. Federal Aviation Administration and often named after Tom Bradbury, is as follows: Find the difference between the surface temperature and the dew point. This value is known as the "spread". Divide the spread by 4.4 (if temperatures are in °F) or 2.5 (if temperatures are in °C), then multiply by 1000. This will give the altitude of the cloud base in feet above ground level. Put in a simpler way, 400 feet for every 1°C dew point spread. For metric divide the spread in °C by 8 and multiply by 1000 and get the cloud base in meters. Add the results from step (2) to the field elevation to obtain the altitude of the cloud base above mean sea level. Weather and climate relevance Rain clouds and snow clouds are clouds that have their bases below 2,000 meters above the ground. In well-defined air masses, many (or even most) clouds may have a similar cloud base because this variable is largely controlled by the thermodynamic properties of that air mass, which are relatively homogeneous on a large spatial scale. This is not the case for the cloud tops, which can vary widely from cloud to cloud, as the depth of the cloud is determined by the strength of local convection. Clouds greatly affect the transfer of radiation in the atmosphere. In the thermal spectral domain, w Document 4::: Ensemble forecasting is a method used in or within numerical weather prediction. Instead of making a single forecast of the most likely weather, a set (or ensemble) of forecasts is produced. This set of forecasts aims to give an indication of the range of possible future states of the atmosphere. Ensemble forecasting is a form of Monte Carlo analysis. The multiple simulations are conducted to account for the two usual sources of uncertainty in forecast models: (1) the errors introduced by the use of imperfect initial conditions, amplified by the chaotic nature of the evolution equations of the atmosphere, which is often referred to as sensitive dependence on initial conditions; and (2) errors introduced because of imperfections in the model formulation, such as the approximate mathematical methods to solve the equations. Ideally, the verified future atmospheric state should fall within the predicted ensemble spread, and the amount of spread should be related to the uncertainty (error) of the forecast. In general, this approach can be used to make probabilistic forecasts of any dynamical system, and not just for weather prediction. Instances Today ensemble predictions are commonly made at most of the major operational weather prediction facilities worldwide, including: National Centers for Environmental Prediction (NCEP of the US) European Centre for Medium-Range Weather Forecasts (ECMWF) United Kingdom Met Office Météo-France Environment Canada Japan Meteorological Agency Bureau of Meteorology (Australia) China Meteorological Administration (CMA) Korea Meteorological Administration CPTEC (Brazil) Ministry of Earth Sciences (IMD, IITM & NCMRWF) (India) Experimental ensemble forecasts are made at a number of universities, such as the University of Washington, and ensemble forecasts in the US are also generated by the US Navy and Air Force. There are various ways of viewing the data such as spaghetti plots, ensemble means or Postage Stamps where a number o The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What term is used to describe the conditions in the sky on any particular day? A. forecast B. humidity C. weather D. temperature Answer:
sciq-10890
multiple_choice
What are all of the digits that can be known with certainty in a measurement plus an estimatied last digit called?
[ "determined figures", "significant figures", "miniature figures", "important figures" ]
B
Relavent Documents: Document 0::: Significant figures, also referred to as significant digits or sig figs, are specific digits within a number written in positional notation that carry both reliability and necessity in conveying a particular quantity. When presenting the outcome of a measurement (such as length, pressure, volume, or mass), if the number of digits exceeds what the measurement instrument can resolve, only the number of digits within the resolution's capability are dependable and therefore considered significant. For instance, if a length measurement yields 114.8 mm, using a ruler with the smallest interval between marks at 1 mm, the first three digits (1, 1, and 4, representing 114 mm) are certain and constitute significant figures. Even digits that are uncertain yet reliable are also included in the significant figures. In this scenario, the last digit (8, contributing 0.8 mm) is likewise considered significant despite its uncertainty. Therefore, this measurement contains four significant figures. Another example involves a volume measurement of 2.98 L with an uncertainty of ± 0.05 L. The actual volume falls between 2.93 L and 3.03 L. Even if certain digits are not completely known, they are still significant if they are reliable, as they indicate the actual volume within an acceptable range of uncertainty. In this case, the actual volume might be 2.94 L or possibly 3.02 L, so all three digits are considered significant. Thus, there are three significant figures in this example. The following types of digits are not considered significant: Leading zeros. For instance, 013 kg has two significant figures—1 and 3—while the leading zero is insignificant since it does not impact the mass indication; 013 kg is equivalent to 13 kg, rendering the zero unnecessary. Similarly, in the case of 0.056 m, there are two insignificant leading zeros since 0.056 m is the same as 56 mm, thus the leading zeros do not contribute to the length indication. Trailing zeros when they serve as placeholder Document 1::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 3::: Large numbers are numbers significantly larger than those typically used in everyday life (for instance in simple counting or in monetary transactions), appearing frequently in fields such as mathematics, cosmology, cryptography, and statistical mechanics. They are typically large positive integers, or more generally, large positive real numbers, but may also be other numbers in other contexts. Googology is the study of nomenclature and properties of large numbers. In the everyday world Scientific notation was created to handle the wide range of values that occur in scientific study. 1.0 × 109, for example, means one billion, or a 1 followed by nine zeros: 1 000 000 000. The reciprocal, 1.0 × 10−9, means one billionth, or 0.000 000 001. Writing 109 instead of nine zeros saves readers the effort and hazard of counting a long series of zeros to see how large the number is. In addition to scientific (powers of 10) notation, the following examples include (short scale) systematic nomenclature of large numbers. Examples of large numbers describing everyday real-world objects include: The number of cells in the human body (estimated at 3.72 × 1013), or 37.2 trillion The number of bits on a computer hard disk (, typically about 1013, 1–2 TB), or 10 trillion The number of neuronal connections in the human brain (estimated at 1014), or 100 trillion The Avogadro constant is the number of “elementary entities” (usually atoms or molecules) in one mole; the number of atoms in 12 grams of carbon-12 approximately , or 602.2 sextillion. The total number of DNA base pairs within the entire biomass on Earth, as a possible approximation of global biodiversity, is estimated at (5.3 ± 3.6) × 1037, or 53±36 undecillion The mass of Earth consists of about 4 × 1051, or 4 sexdecillion, nucleons The estimated number of atoms in the observable universe (1080), or 100 quinvigintillion The lower bound on the game-tree complexity of chess, also known as the “Shannon number” (estim Document 4::: is a world mathematics certification program and examination established in Japan in 1988. Outline of Suken Each Suken level (Kyu) has two sections. Section 1 is calculation and Section 2 is application. Passing Rate In order to pass the Suken, you must correctly answer approximately 70% of section 1 and approximately 60% of section 2. Levels Level 5 (7th grade math) The examination time is 180 minutes for section 1, 60 minutes for section 2. Level 4 (8th grade) The examination time is 60 minutes for section 1, 60 minutes for section 2. 3rd Kyu, suits for 9th grade The examination time is 60 minutes for section 1, 60 minutes for section 2. Levels 5 - 3 include the following subjects: Calculation with negative numbers Inequalities Simultaneous equations Congruency and similarities Square roots Factorization Quadratic equations and functions The Pythagorean theorem Probabilities Level pre-2 (10th grade) The examination time is 60 minutes for section 1, 90 minutes for section 2. Level 2 (11th grade) The examination time is 60 minutes for section 1, 90 minutes for section 2. Level pre-1st (12th grade) The examination time is 60 minutes for section 1, 120 minutes for section 2. Levels pre-2 - pre-1 include the following subjects: Quadratic functions Trigonometry Sequences Vectors Complex numbers Basic calculus Matrices Simple curved lines Probability Level 1 (undergrad and graduate) The examination time is 60 minutes for section 1, 120 minutes for section 2. Level 1 includes the following subjects: Linear algebra Vectors Matrices Differential equations Statistics Probability The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are all of the digits that can be known with certainty in a measurement plus an estimatied last digit called? A. determined figures B. significant figures C. miniature figures D. important figures Answer:
sciq-6515
multiple_choice
What is the term for a symbiotic relationship in which one species benefits while the other species is not affected?
[ "pollenation", "parasitism", "mutualism", "commensalism" ]
D
Relavent Documents: Document 0::: Mutualism describes the ecological interaction between two or more species where each species has a net benefit. Mutualism is a common type of ecological interaction. Prominent examples include most vascular plants engaged in mutualistic interactions with mycorrhizae, flowering plants being pollinated by animals, vascular plants being dispersed by animals, and corals with zooxanthellae, among many others. Mutualism can be contrasted with interspecific competition, in which each species experiences reduced fitness, and exploitation, or parasitism, in which one species benefits at the expense of the other. The term mutualism was introduced by Pierre-Joseph van Beneden in his 1876 book Animal Parasites and Messmates to mean "mutual aid among species". Mutualism is often conflated with two other types of ecological phenomena: cooperation and symbiosis. Cooperation most commonly refers to increases in fitness through within-species (intraspecific) interactions, although it has been used (especially in the past) to refer to mutualistic interactions, and it is sometimes used to refer to mutualistic interactions that are not obligate. Symbiosis involves two species living in close physical contact over a long period of their existence and may be mutualistic, parasitic, or commensal, so symbiotic relationships are not always mutualistic, and mutualistic interactions are not always symbiotic. Despite a different definition between mutualistic interactions and symbiosis, mutualistic and symbiosis have been largely used interchangeably in the past, and confusion on their use has persisted. Mutualism plays a key part in ecology and evolution. For example, mutualistic interactions are vital for terrestrial ecosystem function as about 80% of land plants species rely on mycorrhizal relationships with fungi to provide them with inorganic compounds and trace elements. As another example, the estimate of tropical rainforest plants with seed dispersal mutualisms with animals ranges Document 1::: In ecology, a biological interaction is the effect that a pair of organisms living together in a community have on each other. They can be either of the same species (intraspecific interactions), or of different species (interspecific interactions). These effects may be short-term, or long-term, both often strongly influence the adaptation and evolution of the species involved. Biological interactions range from mutualism, beneficial to both partners, to competition, harmful to both partners. Interactions can be direct when physical contact is established or indirect, through intermediaries such as shared resources, territories, ecological services, metabolic waste, toxins or growth inhibitors. This type of relationship can be shown by net effect based on individual effects on both organisms arising out of relationship. Several recent studies have suggested non-trophic species interactions such as habitat modification and mutualisms can be important determinants of food web structures. However, it remains unclear whether these findings generalize across ecosystems, and whether non-trophic interactions affect food webs randomly, or affect specific trophic levels or functional groups. History Although biological interactions, more or less individually, were studied earlier, Edward Haskell (1949) gave an integrative approach to the thematic, proposing a classification of "co-actions", later adopted by biologists as "interactions". Close and long-term interactions are described as symbiosis; symbioses that are mutually beneficial are called mutualistic. The term symbiosis was subject to a century-long debate about whether it should specifically denote mutualism, as in lichens or in parasites that benefit themselves. This debate created two different classifications for biotic interactions, one based on the time (long-term and short-term interactions), and other based on the magnitud of interaction force (competition/mutualism) or effect of individual fitness, accordi Document 2::: Commensalism is a long-term biological interaction (symbiosis) in which members of one species gain benefits while those of the other species neither benefit nor are harmed. This is in contrast with mutualism, in which both organisms benefit from each other; amensalism, where one is harmed while the other is unaffected; and parasitism, where one is harmed and the other benefits. The commensal (the species that benefits from the association) may obtain nutrients, shelter, support, or locomotion from the host species, which is substantially unaffected. The commensal relation is often between a larger host and a smaller commensal; the host organism is unmodified, whereas the commensal species may show great structural adaptation consistent with its habits, as in the remoras that ride attached to sharks and other fishes. Remoras feed on their hosts' fecal matter, while pilot fish feed on the leftovers of their hosts' meals. Numerous birds perch on bodies of large mammal herbivores or feed on the insects turned up by grazing mammals. Etymology The word "commensalism" is derived from the word "commensal", meaning "eating at the same table" in human social interaction, which in turn comes through French from the Medieval Latin commensalis, meaning "sharing a table", from the prefix com-, meaning "together", and mensa, meaning "table" or "meal". Commensality, at the Universities of Oxford and Cambridge, refers to professors eating at the same table as students (as they live in the same "college"). Pierre-Joseph van Beneden introduced the term "commensalism" in 1876. Examples of commensal relationships The commensal pathway was traveled by animals that fed on refuse around human habitats or by animals that preyed on other animals drawn to human camps. Those animals established a commensal relationship with humans in which the animals benefited but the humans received little benefit or harm. Those animals that were most capable of taking advantage of the resources associ Document 3::: The hypothesis or paradigm of Mutualism Parasitism Continuum postulates that compatible host-symbiont associations can occupy a broad continuum of interactions with different fitness outcomes for each member. At one end of the continuum lies obligate mutualism where both host and symbiont benefit from the interaction and are dependent on it for survival. At the other end of the continuum highly parasitic interactions can occur, where one member gains a fitness benefit at the expense of the others survival. Between these extremes many different types of interaction are possible. The degree of change between mutualism or parasitism varies depending on the availability of resources, where there is environmental stress generated by few resources, symbiotic relationships are formed while in environments where there is an excess of resources, biological interactions turn to competition and parasitism. Classically the transmission mode of the symbiont can also be important in predicting where on the mutualism-parasitism-continuum an interaction will sit. Symbionts that are vertically transmitted (inherited symbionts) frequently occupy mutualism space on the continuum, this is due to the aligned reproductive interests between host and symbiont that are generated under vertical transmission. In some systems increases in the relative contribution of horizontal transmission can drive selection for parasitism. Studies of this hypothesis have focused on host-symbiont models of plants and fungi, and also of animals and microbes. See also Red King Hypothesis Red Queen Hypothesis Black Queen Hypothesis Biological interaction Document 4::: Ecological facilitation or probiosis describes species interactions that benefit at least one of the participants and cause harm to neither. Facilitations can be categorized as mutualisms, in which both species benefit, or commensalisms, in which one species benefits and the other is unaffected. This article addresses both the mechanisms of facilitation and the increasing information available concerning the impacts of facilitation on community ecology. Categories There are two basic categories of facilitative interactions: Mutualism is an interaction between species that is beneficial to both. A familiar example of a mutualism is the relationship between flowering plants and their pollinators. The plant benefits from the spread of pollen between flowers, while the pollinator receives some form of nourishment, either from nectar or the pollen itself. Commensalism is an interaction in which one species benefits and the other species is unaffected. Epiphytes (plants growing on other plants, usually trees) have a commensal relationship with their host plant because the epiphyte benefits in some way (e.g., by escaping competition with terrestrial plants or by gaining greater access to sunlight) while the host plant is apparently unaffected. Strict categorization, however, is not possible for some complex species interactions. For example, seed germination and survival in harsh environments is often higher under so-called nurse plants than on open ground. A nurse plant is one with an established canopy, beneath which germination and survival are more likely due to increased shade, soil moisture, and nutrients. Thus, the relationship between seedlings and their nurse plants is commensal. However, as the seedlings grow into established plants, they are likely to compete with their former benefactors for resources. Mechanisms The beneficial effects of species on one another are realized in various ways, including refuge from physical stress, predation, and competi The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the term for a symbiotic relationship in which one species benefits while the other species is not affected? A. pollenation B. parasitism C. mutualism D. commensalism Answer:
sciq-8062
multiple_choice
What part of the body do hookworms infest?
[ "intestines", "skin", "lungs", "brain" ]
A
Relavent Documents: Document 0::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New Document 1::: Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals. Education Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered. Bachelor degree At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs. Pre-veterinary emphasis Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th Document 2::: The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work. History It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council. Function Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres. STEM ambassadors To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. these come from a wide selection of the STEM industries and include TV personalities like Rob Bell. Funding STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments. See also The WISE Campaign Engineering and Physical Sciences Research Council National Centre for Excellence in Teaching Mathematics Association for Science Education Glossary of areas of mathematics Glossary of astronomy Glossary of biology Glossary of chemistry Glossary of engineering Glossary of physics Document 3::: Acetabulum (plural acetabula) in invertebrate zoology is a saucer-shaped organ of attachment in some annelid worms (like leech) and flatworms. It is a specialised sucker for parasitic adaptation in trematodes by which the worms are able to attach on the host. In annelids, it is basically a locomotory organ for attaching to a substratum. The name also applies to the suction appendage on the arms of cephalopod molluscs such as squid, octopus, cuttlefish, Nautilus, etc. Etymology Acetabulum literally means "a small saucer for vinegar". It is derived from two Latin words acetum, meaning "vinegar", and -bulum, a suffix denoting "saucer" or "vessel" or "bowl". The name is used because of the saucer-like structure in the invertebrates. Structure Annelids In leeches, acetabulum refers to the prominent posterior sucker at the extreme end of the body. In fact it forms a head-like structure, while the actual head is relatively small. It is a thick disc-shaped muscular system composed of circular, longitudinal and radial fibers. Trematode In flatworms, acetabulum is the ventral sucker situated towards the anterior part of the body, but behind the anterior oral sucker. It is composed of numerous spines for penetrating and gripping the host tissue. The location and structure of the acetabulum, and the pattern of the spine alignment are important diagnostic tool among trematode species. Mollusc Acetabulum in molluscs is a circular hollow opening on the arms. It occupies the central portion of the sucker and surrounded by a larger spherical cavity infundibulum. Both these structures are thick muscles, and the acetabulum is specifically composed of radial muscles. They are covered with chitinous cuticle to make a protective surface. Function Acetabulum is essentially an organ of attachment. In annelids, it is used for adherence to the substratum during a looping locomotion. Annelid worms such as leeches move by repeated alternating extensions and shortenings of the body. Document 4::: This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines. Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry) other on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mindneuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, it has provided information on certain diseases which has overall aided in the understanding of human health. Basic life science branches Biology – scientific study of life Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans Astrobiology – the study of the formation and presence of life in the universe Bacteriology – study of bacteria Biotechnology – study of combination of both the living organism and technology Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge Biolinguistics – the study of the biology and evolution of language. Biological anthropology – the study of humans, non-hum The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What part of the body do hookworms infest? A. intestines B. skin C. lungs D. brain Answer:
sciq-2787
multiple_choice
What are two types of lobe finned fish?
[ "piranha and pike", "sharks and piranha", "moles and lungfish", "coelacanths and lungfish" ]
D
Relavent Documents: Document 0::: Fisheries science is the academic discipline of managing and understanding fisheries. It is a multidisciplinary science, which draws on the disciplines of limnology, oceanography, freshwater biology, marine biology, meteorology, conservation, ecology, population dynamics, economics, statistics, decision analysis, management, and many others in an attempt to provide an integrated picture of fisheries. In some cases new disciplines have emerged, as in the case of bioeconomics and fisheries law. Because fisheries science is such an all-encompassing field, fisheries scientists often use methods from a broad array of academic disciplines. Over the most recent several decades, there have been declines in fish stocks (populations) in many regions along with increasing concern about the impact of intensive fishing on marine and freshwater biodiversity. Fisheries science is typically taught in a university setting, and can be the focus of an undergraduate, master's or Ph.D. program. Some universities offer fully integrated programs in fisheries science. Graduates of university fisheries programs typically find employment as scientists, fisheries managers of both recreational and commercial fisheries, researchers, aquaculturists, educators, environmental consultants and planners, conservation officers, and many others. Fisheries research Because fisheries take place in a diverse set of aquatic environments (i.e., high seas, coastal areas, large and small rivers, and lakes of all sizes), research requires different sampling equipment, tools, and techniques. For example, studying trout populations inhabiting mountain lakes requires a very different set of sampling tools than, say, studying salmon in the high seas. Ocean fisheries research vessels (FRVs) often require platforms which are capable of towing different types of fishing nets, collecting plankton or water samples from a range of depths, and carrying acoustic fish-finding equipment. Fisheries research vessels a Document 1::: Fish intelligence is the resultant of the process of acquiring, storing in memory, retrieving, combining, comparing, and using in new contexts information and conceptual skills" as it applies to fish. According to Culum Brown from Macquarie University, "Fish are more intelligent than they appear. In many areas, such as memory, their cognitive powers match or exceed those of ‘higher’ vertebrates including non-human primates." Fish hold records for the relative brain weights of vertebrates. Most vertebrate species have similar brain-to-body mass ratios. The deep sea bathypelagic bony-eared assfish has the smallest ratio of all known vertebrates. At the other extreme, the electrogenic elephantnose fish, an African freshwater fish, has one of the largest brain-to-body weight ratios of all known vertebrates (slightly higher than humans) and the highest brain-to-body oxygen consumption ratio of all known vertebrates (three times that for humans). Brain Fish typically have quite small brains relative to body size compared with other vertebrates, typically one-fifteenth the brain mass of a similarly sized bird or mammal. However, some fish have relatively large brains, most notably mormyrids and sharks, which have brains about as massive relative to body weight as birds and marsupials. The cerebellum of cartilaginous and bony fishes is large and complex. In at least one important respect, it differs in internal structure from the mammalian cerebellum: The fish cerebellum does not contain discrete deep cerebellar nuclei. Instead, the primary targets of Purkinje cells are a distinct type of cell distributed across the cerebellar cortex, a type not seen in mammals. The circuits in the cerebellum are similar across all classes of vertebrates, including fish, reptiles, birds, and mammals. There is also an analogous brain structure in cephalopods with well-developed brains, such as octopuses. This has been taken as evidence that the cerebellum performs functions important to Document 2::: Age class structure in fisheries and wildlife management is a part of population assessment. Age class structures can be used to model many populations including trees and fish. This method can be used to predict the occurrence of forest fires within a forest population. Age can be determined by counting growth rings in fish scales, otoliths, cross-sections of fin spines for species with thick spines such as triggerfish, or teeth for a few species. Each method has its merits and drawbacks. Fish scales are easiest to obtain, but may be unreliable if scales have fallen off the fish and new ones grown in their places. Fin spines may be unreliable for the same reason, and most fish do not have spines of sufficient thickness for clear rings to be visible. Otoliths will have stayed with the fish throughout its life history, but obtaining them requires killing the fish. Also, otoliths often require more preparation before ageing can occur. Analyzing fisheries age class structure An example of using age class structure to learn about a population is a regular bell curve for the population of 1-5 year-old fish with a very low population for the 3-year-olds. An age class structure with gaps in population size like the one described earlier implies a bad spawning year 3 years ago in that species. Often fish in younger age class structures have very low numbers because they were small enough to slip through the sampling nets, and may in fact have a very healthy population. See also Identification of aging in fish Population pyramid Population dynamics of fisheries Document 3::: Otolith microchemical analysis is a technique used in fisheries management and fisheries biology to delineate stocks and characterize movements, and natal origin of fish. The concentrations of elements and isotopes in otoliths are compared to those in the water in which the fish inhabits in order to identify where it has been. In non-ostariophysian fishes, the largest of the three otoliths, or ear bones, the sagitta is analyzed by one of several methods to determine the concentrations of various trace elements and stable isotopes. In ostariophysian fishes, the lapilli is the largest otolith and may be more commonly analysed. Relevance Fisheries management requires intimate knowledge of fish life history traits. Migration patterns and spawning areas are key life history traits in the management of many species. If a fish is migrating between two regions that are managed separately then it will be managed as two separate stocks unless this migration can be understood. If this migration is not discovered then overfishing of the stock may occur because managers assume there is double the amount of fish. In the past costly and inefficient tag and recapture studies were needed to discover such migration patterns. Today otolith microchemistry provides a simpler way to assess migration patterns of fish. Otolith microchemistry has been used to identify and delineate Atlantic cod stocks in Canadian waters. It has also been used to determine the migratory patterns of anadromous whitefish. Natal origin is equally critical to understand because areas where fish spawn and inhabit during their critical larval period must be identified and protected. Natal origin is also important in determining whether regions are sources or sinks for stocks of fish. In the past natal origin had to be assumed based upon collection on spawning grounds. In recent years otolith microchemistry has shown that this is not always the case. It has provided an accurate way to assess the natal origin Document 4::: Fish measurement is the measuring of individual fish and various parts of their anatomies, for data used in many areas of ichthyology, including taxonomy and fishery biology. Overall length Standard length (SL) is the length of a fish measured from the tip of the snout to the posterior end of the last vertebra or to the posterior end of the midlateral portion of the hypural plate. This measurement excludes the length of the caudal (tail) fin. Total length (TL) is the length of a fish measured from the tip of the snout to the tip of the longer lobe of the caudal fin, usually measured with the lobes compressed along the midline. It is a straight-line measure, not measured over the curve of the body. Standard length measurements are used with Teleostei (most bony fish), while total length measurements are used with Myxini (hagfish), Petromyzontiformes (lampreys) and usually Elasmobranchii (sharks and rays), as well as some other fishes. Total length measurements are used in slot limit and minimum landing size regulations. Fishery biologists often use a third measure in fishes with forked tails, called Fork length (FL), the length of a fish measured from the tip of the snout to the end of the middle caudal fin rays, and is used in fishes in which it is difficult to tell where the vertebral column ends. Fin lengths and eye diameter Other possible measurements include the lengths of various fins, the lengths of fin bases and the diameter of the eye. See also Ichthyology terms Standard weight in fish The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are two types of lobe finned fish? A. piranha and pike B. sharks and piranha C. moles and lungfish D. coelacanths and lungfish Answer:
sciq-11359
multiple_choice
What term is used to describe large numbers of species that go extinct in a short amount of time?
[ "mass extinction", "species extinction", "formation extinction", "organic extinction" ]
A
Relavent Documents: Document 0::: Extinction is the termination of a taxon by the death of its last member. A taxon may become functionally extinct before the death of its last member if it loses the capacity to reproduce and recover. Because a species' potential range may be very large, determining this moment is difficult, and is usually done retrospectively. This difficulty leads to phenomena such as Lazarus taxa, where a species presumed extinct abruptly "reappears" (typically in the fossil record) after a period of apparent absence. More than 99% of all species that ever lived on Earth, amounting to over five billion species, are estimated to have died out. It is estimated that there are currently around 8.7 million species of eukaryote globally, and possibly many times more if microorganisms, like bacteria, are included. Notable extinct animal species include non-avian dinosaurs, saber-toothed cats, dodos, mammoths, ground sloths, thylacines, trilobites, and golden toads. Through evolution, species arise through the process of speciation—where new varieties of organisms arise and thrive when they are able to find and exploit an ecological niche—and species become extinct when they are no longer able to survive in changing conditions or against superior competition. The relationship between animals and their ecological niches has been firmly established. A typical species becomes extinct within 10 million years of its first appearance, although some species, called living fossils, survive with little to no morphological change for hundreds of millions of years. Mass extinctions are relatively rare events; however, isolated extinctions of species and clades are quite common, and are a natural part of the evolutionary process. Only recently have extinctions been recorded and scientists have become alarmed at the current high rate of extinctions. Most species that become extinct are never scientifically documented. Some scientists estimate that up to half of presently existing plant and animal Document 1::: Background extinction rate, also known as the normal extinction rate, refers to the standard rate of extinction in Earth's geological and biological history before humans became a primary contributor to extinctions. This is primarily the pre-human extinction rates during periods in between major extinction events. Currently there have been five mass extinctions that have happened since the beginning of time all resulting in a variety of reasons. Overview Extinctions are a normal part of the evolutionary process, and the background extinction rate is a measurement of "how often" they naturally occur. Normal extinction rates are often used as a comparison to present day extinction rates, to illustrate the higher frequency of extinction today than in all periods of non-extinction events before it. Background extinction rates have not remained constant, although changes are measured over geological time, covering millions of years. Measurement Background extinction rates are typically measured in order to give a specific classification to a species and this is obtained over a certain period of time. There is three different ways to calculate background extinction rate.. The first is simply the number of species that normally go extinct over a given period of time. For example, at the background rate one species of bird will go extinct every estimated 400 years. Another way the extinction rate can be given is in million species years (MSY). For example, there is approximately one extinction estimated per million species years. From a purely mathematical standpoint this means that if there are a million species on the planet earth, one would go extinct every year, while if there was only one species it would go extinct in one million years, etc. The third way is in giving species survival rates over time. For example, given normal extinction rates species typically exist for 5–10 million years before going extinct. Lifespan estimates Some species lifespan es Document 2::: Plant evolution is the subset of evolutionary phenomena that concern plants. Evolutionary phenomena are characteristics of populations that are described by averages, medians, distributions, and other statistical methods. This distinguishes plant evolution from plant development, a branch of developmental biology which concerns the changes that individuals go through in their lives. The study of plant evolution attempts to explain how the present diversity of plants arose over geologic time. It includes the study of genetic change and the consequent variation that often results in speciation, one of the most important types of radiation into taxonomic groups called clades. A description of radiation is called a phylogeny and is often represented by type of diagram called a phylogenetic tree. Evolutionary trends Differences between plant and animal physiology and reproduction cause minor differences in how they evolve. One major difference is the totipotent nature of plant cells, allowing them to reproduce asexually much more easily than most animals. They are also capable of polyploidy – where more than two chromosome sets are inherited from the parents. This allows relatively fast bursts of evolution to occur, for example by the effect of gene duplication. The long periods of dormancy that seed plants can employ also makes them less vulnerable to extinction, as they can "sit out" the tough periods and wait until more clement times to leap back to life. The effect of these differences is most profoundly seen during extinction events. These events, which wiped out between 6 and 62% of terrestrial animal families, had "negligible" effect on plant families. However, the ecosystem structure is significantly rearranged, with the abundances and distributions of different groups of plants changing profoundly. These effects are perhaps due to the higher diversity within families, as extinction – which was common at the species level – was very selective. For example, win Document 3::: This article is a list of biological species, subspecies, and evolutionary significant units that are known to have become extinct during the Holocene, the current geologic epoch, ordered by their known or approximate date of disappearance from oldest to most recent. The Holocene is considered to have started with the Holocene glacial retreat around 11650 years Before Present ( BC). It is characterized by a general trend towards global warming, the expansion of anatomically modern humans (Homo sapiens) to all emerged land masses, the appearance of agriculture and animal husbandry, and a reduction in global biodiversity. The latter, dubbed the sixth mass extinction in Earth history, is largely attributed to increased human population and activity, and may have started already during the preceding Pleistocene epoch with the demise of the Pleistocene megafauna. The following list is incomplete by necessity, since the majority of extinctions are thought to be undocumented, and for many others there isn't a definitive, widely accepted last, or most recent record. According to the species-area theory, the present rate of extinction may be up to 140,000 species per year. 10th millennium BC 9th millennium BC 8th millennium BC 7th millennium BC 6th millennium BC 5th millennium BC 4th millennium BC 3rd millennium BC 2nd millennium BC 1st millennium BC 1st millennium CE 1st–5th centuries 6th–10th centuries 2nd millennium CE 11th-12th century 13th-14th century 15th-16th century 17th century 18th century 19th century 1800s-1820s 1830s-1840s 1850s-1860s 1870s 1880s 1890s 20th century 1900s 1910s 1920s 1930s 1940s 1950s 1960s 1970s 1980s 1990s 3rd millennium CE 21st century 2000s 2010s See also List of extinct animals Extinction event Quaternary extinction event Holocene extinction Timeline of the evolutionary history of life Timeline of environmental history Index of environmental articles List of environmental issues Document 4::: Biodiversity loss includes the worldwide extinction of different species, as well as the local reduction or loss of species in a certain habitat, resulting in a loss of biological diversity. The latter phenomenon can be temporary or permanent, depending on whether the environmental degradation that leads to the loss is reversible through ecological restoration/ecological resilience or effectively permanent (e.g. through land loss). The current global extinction (frequently called the sixth mass extinction or Anthropocene extinction), has resulted in a biodiversity crisis being driven by human activities which push beyond the planetary boundaries and so far has proven irreversible. The main direct threats to conservation (and thus causes for biodiversity loss) fall in eleven categories: Residential and commercial development; farming activities; energy production and mining; transportation and service corridors; biological resource usages; human intrusions and activities that alter, destroy, disturb habitats and species from exhibiting natural behaviors; natural system modification; invasive and problematic species, pathogens and genes; pollution; catastrophic geological events, climate change, and so on. Numerous scientists and the IPBES Global Assessment Report on Biodiversity and Ecosystem Services assert that human population growth and overconsumption are the primary factors in this decline. However other scientists have criticized this, saying that loss of habitat is caused mainly by "the growth of commodities for export" and that population has very little to do with overall consumption, due to country wealth disparities. Climate change is another threat to global biodiversity. For example, coral reefs – which are biodiversity hotspots – will be lost within the century if global warming continues at the current rate. However, habitat destruction e.g. for the expansion of agriculture, is currently the more significant driver of contemporary biodiversity lo The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What term is used to describe large numbers of species that go extinct in a short amount of time? A. mass extinction B. species extinction C. formation extinction D. organic extinction Answer:
sciq-2615
multiple_choice
Eukaryotic cells contain what type of structures that possess special functions?
[ "organelles", "chloroplasts", "fibers", "cell membranes" ]
A
Relavent Documents: Document 0::: Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure. General characteristics There are two types of cells: prokaryotes and eukaryotes. Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles. Prokaryotes Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement). Eukaryotes Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str Document 1::: Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, double stranded macromolecule that carries the hereditary information of the cell and found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence. Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism. Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry. See also Cell (biology) Cell biology Biomolecule Organelle Tissue (biology) External links https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm Document 2::: The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'. Cells can acquire specified function and carry out various tasks within the cell such as replication, DNA repair, protein synthesis, and motility. Cells are capable of specialization and mobility within the cell. Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms. The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology. Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago. Discovery With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i Document 3::: In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from same type cells to act together in a function. Tissues of different types combine to form an organ which has a specific function. The intestinal wall for example is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system. An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs. The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body. Animals Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. The sam Document 4::: A cell type is a classification used to identify cells that share morphological or phenotypical features. A multicellular organism may contain cells of a number of widely differing and specialized cell types, such as muscle cells and skin cells, that differ both in appearance and function yet have identical genomic sequences. Cells may have the same genotype, but belong to different cell types due to the differential regulation of the genes they contain. Classification of a specific cell type is often done through the use of microscopy (such as those from the cluster of differentiation family that are commonly used for this purpose in immunology). Recent developments in single cell RNA sequencing facilitated classification of cell types based on shared gene expression patterns. This has led to the discovery of many new cell types in e.g. mouse cortex, hippocampus, dorsal root ganglion and spinal cord. Animals have evolved a greater diversity of cell types in a multicellular body (100–150 different cell types), compared with 10–20 in plants, fungi, and protists. The exact number of cell types is, however, undefined, and the Cell Ontology, as of 2021, lists over 2,300 different cell types. Multicellular organisms All higher multicellular organisms contain cells specialised for different functions. Most distinct cell types arise from a single totipotent cell that differentiates into hundreds of different cell types during the course of development. Differentiation of cells is driven by different environmental cues (such as cell–cell interaction) and intrinsic differences (such as those caused by the uneven distribution of molecules during division). Multicellular organisms are composed of cells that fall into two fundamental types: germ cells and somatic cells. During development, somatic cells will become more specialized and form the three primary germ layers: ectoderm, mesoderm, and endoderm. After formation of the three germ layers, cells will continue to special The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Eukaryotic cells contain what type of structures that possess special functions? A. organelles B. chloroplasts C. fibers D. cell membranes Answer:
sciq-7420
multiple_choice
The loss of oxygen to the heart muscle causes that part of the tissue to what?
[ "thrive", "beat erratically", "beat harder", "die" ]
D
Relavent Documents: Document 0::: Cardiac muscle (also called heart muscle or myocardium) is one of three types of vertebrate muscle tissues, with the other two being skeletal muscle and smooth muscle. It is an involuntary, striated muscle that constitutes the main tissue of the wall of the heart. The cardiac muscle (myocardium) forms a thick middle layer between the outer layer of the heart wall (the pericardium) and the inner layer (the endocardium), with blood supplied via the coronary circulation. It is composed of individual cardiac muscle cells joined by intercalated discs, and encased by collagen fibers and other substances that form the extracellular matrix. Cardiac muscle contracts in a similar manner to skeletal muscle, although with some important differences. Electrical stimulation in the form of a cardiac action potential triggers the release of calcium from the cell's internal calcium store, the sarcoplasmic reticulum. The rise in calcium causes the cell's myofilaments to slide past each other in a process called excitation-contraction coupling. Diseases of the heart muscle known as cardiomyopathies are of major importance. These include ischemic conditions caused by a restricted blood supply to the muscle such as angina, and myocardial infarction. Structure Gross anatomy Cardiac muscle tissue or myocardium forms the bulk of the heart. The heart wall is a three-layered structure with a thick layer of myocardium sandwiched between the inner endocardium and the outer epicardium (also known as the visceral pericardium). The inner endocardium lines the cardiac chambers, covers the cardiac valves, and joins with the endothelium that lines the blood vessels that connect to the heart. On the outer aspect of the myocardium is the epicardium which forms part of the pericardial sac that surrounds, protects, and lubricates the heart. Within the myocardium, there are several sheets of cardiac muscle cells or cardiomyocytes. The sheets of muscle that wrap around the left ventricle clos Document 1::: A cardiac function curve is a graph showing the relationship between right atrial pressure (x-axis) and cardiac output (y-axis).Superimposition of the cardiac function curve and venous return curve is used in one hemodynamic model. Shape of curve It shows a steep relationship at relatively low filling pressures and a plateau, where further stretch is not possible and so increases in pressure have little effect on output. The pressures where there is a steep relationship lie within the normal range of right atrial pressure (RAP) found in the healthy human during life. This range is about -1 to +2 mmHg. The higher pressures normally occur only in disease, in conditions such as heart failure, where the heart is unable to pump forward all the blood returning to it and so the pressure builds up in the right atrium and the great veins. Swollen neck veins are often an indicator of this type of heart failure. At low right atrial pressures this graph serves as a graphic demonstration of the Frank–Starling mechanism, that is as more blood is returned to the heart, more blood is pumped from it without extrinsic signals. Changes in the cardiac function curve In vivo however, extrinsic factors such as an increase in activity of the sympathetic nerves, and a decrease in vagal tone cause the heart to beat more frequently and more forcefully. This alters the cardiac function curve, shifting it upwards. This allows the heart to cope with the required cardiac output at a relatively low right atrial pressure. We get what is known as a family of cardiac function curves, as the heart rate increases before the plateau is reached, and without the RAP having to rise dramatically to stretch the heart more and get the Starling effect. In vivo sympathetic outflow within the myocardium is probably best described by the time honored description of the sinoatrial tree branching out to Purkinges fibers. Parasympathetic inflow within the myocardium is probably best described by influ Document 2::: The Frank–Starling law of the heart (also known as Starling's law and the Frank–Starling mechanism) represents the relationship between stroke volume and end diastolic volume. The law states that the stroke volume of the heart increases in response to an increase in the volume of blood in the ventricles, before contraction (the end diastolic volume), when all other factors remain constant. As a larger volume of blood flows into the ventricle, the blood stretches cardiac muscle, leading to an increase in the force of contraction. The Frank-Starling mechanism allows the cardiac output to be synchronized with the venous return, arterial blood supply and humoral length, without depending upon external regulation to make alterations. The physiological importance of the mechanism lies mainly in maintaining left and right ventricular output equality. Physiology The Frank-Starling mechanism occurs as the result of the length-tension relationship observed in striated muscle, including for example skeletal muscles, arthropod muscle and cardiac (heart) muscle. As striated muscle is stretched, active tension is created by altering the overlap of thick and thin filaments. The greatest isometric active tension is developed when a muscle is at its optimal length. In most relaxed skeletal muscle fibers, passive elastic properties maintain the muscle fibers length near optimal, as determined usually by the fixed distance between the attachment points of tendons to the bones (or the exoskeleton of arthropods) at either end of the muscle. In contrast, the relaxed sarcomere length of cardiac muscle cells, in a resting ventricle, is lower than the optimal length for contraction. There is no bone to fix sarcomere length in the heart (of any animal) so sarcomere length is very variable and depends directly upon blood filling and thereby expanding the heart chambers. In the human heart, maximal force is generated with an initial sarcomere length of 2.2 micrometers, a length which is rare Document 3::: In cardiology, ventricular remodeling (or cardiac remodeling) refers to changes in the size, shape, structure, and function of the heart. This can happen as a result of exercise (physiological remodeling) or after injury to the heart muscle (pathological remodeling). The injury is typically due to acute myocardial infarction (usually transmural or ST segment elevation infarction), but may be from a number of causes that result in increased pressure or volume, causing pressure overload or volume overload (forms of strain) on the heart. Chronic hypertension, congenital heart disease with intracardiac shunting, and valvular heart disease may also lead to remodeling. After the insult occurs, a series of histopathological and structural changes occur in the left ventricular myocardium that lead to progressive decline in left ventricular performance. Ultimately, ventricular remodeling may result in diminished contractile (systolic) function and reduced stroke volume. Physiological remodeling is reversible while pathological remodeling is mostly irreversible. Remodeling of the ventricles under left/right pressure demand make mismatches inevitable. Pathologic pressure mismatches between the pulmonary and systemic circulation guide compensatory remodeling of the left and right ventricles. The term "reverse remodeling" in cardiology implies an improvement in ventricular mechanics and function following a remote injury or pathological process. Ventricular remodeling may include ventricular hypertrophy, ventricular dilation, cardiomegaly, and other changes. It is an aspect of cardiomyopathy, of which there are many types. Concentric hypertrophy is due to pressure overload, while eccentric hypertrophy is due to volume overload. Pathophysiology The cardiac myocyte is the major cell involved in remodeling. Fibroblasts, collagen, the interstitium, and the coronary vessels to a lesser extent, also play a role. A common scenario for remodeling is after myocardial infarction. Ther Document 4::: Cardiophysics is an interdisciplinary science that stands at the junction of cardiology and medical physics, with researchers using the methods of, and theories from, physics to study cardiovascular system at different levels of its organisation, from the molecular scale to whole organisms. Being formed historically as part of systems biology, cardiophysics designed to reveal connections between the physical mechanisms, underlying the organization of the cardiovascular system, and biological features of its functioning. Zbigniew R. Struzik seems to be a first author who used the term in a scientific publication in 2004. One can use interchangeably also the terms cardiovascular physics. See also Medical physics Important publications in medical physics Biomedicine Biomedical engineering Physiome Nanomedicine The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The loss of oxygen to the heart muscle causes that part of the tissue to what? A. thrive B. beat erratically C. beat harder D. die Answer:
sciq-3250
multiple_choice
What kind of path does the energy of an electromagnetic wave take?
[ "straight line", "circuitous", "fluctuating", "elliptical" ]
A
Relavent Documents: Document 0::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 3::: Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women. The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development. Current status of girls and women in STEM education Overall trends in STEM education Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle. Learning achievement in STEM education Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and Document 4::: There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework. AP Physics 1 and 2 AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge. AP Physics 1 AP Physics 1 covers Newtonian mechanics, including: Unit 1: Kinematics Unit 2: Dynamics Unit 3: Circular Motion and Gravitation Unit 4: Energy Unit 5: Momentum Unit 6: Simple Harmonic Motion Unit 7: Torque and Rotational Motion Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2. AP Physics 2 AP Physics 2 covers the following topics: Unit 1: Fluids Unit 2: Thermodynamics Unit 3: Electric Force, Field, and Potential Unit 4: Electric Circuits Unit 5: Magnetism and Electromagnetic Induction Unit 6: Geometric and Physical Optics Unit 7: Quantum, Atomic, and Nuclear Physics AP Physics C From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What kind of path does the energy of an electromagnetic wave take? A. straight line B. circuitous C. fluctuating D. elliptical Answer:
sciq-2937
multiple_choice
Root-like projections anchor adults of what colony-dwelling animals to solid surfaces such as rocks and reefs?
[ "corals", "anemones", "sponges", "molluscs" ]
C
Relavent Documents: Document 0::: Pseudoplanktonic organisms are those that attach themselves to planktonic organisms or other floating objects, such as drifting wood, buoyant shells of organisms such as Spirula, or man-made flotsam. Examples include goose barnacles and the bryozoan Jellyella. By themselves these animals cannot float, which contrasts them with true planktonic organisms, such as Velella and the Portuguese Man o' War, which are buoyant. Pseudoplankton are often found in the guts of filtering zooplankters. Document 1::: One of the marine ecosystems found in the Virgin Islands are the coral reefs. These coral reefs can be located between the islands of St. Croix, St. Thomas, and St. John. These coral reefs have an area of 297.9 km2, along with other marine habitats that are in between. The way these coral reefs grow are by coral larvae swimming freely and attaching themselves to hard surfaces around the islands and start to develop a skeleton on the outside of their skin to protect themselves from predators but also allow a new place for other coral larvae to attach to and grow on. These corals can form into three different structures; fringing reefs, which are reefs that are close to the shore, barrier reefs, which are reefs that are alongside the shore and is separated by deep water, and an atoll reef which is a coral reef that circles a lagoon or body of water. Distribution As stated, the coral reefs such as fringing reefs, deep reefs, patch reefs and spur and groove formation are distributed over three islands in the Virgin Islands which are St. Croix (Salt River Bay National Historical Park and Ecological Preserve, Buck Island Reef National Monument), St. Thomas, and St. John (Virgin Islands Coral Reef National Monument). The coral reefs found offshore of St. Thomas and St. John are distributed patchily around the islands. Additionally, a developed barrier reef system surrounds St. Croix along its eastern and southern shores. Ecology The coral reefs as well as hard-bottom habitat accounts for 297.9 km2. The coral reefs are home to diverse species. There are over 40 species of scleractinian corals and three species of Millepora. Live scleractinian species are found throughout the Virgin Islands, but mainly around Buck Island, St. Croix and St. John. More specifically based on a survey from 2001-2006, listed are a total of 215 fishes from St. John and 202 from St. Croix. Four species of sea turtles are found within the Virgin Islands. The coral reefs are impacted by freshwa Document 2::: A cnidariologist is a zoologist specializing in Cnidaria, a group of freshwater and marine aquatic animals that include the sea anemones, corals, and jellyfish. Examples Edward Thomas Browne (1866-1937) Henry Bryant Bigelow (1879-1967) Randolph Kirkpatrick (1863–1950) Kamakichi Kishinouye (1867-1929) Paul Lassenius Kramp (1887-1975) Alfred G. Mayer (1868-1922) See also Document 3::: Sessility is the biological property of an organism describing its lack of a means of self-locomotion. Sessile organisms for which natural motility is absent are normally immobile. This is distinct from the botanical concept of sessility, which refers to an organism or biological structure attached directly by its base without a stalk. Sessile organisms can move via external forces (such as water currents), but are usually permanently attached to something. Organisms such as corals lay down their own substrate from which they grow. Other sessile organisms grow from a solid object, such as a rock, a dead tree trunk, or a man-made object such as a buoy or ship's hull. Mobility Sessile animals typically have a motile phase in their development. Sponges have a motile larval stage and become sessile at maturity. Conversely, many jellyfish develop as sessile polyps early in their life cycle. In the case of the cochineal, it is in the nymph stage (also called the crawler stage) that the cochineal disperses. The juveniles move to a feeding spot and produce long wax filaments. Later they move to the edge of the cactus pad where the wind catches the wax filaments and carries the tiny larval cochineals to a new host. Reproduction Many sessile animals, including sponges, corals and hydra, are capable of asexual reproduction in situ by the process of budding. Sessile organisms such as barnacles and tunicates need some mechanism to move their young into new territory. This is why the most widely accepted theory explaining the evolution of a larval stage is the need for long-distance dispersal ability. Biologist Wayne Sousa's 1979 study in intertidal disturbance added support for the theory of nonequilibrium community structure, "suggesting that open space is necessary for the maintenance of diversity in most communities of sessile organisms". Clumping Clumping is a behavior in sessile organisms in which individuals of a particular species group closely to one another for ben Document 4::: The Sponge Reef Project is a binational scientific project between Germany and Canada to study the sponge reefs off British Columbia, Canada, reefs formed by sponges of the Hexactinellid family. The project was started in 1999, following the discovery of the reefs in 1991; earlier, this reef type was thought to have existed mainly in the Jurassic period. External links The Sponge Reef Project B.C.'s Reefs Among Science's Great Finds | Straight.com Reefs of the Pacific Ocean Reefs The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Root-like projections anchor adults of what colony-dwelling animals to solid surfaces such as rocks and reefs? A. corals B. anemones C. sponges D. molluscs Answer:
sciq-33
multiple_choice
What type of ions do ionic compounds contain?
[ "positive and charged", "positive and negative", "regular and irregular", "negative and neutal" ]
B
Relavent Documents: Document 0::: An ion () is an atom or molecule with a net electrical charge. The charge of an electron is considered to be negative by convention and this charge is equal and opposite to the charge of a proton, which is considered to be positive by convention. The net charge of an ion is not zero because its total number of electrons is unequal to its total number of protons. A cation is a positively charged ion with fewer electrons than protons while an anion is a negatively charged ion with more electrons than protons. Opposite electric charges are pulled towards one another by electrostatic force, so cations and anions attract each other and readily form ionic compounds. Ions consisting of only a single atom are termed atomic or monatomic ions, while two or more atoms form molecular ions or polyatomic ions. In the case of physical ionization in a fluid (gas or liquid), "ion pairs" are created by spontaneous molecule collisions, where each generated pair consists of a free electron and a positive ion. Ions are also created by chemical interactions, such as the dissolution of a salt in liquids, or by other means, such as passing a direct current through a conducting solution, dissolving an anode via ionization. History of discovery The word ion was coined from Greek neuter present participle of ienai (), meaning "to go". A cation is something that moves down ( pronounced kato, meaning "down") and an anion is something that moves up (, meaning "up"). They are so called because ions move toward the electrode of opposite charge. This term was introduced (after a suggestion by the English polymath William Whewell) by English physicist and chemist Michael Faraday in 1834 for the then-unknown species that goes from one electrode to the other through an aqueous medium. Faraday did not know the nature of these species, but he knew that since metals dissolved into and entered a solution at one electrode and new metal came forth from a solution at the other electrode; that some kind of Document 1::: A monatomic ion (also called simple ion) is an ion consisting of exactly one atom. If, instead of being monatomic, an ion contains more than one atom, even if these are of the same element, it is called a polyatomic ion. For example, calcium carbonate consists of the monatomic cation Ca2+ and the polyatomic anion ; both pentazenium () and azide () are polyatomic as well. A type I binary ionic compound contains a metal that forms only one type of ion. A type II ionic compound contains a metal that forms more than one type of ion, i.e., the same element in different oxidation states. {|class="wikitable" |- ! colspan="2" | Common type I monatomic cations |- | Hydrogen | H+ |- | Lithium | Li+ |- | Sodium | Na+ |- | Potassium | K+ |- | Rubidium | Rb+ |- | Caesium | Cs+ |- | Magnesium | Mg2+ |- | Calcium | Ca2+ |- | Strontium | Sr2+ |- | Barium | Ba2+ |- | Aluminium | Al3+ |- | Silver | Ag+ |- | Zinc | Zn2+ |- |} {|class="wikitable" |- ! colspan="3" | Common type II monatomic cations |- |- | iron(II) | Fe2+ | ferrous |- | iron(III) | Fe3+ | ferric |- | copper(I) | Cu+ | cuprous |- | copper(II) | Cu2+ | cupric |- | cobalt(II) | Co+2 | cobaltous |- | cobalt(III) | Co3+ | cobaltic |- | tin(II) | Sn2+ | stannous |- | tin(IV) | Sn4+ | stannic |} Document 2::: The use of ionic liquids in carbon capture is a potential application of ionic liquids as absorbents for use in carbon capture and sequestration. Ionic liquids, which are salts that exist as liquids near room temperature, are polar, nonvolatile materials that have been considered for many applications. The urgency of climate change has spurred research into their use in energy-related applications such as carbon capture and storage. Carbon capture using absorption Ionic liquids as solvents Amines are the most prevalent absorbent in postcombustion carbon capture technology today. In particular, monoethanolamine (MEA) has been used in industrial scales in postcombustion carbon capture, as well as in other CO2 separations, such as "sweetening" of natural gas. However, amines are corrosive, degrade over time, and require large industrial facilities. Ionic liquids on the other hand, have low vapor pressures . This property results from their strong Coulombic attractive force. Vapor pressure remains low through the substance's thermal decomposition point (typically >300 °C). In principle, this low vapor pressure simplifies their use and makes them "green" alternatives. Additionally, it reduces risk of contamination of the CO2 gas stream and of leakage into the environment. The solubility of CO2 in ionic liquids is governed primarily by the anion, less so by the cation. The hexafluorophosphate (PF6–) and tetrafluoroborate (BF4–) anions have been shown to be especially amenable to CO2 capture. Ionic liquids have been considered as solvents in a variety of liquid-liquid extraction processes, but never commercialized. Beside that, ionic liquids have replaced the conventional volatile solvents in industry such as absorption of gases or extractive distillation. Additionally, ionic liquids are used as co-solutes for the generation of aqueous biphasic systems, or purification of biomolecules. Process A typical CO2 absorption process consists of a feed gas, an absorptio Document 3::: An ionic liquid (IL) is a salt in the liquid state. In some contexts, the term has been restricted to salts whose melting point is below a specific temperature, such as . While ordinary liquids such as water and gasoline are predominantly made of electrically neutral molecules, ionic liquids are largely made of ions. These substances are variously called liquid electrolytes, ionic melts, ionic fluids, fused salts, liquid salts, or ionic glasses. Ionic liquids have many potential applications. They are powerful solvents and can be used as electrolytes. Salts that are liquid at near-ambient temperature are important for electric battery applications, and have been considered as sealants due to their very low vapor pressure. Any salt that melts without decomposing or vaporizing usually yields an ionic liquid. Sodium chloride (NaCl), for example, melts at into a liquid that consists largely of sodium cations () and chloride anions (). Conversely, when an ionic liquid is cooled, it often forms an ionic solid—which may be either crystalline or glassy. The ionic bond is usually stronger than the Van der Waals forces between the molecules of ordinary liquids. Because of these strong interactions, salts tend to have high lattice energies, manifested in high melting points. Some salts, especially those with organic cations, have low lattice energies and thus are liquid at or below room temperature. Examples include compounds based on the 1-ethyl-3-methylimidazolium (EMIM) cation and include: EMIM:Cl, EMIMAc (acetate anion), EMIM dicyanamide, ()()·, that melts at ; and 1-butyl-3,5-dimethylpyridinium bromide which becomes a glass below . Low-temperature ionic liquids can be compared to ionic solutions, liquids that contain both ions and neutral molecules, and in particular to the so-called deep eutectic solvents, mixtures of ionic and non-ionic solid substances which have much lower melting points than the pure compounds. Certain mixtures of nitrate salts can have melt Document 4::: The ionic strength of a solution is a measure of the concentration of ions in that solution. Ionic compounds, when dissolved in water, dissociate into ions. The total electrolyte concentration in solution will affect important properties such as the dissociation constant or the solubility of different salts. One of the main characteristics of a solution with dissolved ions is the ionic strength. Ionic strength can be molar (mol/L solution) or molal (mol/kg solvent) and to avoid confusion the units should be stated explicitly. The concept of ionic strength was first introduced by Lewis and Randall in 1921 while describing the activity coefficients of strong electrolytes. Quantifying ionic strength The molar ionic strength, I, of a solution is a function of the concentration of all ions present in that solution. where one half is because we are including both cations and anions, ci is the molar concentration of ion i (M, mol/L), zi is the charge number of that ion, and the sum is taken over all ions in the solution. For a 1:1 electrolyte such as sodium chloride, where each ion is singly-charged, the ionic strength is equal to the concentration. For the electrolyte MgSO4, however, each ion is doubly-charged, leading to an ionic strength that is four times higher than an equivalent concentration of sodium chloride: Generally multivalent ions contribute strongly to the ionic strength. Calculation example As a more complex example, the ionic strength of a mixed solution 0.050 M in Na2SO4 and 0.020 M in KCl is: Non-ideal solutions Because in non-ideal solutions volumes are no longer strictly additive it is often preferable to work with molality b (mol/kg of H2O) rather than molarity c (mol/L). In that case, molal ionic strength is defined as: in which i = ion identification number z = charge of ion b = molality (mol solute per Kg solvent) Importance The ionic strength plays a central role in the Debye–Hückel theory that describes the strong deviations from id The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of ions do ionic compounds contain? A. positive and charged B. positive and negative C. regular and irregular D. negative and neutal Answer:
sciq-3859
multiple_choice
The reaction of calcium oxide with carbon dioxide forms what?
[ "calcium carbonate", "carbon monoxide", "nitrogen carbonate", "nitrogen" ]
A
Relavent Documents: Document 0::: Calcium carbonate is a chemical compound with the chemical formula . It is a common substance found in rocks as the minerals calcite and aragonite, most notably in chalk and limestone, eggshells, gastropod shells, shellfish skeletons and pearls. Materials containing much calcium carbonate or resembling it are described as calcareous. Calcium carbonate is the active ingredient in agricultural lime and is produced when calcium ions in hard water react with carbonate ions to form limescale. It has medical use as a calcium supplement or as an antacid, but excessive consumption can be hazardous and cause hypercalcemia and digestive issues. Chemistry Calcium carbonate shares the typical properties of other carbonates. Notably it reacts with acids, releasing carbonic acid which quickly disintegrates into carbon dioxide and water: CaCO_3(s) {+} 2H^{+}(aq) -> Ca^{2+}(aq) + CO2(g) + H_2O(l) releases carbon dioxide upon heating, called a thermal decomposition reaction, or calcination (to above 840 °C in the case of ), to form calcium oxide, CaO, commonly called quicklime, with reaction enthalpy 178 kJ/mol: CaCO3(s)->[\Delta]CaO(s){+}CO2\uparrow Calcium carbonate reacts with water that is saturated with carbon dioxide to form the soluble calcium bicarbonate. CaCO3(s){+}CO2(g){+}H2O(l)-> Ca(HCO3)2(aq) This reaction is important in the erosion of carbonate rock, forming caverns, and leads to hard water in many regions. An unusual form of calcium carbonate is the hexahydrate ikaite, . Ikaite is stable only below 8 °C. Preparation The vast majority of calcium carbonate used in industry is extracted by mining or quarrying. Pure calcium carbonate (such as for food or pharmaceutical use), can be produced from a pure quarried source (usually marble). Alternatively, calcium carbonate is prepared from calcium oxide. Water is added to give calcium hydroxide then carbon dioxide is passed through this solution to precipitate the desired calcium carbonate, referred to in the industry Document 1::: The Dundee Society was a society of graduates of CA-400, a National Security Agency course in cryptology devised by Lambros D. Callimahos, which included the Zendian Problem (a practical exercise in traffic analysis and cryptanalysis). The class was held once a year, and new members were inducted into the Society upon completion of the class. The Society was founded in the mid-1950s and continued on after Callimahos' retirement from NSA in 1976. The last CA-400 class was held at NSA in 1979, formally closing the society's membership rolls. The society took its name from an empty jar of Dundee Marmalade that Callimahos kept on his desk for use as a pencil caddy. Callimahos came up with the society's name while trying to schedule a luncheon for former CA-400 students at the Ft. Meade Officers' Club; being unable to use either the course name or the underlying government agency's name for security reasons, he spotted the ceramic Dundee jar and decided to use "The Dundee Society" as the cover name for the luncheon reservation. CA-400 students were presented with ceramic Dundee Marmalade jars at the close of the course as part of the induction ceremony into the Dundee Society. When Dundee switched from ceramic to glass jars, Callimahos would still present graduates with ceramic Dundee jars, but the jars were then collected back up for use in next year's induction ceremony, and members were "encouraged" to seek out Dundee jars for their own collections if they wished to have a permanent token of induction. See also American Cryptogram Association National Cryptologic School Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 3::: Advanced Placement (AP) Calculus (also known as AP Calc, Calc AB / Calc BC or simply AB / BC) is a set of two distinct Advanced Placement calculus courses and exams offered by the American nonprofit organization College Board. AP Calculus AB covers basic introductions to limits, derivatives, and integrals. AP Calculus BC covers all AP Calculus AB topics plus additional topics (including integration by parts, Taylor series, parametric equations, vector calculus, and polar coordinate functions). AP Calculus AB AP Calculus AB is an Advanced Placement calculus course. It is traditionally taken after precalculus and is the first calculus course offered at most schools except for possibly a regular calculus class. The Pre-Advanced Placement pathway for math helps prepare students for further Advanced Placement classes and exams. Purpose According to the College Board: Topic outline The material includes the study and application of differentiation and integration, and graphical analysis including limits, asymptotes, and continuity. An AP Calculus AB course is typically equivalent to one semester of college calculus. Analysis of graphs (predicting and explaining behavior) Limits of functions (one and two sided) Asymptotic and unbounded behavior Continuity Derivatives Concept At a point As a function Applications Higher order derivatives Techniques Integrals Interpretations Properties Applications Techniques Numerical approximations Fundamental theorem of calculus Antidifferentiation L'Hôpital's rule Separable differential equations AP Calculus BC AP Calculus BC is equivalent to a full year regular college course, covering both Calculus I and II. After passing the exam, students may move on to Calculus III (Multivariable Calculus). Purpose According to the College Board, Topic outline AP Calculus BC includes all of the topics covered in AP Calculus AB, as well as the following: Convergence tests for series Taylor series Parametric equations Polar functions (inclu Document 4::: North Carolina State University's College of Agriculture and Life Sciences (CALS) is the fourth largest college in the university and one of the largest colleges of its kind in the nation, with nearly 3,400 students pursuing associate, bachelor's, master's and doctoral degrees and 1,300 on-campus and 700 off-campus faculty and staff members. With headquarters in Raleigh, North Carolina, the college includes 12 academic departments, the North Carolina Agricultural Research Service and the North Carolina Cooperative Extension Service. The college dean is Dr. Garey Fox. The research service is the state's principal agency of agricultural and life sciences research, with close to 600 projects related to more than 70 agricultural commodities, related agribusinesses and life science industries. Scientists work not only on the college campus in Raleigh but also at 18 agricultural research stations and 10 field laboratories across the state. The extension service is the largest outreach effort at North Carolina State University, with local centers serving all 100 of North Carolina's counties as well as the Eastern Band of the Cherokee Indians. Cooperative Extension's educational programs, carried out by state specialists and county agents, focus on agriculture, food and 4-H youth development. About 43,000 volunteers and advisory leaders also contribute to Extension's efforts. The college staffs the Plants for Human Health Institute at the N.C. Research Campus in Kannapolis with faculty from the departments of horticultural science; food, bioprocessing and nutrition sciences; plant biology; genetics; and agricultural and resource economics. The college's Department of Plant Pathology helps sponsor the Bailey Memorial Tour each year. This tour is offered to prospective agriculture students and gives them a broad based taste of the work of agricultural pathology, and is named after Dr. Jack Bailey, late pioneering Professor of Plant Pathology. Departments The college ha The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The reaction of calcium oxide with carbon dioxide forms what? A. calcium carbonate B. carbon monoxide C. nitrogen carbonate D. nitrogen Answer:
sciq-11242
multiple_choice
Photosynthesis involves reactions that are dependent on what?
[ "food", "light", "water", "air" ]
B
Relavent Documents: Document 0::: The evolution of photosynthesis refers to the origin and subsequent evolution of photosynthesis, the process by which light energy is used to assemble sugars from carbon dioxide and a hydrogen and electron source such as water. The process of photosynthesis was discovered by Jan Ingenhousz, a Dutch-born British physician and scientist, first publishing about it in 1779. The first photosynthetic organisms probably evolved early in the evolutionary history of life and most likely used reducing agents such as hydrogen rather than water. There are three major metabolic pathways by which photosynthesis is carried out: C3 photosynthesis, C4 photosynthesis, and CAM photosynthesis. C3 photosynthesis is the oldest and most common form. A C3 plant uses the Calvin cycle for the initial steps that incorporate into organic material. A C4 plant prefaces the Calvin cycle with reactions that incorporate into four-carbon compounds. A CAM plant uses crassulacean acid metabolism, an adaptation for photosynthesis in arid conditions. C4 and CAM plants have special adaptations that save water. Origin Available evidence from geobiological studies of Archean (>2500 Ma) sedimentary rocks indicates that life existed 3500 Ma. Fossils of what are thought to be filamentous photosynthetic organisms have been dated at 3.4 billion years old, consistent with recent studies of photosynthesis. Early photosynthetic systems, such as those from green and purple sulfur and green and purple nonsulfur bacteria, are thought to have been anoxygenic, using various molecules as electron donors. Green and purple sulfur bacteria are thought to have used hydrogen and hydrogen sulfide as electron and hydrogen donors. Green nonsulfur bacteria used various amino and other organic acids. Purple nonsulfur bacteria used a variety of nonspecific organic and inorganic molecules. It is suggested that photosynthesis likely originated at low-wavelength geothermal light from acidic hydrothermal vents, Zn-tetrapyrroles w Document 1::: {{DISPLAYTITLE: C3 carbon fixation}} carbon fixation is the most common of three metabolic pathways for carbon fixation in photosynthesis, the other two being and CAM. This process converts carbon dioxide and ribulose bisphosphate (RuBP, a 5-carbon sugar) into two molecules of 3-phosphoglycerate through the following reaction: CO2 + H2O + RuBP → (2) 3-phosphoglycerate This reaction was first discovered by Melvin Calvin, Andrew Benson and James Bassham in 1950. C3 carbon fixation occurs in all plants as the first step of the Calvin–Benson cycle. (In and CAM plants, carbon dioxide is drawn out of malate and into this reaction rather than directly from the air.) Plants that survive solely on fixation ( plants) tend to thrive in areas where sunlight intensity is moderate, temperatures are moderate, carbon dioxide concentrations are around 200 ppm or higher, and groundwater is plentiful. The plants, originating during Mesozoic and Paleozoic eras, predate the plants and still represent approximately 95% of Earth's plant biomass, including important food crops such as rice, wheat, soybeans and barley. plants cannot grow in very hot areas at today's atmospheric CO2 level (significantly depleted during hundreds of millions of years from above 5000 ppm) because RuBisCO incorporates more oxygen into RuBP as temperatures increase. This leads to photorespiration (also known as the oxidative photosynthetic carbon cycle, or C2 photosynthesis), which leads to a net loss of carbon and nitrogen from the plant and can therefore limit growth. plants lose up to 97% of the water taken up through their roots by transpiration. In dry areas, plants shut their stomata to reduce water loss, but this stops from entering the leaves and therefore reduces the concentration of in the leaves. This lowers the :O2 ratio and therefore also increases photorespiration. and CAM plants have adaptations that allow them to survive in hot and dry areas, and they can therefore out-compete Document 2::: Ecophysiology (from Greek , oikos, "house(hold)"; , physis, "nature, origin"; and , -logia), environmental physiology or physiological ecology is a biological discipline that studies the response of an organism's physiology to environmental conditions. It is closely related to comparative physiology and evolutionary physiology. Ernst Haeckel's coinage bionomy is sometimes employed as a synonym. Plants Plant ecophysiology is concerned largely with two topics: mechanisms (how plants sense and respond to environmental change) and scaling or integration (how the responses to highly variable conditions—for example, gradients from full sunlight to 95% shade within tree canopies—are coordinated with one another), and how their collective effect on plant growth and gas exchange can be understood on this basis. In many cases, animals are able to escape unfavourable and changing environmental factors such as heat, cold, drought or floods, while plants are unable to move away and therefore must endure the adverse conditions or perish (animals go places, plants grow places). Plants are therefore phenotypically plastic and have an impressive array of genes that aid in acclimating to changing conditions. It is hypothesized that this large number of genes can be partly explained by plant species' need to live in a wider range of conditions. Light Light is the food of plants, i.e. the form of energy that plants use to build themselves and reproduce. The organs harvesting light in plants are leaves and the process through which light is converted into biomass is photosynthesis. The response of photosynthesis to light is called light response curve of net photosynthesis (PI curve). The shape is typically described by a non-rectangular hyperbola. Three quantities of the light response curve are particularly useful in characterising a plant's response to light intensities. The inclined asymptote has a positive slope representing the efficiency of light use, and is called quantum Document 3::: The photosynthetic efficiency is the fraction of light energy converted into chemical energy during photosynthesis in green plants and algae. Photosynthesis can be described by the simplified chemical reaction 6 H2O + 6 CO2 + energy → C6H12O6 + 6 O2 where C6H12O6 is glucose (which is subsequently transformed into other sugars, starches, cellulose, lignin, and so forth). The value of the photosynthetic efficiency is dependent on how light energy is defined – it depends on whether we count only the light that is absorbed, and on what kind of light is used (see Photosynthetically active radiation). It takes eight (or perhaps ten or more) photons to use one molecule of CO2. The Gibbs free energy for converting a mole of CO2 to glucose is 114 kcal, whereas eight moles of photons of wavelength 600 nm contains 381 kcal, giving a nominal efficiency of 30%. However, photosynthesis can occur with light up to wavelength 720 nm so long as there is also light at wavelengths below 680 nm to keep Photosystem II operating (see Chlorophyll). Using longer wavelengths means less light energy is needed for the same number of photons and therefore for the same amount of photosynthesis. For actual sunlight, where only 45% of the light is in the photosynthetically active wavelength range, the theoretical maximum efficiency of solar energy conversion is approximately 11%. In actuality, however, plants do not absorb all incoming sunlight (due to reflection, respiration requirements of photosynthesis and the need for optimal solar radiation levels) and do not convert all harvested energy into biomass, which results in a maximum overall photosynthetic efficiency of 3 to 6% of total solar radiation. If photosynthesis is inefficient, excess light energy must be dissipated to avoid damaging the photosynthetic apparatus. Energy can be dissipated as heat (non-photochemical quenching), or emitted as chlorophyll fluorescence. Typical efficiencies Plants Quoted values sunlight-to-biomass efficien Document 4::: Light-dependent reactions refers to certain photochemical reactions that are involved in photosynthesis, the main process by which plants acquire energy. There are two light dependent reactions, the first occurs at photosystem II (PSII) and the second occurs at photosystem I (PSI). PSII absorbs a photon to produce a so-called high energy electron which transfers via an electron transport chain to cytochrome bf and then to PSI. The then-reduced PSI, absorbs another photon producing a more highly reducing electron, which converts NADP to NADPH. In oxygenic photosynthesis, the first electron donor is water, creating oxygen (O2) as a by-product. In anoxygenic photosynthesis various electron donors are used. Cytochrome b6f and ATP synthase work together to produce ATP (photophosphorylation) in two distinct ways. In non-cyclic photophosphorylation, cytochrome b6f uses electrons from PSII and energy from PSI to pump protons from the stroma to the lumen. The resulting proton gradient across the thylakoid membrane creates a proton-motive force, used by ATP synthase to form ATP. In cyclic photophosphorylation, cytochrome b6f uses electrons and energy from PSI to create more ATP and to stop the production of NADPH. Cyclic phosphorylation is important to create ATP and maintain NADPH in the right proportion for the light-independent reactions. The net-reaction of all light-dependent reactions in oxygenic photosynthesis is: 2 + 2 + 3ADP + 3P → + 2 H + 2NADPH + 3ATP PSI and PSII are light-harvesting complexes. If a special pigment molecule in a photosynthetic reaction center absorbs a photon, an electron in this pigment attains the excited state and then is transferred to another molecule in the reaction center. This reaction, called photoinduced charge separation, is the start of the electron flow and transforms light energy into chemical forms. Light dependent reactions In chemistry, many reactions depend on the absorption of photons to provide the energy needed to ove The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Photosynthesis involves reactions that are dependent on what? A. food B. light C. water D. air Answer:
sciq-7029
multiple_choice
A crater can usually be found on the top of what kind of volcanoes?
[ "Compound", "composite", "Dome", "Shield" ]
B
Relavent Documents: Document 0::: Maui Nui is a modern geologists' name given to a prehistoric Hawaiian island and the corresponding modern biogeographic region. Maui Nui is composed of four modern islands: Maui, Molokaʻi, Lānaʻi, and Kahoʻolawe. Administratively, the four modern islands comprise Maui County (and a tiny part of Molokaʻi called Kalawao County). Long after the breakup of Maui Nui, the four modern islands retained plant and animal life similar to each other. Thus, Maui Nui is not only a prehistoric island but also a modern biogeographic region. Geology Maui Nui formed and broke up during the Pleistocene Epoch, which lasted from about 2.58 million to 11,700 years ago. Maui Nui is built from seven shield volcanoes. The three oldest are Penguin Bank, West Molokaʻi, and East Molokaʻi, which probably range from slightly over to slightly less than 2 million years old. The four younger volcanoes are Lāna‘i, West Maui, Kaho‘olawe, and Haleakalā, which probably formed between 1.5 and 2 million years ago. At its prime 1.2 million years ago, Maui Nui was , 50% larger than today's Hawaiʻi Island. The island of Maui Nui included four modern islands (Maui, Molokaʻi, Lānaʻi, and Kahoʻolawe) and landmass west of Molokaʻi called Penguin Bank, which is now completely submerged. Maui Nui broke up as rising sea levels flooded the connections between the volcanoes. The breakup was complex because global sea levels rose and fell intermittently during the Quaternary glaciation. About 600,000 years ago, the connection between Molokaʻi and the island of Lāna‘i/Maui/Kahoʻolawe became intermittent. About 400,000 years ago, the connection between Lāna‘i and Maui/Kahoʻolawe also became intermittent. The connection between Maui and Kahoʻolawe was permanently broken between 200,000 and 150,000 years ago. Maui, Lāna‘i, and Molokaʻi were connected intermittently thereafter, most recently about 18,000 years ago during the Last Glacial Maximum. Today, the sea floor between these four islands is relatively shallow Document 1::: Lunar swirls are enigmatic features found across the Moon's surface, which are characterized by having a high albedo, appearing optically immature (i.e. having the optical characteristics of a relatively young regolith), and (often) having a sinuous shape. Their curvilinear shape is often accentuated by low albedo regions that wind between the bright swirls. They appear to overlay the lunar surface, superposed on craters and ejecta deposits, but impart no observable topography. Swirls have been identified on the lunar maria and on highlands - they are not associated with a specific lithologic composition. Swirls on the maria are characterized by strong albedo contrasts and complex, sinuous morphology, whereas those on highland terrain appear less prominent and exhibit simpler shapes, such as single loops or diffuse bright spots. Association with magnetic anomalies The lunar swirls are coincident with regions of the magnetic field of the Moon with relatively high strength on a planetary body that lacks, and may never have had, an active core dynamo with which to generate its own magnetic field. Every swirl has an associated magnetic anomaly, but not every magnetic anomaly has an identifiable swirl. Orbital magnetic field mapping by the Apollo 15 and 16 sub-satellites, Lunar Prospector, and Kaguya show regions with a local magnetic field. Because the Moon has no currently active global magnetic field, these regional anomalies are regions of remnant magnetism; their origin remains controversial. Formation models There are three leading models for swirl formation. Each model must address two characteristics of lunar swirls formation, namely that a swirl is optically immature, and that it is associated with magnetic anomaly. Models for creation of the magnetic anomalies associated with lunar swirls point to the observation that several of the magnetic anomalies are antipodal to the younger, large impact basins on the Moon. Cometary impact model This theory argues tha Document 2::: A kīpuka is an area of land surrounded by one or more younger lava flows. A kīpuka forms when lava flows on either side of a hill, ridge, or older lava dome as it moves downslope or spreads from its source. Older and more weathered than their surroundings, kīpukas often appear to be like islands within a sea of lava flows. They are often covered with soil and late ecological successional vegetation that provide visual contrast as well as habitat for animals in an otherwise inhospitable environment. In volcanic landscapes, kīpukas play an important role as biological reservoirs or refugia for plants and animals, from which the covered land can be recolonized. Etymology Kīpuka, along with aā and pāhoehoe, are Hawaiian words related to volcanology that have entered the lexicon of geology. Descriptive proverbs and poetical sayings in Hawaiian oral tradition also use the word, in an allusive sense, to mean a place where life or culture endures, regardless of any encroachment or interference. By extension, from the appearance of island "patches" within a highly contrasted background, any similarly noticeable variation or change of form, such as an opening in a forest, or a clear place in a congested setting, may be colloquially called kīpuka. Significance to research Kīpuka provides useful study sites for ecological research because they facilitate replication; multiple kīpuka in a system (isolated by the same lava flow) will tend to have uniform substrate age and successional characteristics, but are often isolated-enough from their neighbors to provide meaningful, comparable differences in size, invasion, etc. They are also receptive to experimental treatments. Kīpuka along Saddle Road on Hawaii have served as the natural laboratory for a variety of studies, examining ecological principles like island biogeography, food web control, and biotic resistance to invasiveness. In addition, Drosophila silvestris populations inhabit kīpukas, making kīpukas useful for unders Document 3::: The mid-24th century BCE climate anomaly is the period, between 2354–2345 BCE, of consistently, reduced annual temperatures that are reconstructed from consecutive abnormally narrow, Irish oak tree rings. These tree rings are indicative of a period of catastrophically reduced growth in Irish trees during that period. This range of dates also matches the transition from the Neolithic to the Bronze Age in the British Isles and a period of widespread societal collapse in the Near East. It has been proposed that this anomalous downturn in the climate might have been the result of comet debris suspended in the atmosphere. In 1997, Marie-Agnès Courty proposed that a natural disaster involving wildfires, floods, and an air blast of over 100 megatons power occurred about 2350 BCE. This proposal is based on unusual "dust" deposits which have been reported from archaeological sites in Mesopotamia that are a few hundred kilometres from each other. In later papers, Courty subsequently revised the date of this event from 2350 BCE to 2000 BCE. Based only upon the analysis of satellite imagery, Umm al Binni lake in southern Iraq has been suggested as a possible extraterrestrial impact crater and possible cause of this natural disaster. More recent sources have argued for a formation of the lake through the subsidence of the underlying basement fault blocks. Baillie and McAneney's 2015 discussion of this climate anomaly discusses its abnormally narrow Irish tree rings and the anomalous dust deposits of Courty. However, this paper lacks any mention of Umm al Binni lake. See also 4.2-kiloyear event, c. 2200 BCE Great Flood (China), c. 2300 BCE Document 4::: The geologic record in stratigraphy, paleontology and other natural sciences refers to the entirety of the layers of rock strata. That is, deposits laid down by volcanism or by deposition of sediment derived from weathering detritus (clays, sands etc.). This includes all its fossil content and the information it yields about the history of the Earth: its past climate, geography, geology and the evolution of life on its surface. According to the law of superposition, sedimentary and volcanic rock layers are deposited on top of each other. They harden over time to become a solidified (competent) rock column, that may be intruded by igneous rocks and disrupted by tectonic events. Correlating the rock record At a certain locality on the Earth's surface, the rock column provides a cross section of the natural history in the area during the time covered by the age of the rocks. This is sometimes called the rock history and gives a window into the natural history of the location that spans many geological time units such as ages, epochs, or in some cases even multiple major geologic periods—for the particular geographic region or regions. The geologic record is in no one place entirely complete for where geologic forces one age provide a low-lying region accumulating deposits much like a layer cake, in the next may have uplifted the region, and the same area is instead one that is weathering and being torn down by chemistry, wind, temperature, and water. This is to say that in a given location, the geologic record can be and is quite often interrupted as the ancient local environment was converted by geological forces into new landforms and features. Sediment core data at the mouths of large riverine drainage basins, some of which go deep thoroughly support the law of superposition. However using broadly occurring deposited layers trapped within differently located rock columns, geologists have pieced together a system of units covering most of the geologic time scale The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. A crater can usually be found on the top of what kind of volcanoes? A. Compound B. composite C. Dome D. Shield Answer:
sciq-2983
multiple_choice
What term is used to describe the sequence of elementary steps that together comprise an entire chemical reaction?
[ "potassium mechanism", "source mechanism", "reaction mechanism", "elemental mechanism" ]
C
Relavent Documents: Document 0::: Activation, in chemistry and biology, is the process whereby something is prepared or excited for a subsequent reaction. Chemistry In chemistry, "activation" refers to the reversible transition of a molecule into a nearly identical chemical or physical state, with the defining characteristic being that this resultant state exhibits an increased propensity to undergo a specified chemical reaction. Thus, activation is conceptually the opposite of protection, in which the resulting state exhibits a decreased propensity to undergo a certain reaction. The energy of activation specifies the amount of free energy the reactants must possess (in addition to their rest energy) in order to initiate their conversion into corresponding products—that is, in order to reach the transition state for the reaction. The energy needed for activation can be quite small, and often it is provided by the natural random thermal fluctuations of the molecules themselves (i.e. without any external sources of energy). The branch of chemistry that deals with this topic is called chemical kinetics. Biology Biochemistry In biochemistry, activation, specifically called bioactivation, is where enzymes or other biologically active molecules acquire the ability to perform their biological function, such as inactive proenzymes being converted into active enzymes that are able to catalyze their substrates' reactions into products. Bioactivation may also refer to the process where inactive prodrugs are converted into their active metabolites, or the toxication of protoxins into actual toxins. An enzyme may be reversibly or irreversibly bioactivated. A major mechanism of irreversible bioactivation is where a piece of a protein is cut off by cleavage, producing an enzyme that will then stay active. A major mechanism of reversible bioactivation is substrate presentation where an enzyme translocates near its substrate. Another reversible reaction is where a cofactor binds to an enzyme, which then rem Document 1::: In physics, chemistry and related fields, a kinetic scheme is a network of states and connections between them representing the scheme of a dynamical process. Usually a kinetic scheme represents a Markovian process, while for non-Markovian processes generalized kinetic schemes are used. Figure 1 shows an illustration of a kinetic scheme. A Markovian kinetic scheme Mathematical description A kinetic scheme is a network (a directed graph) of distinct states (although repetition of states may occur and this depends on the system), where each pair of states i and j are associated with directional rates, (and ). It is described with a master equation: a first-order differential equation for the probability of a system to occupy each one its states at time t (element i represents state i). Written in a matrix form, this states: , where is the matrix of connections (rates) . In a Markovian kinetic scheme the connections are constant with respect to time (and any jumping time probability density function for state i is an exponential, with a rate equal the value of all the exiting connections). When detailed balance exists in a system, the relation holds for every connected states i and j. The result represents the fact that any closed loop in a Markovian network in equilibrium does not have a net flow. Matrix can also represent birth and death, meaning that probability is injected (birth) or taken from (death) the system, where then, the process is not in equilibrium. These terms are different than a birth–death process, where there is simply a linear kinetic scheme. Specific Markovian kinetic schemes A birth–death process is a linear one-dimensional Markovian kinetic scheme. Michaelis–Menten kinetics are a type of a Markovian kinetic scheme when solved with the steady state assumption for the creation of intermediates in the reaction pathway. Generalizations of Markovian kinetic schemes A kinetic scheme with time dependent rates: When the connections depen Document 2::: Analysis (: analyses) is the process of breaking a complex topic or substance into smaller parts in order to gain a better understanding of it. The technique has been applied in the study of mathematics and logic since before Aristotle (384–322 B.C.), though analysis as a formal concept is a relatively recent development. The word comes from the Ancient Greek (analysis, "a breaking-up" or "an untying;" from ana- "up, throughout" and lysis "a loosening"). From it also comes the word's plural, analyses. As a formal concept, the method has variously been ascribed to Alhazen, René Descartes (Discourse on the Method), and Galileo Galilei. It has also been ascribed to Isaac Newton, in the form of a practical method of physical discovery (which he did not name). The converse of analysis is synthesis: putting the pieces back together again in a new or different whole. Applications Science The field of chemistry uses analysis in three ways: to identify the components of a particular chemical compound (qualitative analysis), to identify the proportions of components in a mixture (quantitative analysis), and to break down chemical processes and examine chemical reactions between elements of matter. For an example of its use, analysis of the concentration of elements is important in managing a nuclear reactor, so nuclear scientists will analyze neutron activation to develop discrete measurements within vast samples. A matrix can have a considerable effect on the way a chemical analysis is conducted and the quality of its results. Analysis can be done manually or with a device. Types of Analysis: A) Qualitative Analysis: It is concerned with which components are in a given sample or compound. Example: Precipitation reaction B) Quantitative Analysis: It is to determine the quantity of individual component present in a given sample or compound. Example: To find concentration by uv-spectrophotometer. Isotopes Chemists can use isotope analysis to assist analysts with i Document 3::: In molecular biology, biosynthesis is a multi-step, enzyme-catalyzed process where substrates are converted into more complex products in living organisms. In biosynthesis, simple compounds are modified, converted into other compounds, or joined to form macromolecules. This process often consists of metabolic pathways. Some of these biosynthetic pathways are located within a single cellular organelle, while others involve enzymes that are located within multiple cellular organelles. Examples of these biosynthetic pathways include the production of lipid membrane components and nucleotides. Biosynthesis is usually synonymous with anabolism. The prerequisite elements for biosynthesis include: precursor compounds, chemical energy (e.g. ATP), and catalytic enzymes which may need coenzymes (e.g. NADH, NADPH). These elements create monomers, the building blocks for macromolecules. Some important biological macromolecules include: proteins, which are composed of amino acid monomers joined via peptide bonds, and DNA molecules, which are composed of nucleotides joined via phosphodiester bonds. Properties of chemical reactions Biosynthesis occurs due to a series of chemical reactions. For these reactions to take place, the following elements are necessary: Precursor compounds: these compounds are the starting molecules or substrates in a reaction. These may also be viewed as the reactants in a given chemical process. Chemical energy: chemical energy can be found in the form of high energy molecules. These molecules are required for energetically unfavorable reactions. Furthermore, the hydrolysis of these compounds drives a reaction forward. High energy molecules, such as ATP, have three phosphates. Often, the terminal phosphate is split off during hydrolysis and transferred to another molecule. Catalysts: these may be for example metal ions or coenzymes and they catalyze a reaction by increasing the rate of the reaction and lowering the activation energy. In the sim Document 4::: An elementary reaction is a chemical reaction in which one or more chemical species react directly to form products in a single reaction step and with a single transition state. In practice, a reaction is assumed to be elementary if no reaction intermediates have been detected or need to be postulated to describe the reaction on a molecular scale. An apparently elementary reaction may be in fact a stepwise reaction, i.e. a complicated sequence of chemical reactions, with reaction intermediates of variable lifetimes. In a unimolecular elementary reaction, a molecule dissociates or isomerises to form the products(s) At constant temperature, the rate of such a reaction is proportional to the concentration of the species In a bimolecular elementary reaction, two atoms, molecules, ions or radicals, and , react together to form the product(s) The rate of such a reaction, at constant temperature, is proportional to the product of the concentrations of the species and The rate expression for an elementary bimolecular reaction is sometimes referred to as the Law of Mass Action as it was first proposed by Guldberg and Waage in 1864. An example of this type of reaction is a cycloaddition reaction. This rate expression can be derived from first principles by using collision theory for ideal gases. For the case of dilute fluids equivalent results have been obtained from simple probabilistic arguments. According to collision theory the probability of three chemical species reacting simultaneously with each other in a termolecular elementary reaction is negligible. Hence such termolecular reactions are commonly referred as non-elementary reactions and can be broken down into a more fundamental set of bimolecular reactions, in agreement with the law of mass action. It is not always possible to derive overall reaction schemes, but solutions based on rate equations are often possible in terms of steady-state or Michaelis-Menten approximations. Notes Chemical kinetics Phy The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What term is used to describe the sequence of elementary steps that together comprise an entire chemical reaction? A. potassium mechanism B. source mechanism C. reaction mechanism D. elemental mechanism Answer:
sciq-2051
multiple_choice
What is another term for stored energy?
[ "potential energy", "mechanical energy", "inertia", "latency" ]
A
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work. Description Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments. The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics. Specialized branches include engineering optimization and engineering statistics. Engineering mathematics in tertiary educ Document 2::: This is a list of topics that are included in high school physics curricula or textbooks. Mathematical Background SI Units Scalar (physics) Euclidean vector Motion graphs and derivatives Pythagorean theorem Trigonometry Motion and forces Motion Force Linear motion Linear motion Displacement Speed Velocity Acceleration Center of mass Mass Momentum Newton's laws of motion Work (physics) Free body diagram Rotational motion Angular momentum (Introduction) Angular velocity Centrifugal force Centripetal force Circular motion Tangential velocity Torque Conservation of energy and momentum Energy Conservation of energy Elastic collision Inelastic collision Inertia Moment of inertia Momentum Kinetic energy Potential energy Rotational energy Electricity and magnetism Ampère's circuital law Capacitor Coulomb's law Diode Direct current Electric charge Electric current Alternating current Electric field Electric potential energy Electron Faraday's law of induction Ion Inductor Joule heating Lenz's law Magnetic field Ohm's law Resistor Transistor Transformer Voltage Heat Entropy First law of thermodynamics Heat Heat transfer Second law of thermodynamics Temperature Thermal energy Thermodynamic cycle Volume (thermodynamics) Work (thermodynamics) Waves Wave Longitudinal wave Transverse waves Transverse wave Standing Waves Wavelength Frequency Light Light ray Speed of light Sound Speed of sound Radio waves Harmonic oscillator Hooke's law Reflection Refraction Snell's law Refractive index Total internal reflection Diffraction Interference (wave propagation) Polarization (waves) Vibrating string Doppler effect Gravity Gravitational potential Newton's law of universal gravitation Newtonian constant of gravitation See also Outline of physics Physics education Document 3::: The minimum total potential energy principle is a fundamental concept used in physics and engineering. It dictates that at low temperatures a structure or body shall deform or displace to a position that (locally) minimizes the total potential energy, with the lost potential energy being converted into kinetic energy (specifically heat). Some examples A free proton and free electron will tend to combine to form the lowest energy state (the ground state) of a hydrogen atom, the most stable configuration. This is because that state's energy is 13.6 electron volts (eV) lower than when the two particles separated by an infinite distance. The dissipation in this system takes the form of spontaneous emission of electromagnetic radiation, which increases the entropy of the surroundings. A rolling ball will end up stationary at the bottom of a hill, the point of minimum potential energy. The reason is that as it rolls downward under the influence of gravity, friction produced by its motion transfers energy in the form of heat of the surroundings with an attendant increase in entropy. A protein folds into the state of lowest potential energy. In this case, the dissipation takes the form of vibration of atoms within or adjacent to the protein. Structural mechanics The total potential energy, , is the sum of the elastic strain energy, , stored in the deformed body and the potential energy, , associated to the applied forces: This energy is at a stationary position when an infinitesimal variation from such position involves no change in energy: The principle of minimum total potential energy may be derived as a special case of the virtual work principle for elastic systems subject to conservative forces. The equality between external and internal virtual work (due to virtual displacements) is: where = vector of displacements = vector of distributed forces acting on the part of the surface = vector of body forces In the special case of elastic bodies, the right Document 4::: The energy systems language, also referred to as energese, or energy circuit language, or generic systems symbols, is a modelling language used for composing energy flow diagrams in the field of systems ecology. It was developed by Howard T. Odum and colleagues in the 1950s during studies of the tropical forests funded by the United States Atomic Energy Commission. Design intent The design intent of the energy systems language was to facilitate the generic depiction of energy flows through any scale system while encompassing the laws of physics, and in particular, the laws of thermodynamics (see energy transformation for an example). In particular H.T. Odum aimed to produce a language which could facilitate the intellectual analysis, engineering synthesis and management of global systems such as the geobiosphere, and its many subsystems. Within this aim, H.T. Odum had a strong concern that many abstract mathematical models of such systems were not thermodynamically valid. Hence he used analog computers to make system models due to their intrinsic value; that is, the electronic circuits are of value for modelling natural systems which are assumed to obey the laws of energy flow, because, in themselves the circuits, like natural systems, also obey the known laws of energy flow, where the energy form is electrical. However Odum was interested not only in the electronic circuits themselves, but also in how they might be used as formal analogies for modeling other systems which also had energy flowing through them. As a result, Odum did not restrict his inquiry to the analysis and synthesis of any one system in isolation. The discipline that is most often associated with this kind of approach, together with the use of the energy systems language is known as systems ecology. General characteristics When applying the electronic circuits (and schematics) to modeling ecological and economic systems, Odum believed that generic categories, or characteristic modules, could The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is another term for stored energy? A. potential energy B. mechanical energy C. inertia D. latency Answer:
sciq-311
multiple_choice
The process of breaking down food into nutrients is known as __________
[ "digestion", "absorption", "filtration", "energy" ]
A
Relavent Documents: Document 0::: Digestion is the breakdown of large insoluble food compounds into small water-soluble components so that they can be absorbed into the blood plasma. In certain organisms, these smaller substances are absorbed through the small intestine into the blood stream. Digestion is a form of catabolism that is often divided into two processes based on how food is broken down: mechanical and chemical digestion. The term mechanical digestion refers to the physical breakdown of large pieces of food into smaller pieces which can subsequently be accessed by digestive enzymes. Mechanical digestion takes place in the mouth through mastication and in the small intestine through segmentation contractions. In chemical digestion, enzymes break down food into the small compounds that the body can use. In the human digestive system, food enters the mouth and mechanical digestion of the food starts by the action of mastication (chewing), a form of mechanical digestion, and the wetting contact of saliva. Saliva, a liquid secreted by the salivary glands, contains salivary amylase, an enzyme which starts the digestion of starch in the food; the saliva also contains mucus, which lubricates the food, and hydrogen carbonate, which provides the ideal conditions of pH (alkaline) for amylase to work, and electrolytes (Na+, K+, Cl−, HCO−3). About 30% of starch is hydrolyzed into disaccharide in the oral cavity (mouth). After undergoing mastication and starch digestion, the food will be in the form of a small, round slurry mass called a bolus. It will then travel down the esophagus and into the stomach by the action of peristalsis. Gastric juice in the stomach starts protein digestion. Gastric juice mainly contains hydrochloric acid and pepsin. In infants and toddlers, gastric juice also contains rennin to digest milk proteins. As the first two chemicals may damage the stomach wall, mucus and bicarbonates are secreted by the stomach. They provide a slimy layer that acts as a shield against the damag Document 1::: Relatively speaking, the brain consumes an immense amount of energy in comparison to the rest of the body. The mechanisms involved in the transfer of energy from foods to neurons are likely to be fundamental to the control of brain function. Human bodily processes, including the brain, all require both macronutrients, as well as micronutrients. Insufficient intake of selected vitamins, or certain metabolic disorders, may affect cognitive processes by disrupting the nutrient-dependent processes within the body that are associated with the management of energy in neurons, which can subsequently affect synaptic plasticity, or the ability to encode new memories. Macronutrients The human brain requires nutrients obtained from the diet to develop and sustain its physical structure and cognitive functions. Additionally, the brain requires caloric energy predominately derived from the primary macronutrients to operate. The three primary macronutrients include carbohydrates, proteins, and fats. Each macronutrient can impact cognition through multiple mechanisms, including glucose and insulin metabolism, neurotransmitter actions, oxidative stress and inflammation, and the gut-brain axis. Inadequate macronutrient consumption or proportion could impair optimal cognitive functioning and have long-term health implications. Carbohydrates Through digestion, dietary carbohydrates are broken down and converted into glucose, which is the sole energy source for the brain. Optimal brain function relies on adequate carbohydrate consumption, as carbohydrates provide the quickest source of glucose for the brain. Glucose deficiencies such as hypoglycaemia reduce available energy for the brain and impair all cognitive processes and performance. Additionally, situations with high cognitive demand, such as learning a new task, increase brain glucose utilization, depleting blood glucose stores and initiating the need for supplementation. Complex carbohydrates, especially those with high d Document 2::: Animal nutrition focuses on the dietary nutrients needs of animals, primarily those in agriculture and food production, but also in zoos, aquariums, and wildlife management. Constituents of diet Macronutrients (excluding fiber and water) provide structural material (amino acids from which proteins are built, and lipids from which cell membranes and some signaling molecules are built) and energy. Some of the structural material can be used to generate energy internally, though the net energy depends on such factors as absorption and digestive effort, which vary substantially from instance to instance. Vitamins, minerals, fiber, and water do not provide energy, but are required for other reasons. A third class dietary material, fiber (i.e., non-digestible material such as cellulose), seems also to be required, for both mechanical and biochemical reasons, though the exact reasons remain unclear. Molecules of carbohydrates and fats consist of carbon, hydrogen, and oxygen atoms. Carbohydrates range from simple monosaccharides (glucose, fructose, galactose) to complex polysaccharides (starch). Fats are triglycerides, made of assorted fatty acid monomers bound to glycerol backbone. Some fatty acids, but not all, are essential in the diet: they cannot be synthesized in the body. Protein molecules contain nitrogen atoms in addition to carbon, oxygen, and hydrogen. The fundamental components of protein are nitrogen-containing amino acids. Essential amino acids cannot be made by the animal. Some of the amino acids are convertible (with the expenditure of energy) to glucose and can be used for energy production just as ordinary glucose. By breaking down existing protein, some glucose can be produced internally; the remaining amino acids are discarded, primarily as urea in urine. This occurs normally only during prolonged starvation. Other dietary substances found in plant foods (phytochemicals, polyphenols) are not identified as essential nutrients but appear to impact healt Document 3::: The trophic level of an organism is the position it occupies in a food web. A food chain is a succession of organisms that eat other organisms and may, in turn, be eaten themselves. The trophic level of an organism is the number of steps it is from the start of the chain. A food web starts at trophic level 1 with primary producers such as plants, can move to herbivores at level 2, carnivores at level 3 or higher, and typically finish with apex predators at level 4 or 5. The path along the chain can form either a one-way flow or a food "web". Ecological communities with higher biodiversity form more complex trophic paths. The word trophic derives from the Greek τροφή (trophē) referring to food or nourishment. History The concept of trophic level was developed by Raymond Lindeman (1942), based on the terminology of August Thienemann (1926): "producers", "consumers", and "reducers" (modified to "decomposers" by Lindeman). Overview The three basic ways in which organisms get food are as producers, consumers, and decomposers. Producers (autotrophs) are typically plants or algae. Plants and algae do not usually eat other organisms, but pull nutrients from the soil or the ocean and manufacture their own food using photosynthesis. For this reason, they are called primary producers. In this way, it is energy from the sun that usually powers the base of the food chain. An exception occurs in deep-sea hydrothermal ecosystems, where there is no sunlight. Here primary producers manufacture food through a process called chemosynthesis. Consumers (heterotrophs) are species that cannot manufacture their own food and need to consume other organisms. Animals that eat primary producers (like plants) are called herbivores. Animals that eat other animals are called carnivores, and animals that eat both plants and other animals are called omnivores. Decomposers (detritivores) break down dead plant and animal material and wastes and release it again as energy and nutrients into Document 4::: Human nutrition deals with the provision of essential nutrients in food that are necessary to support human life and good health. Poor nutrition is a chronic problem often linked to poverty, food security, or a poor understanding of nutritional requirements. Malnutrition and its consequences are large contributors to deaths, physical deformities, and disabilities worldwide. Good nutrition is necessary for children to grow physically and mentally, and for normal human biological development. Overview The human body contains chemical compounds such as water, carbohydrates, amino acids (found in proteins), fatty acids (found in lipids), and nucleic acids (DNA and RNA). These compounds are composed of elements such as carbon, hydrogen, oxygen, nitrogen, and phosphorus. Any study done to determine nutritional status must take into account the state of the body before and after experiments, as well as the chemical composition of the whole diet and of all the materials excreted and eliminated from the body (including urine and feces). Nutrients The seven major classes of nutrients are carbohydrates, fats, fiber, minerals, proteins, vitamins, and water. Nutrients can be grouped as either macronutrients or micronutrients (needed in small quantities). Carbohydrates, fats, and proteins are macronutrients, and provide energy. Water and fiber are macronutrients but do not provide energy. The micronutrients are minerals and vitamins. The macronutrients (excluding fiber and water) provide structural material (amino acids from which proteins are built, and lipids from which cell membranes and some signaling molecules are built), and energy. Some of the structural material can also be used to generate energy internally, and in either case it is measured in Joules or kilocalories (often called "Calories" and written with a capital 'C' to distinguish them from little 'c' calories). Carbohydrates and proteins provide 17 kJ approximately (4 kcal) of energy per gram, while fats prov The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The process of breaking down food into nutrients is known as __________ A. digestion B. absorption C. filtration D. energy Answer:
ai2_arc-371
multiple_choice
Why do scientists perform multiple trials of the same experiment?
[ "to include additional variables in the experiment", "to complete the steps of the experiment in less time", "to find a less expensive way to conduct the experiment", "to increase the likelihood of accurate experiment results" ]
D
Relavent Documents: Document 0::: A glossary of terms used in experimental research. Concerned fields Statistics Experimental design Estimation theory Glossary Alias: When the estimate of an effect also includes the influence of one or more other effects (usually high order interactions) the effects are said to be aliased (see confounding). For example, if the estimate of effect D in a four factor experiment actually estimates (D + ABC), then the main effect D is aliased with the 3-way interaction ABC. Note: This causes no difficulty when the higher order interaction is either non-existent or insignificant. Analysis of variance (ANOVA): A mathematical process for separating the variability of a group of observations into assignable causes and setting up various significance tests. Balanced design: An experimental design where all cells (i.e. treatment combinations) have the same number of observations. Blocking: A schedule for conducting treatment combinations in an experimental study such that any effects on the experimental results due to a known change in raw materials, operators, machines, etc., become concentrated in the levels of the blocking variable. Note: the reason for blocking is to isolate a systematic effect and prevent it from obscuring the main effects. Blocking is achieved by restricting randomization. Center Points: Points at the center value of all factor ranges. Coding Factor Levels: Transforming the scale of measurement for a factor so that the high value becomes +1 and the low value becomes -1 (see scaling). After coding all factors in a 2-level full factorial experiment, the design matrix has all orthogonal columns. Coding is a simple linear transformation of the original measurement scale. If the "high" value is Xh and the "low" value is XL (in the original scale), then the scaling transformation takes any original X value and converts it to (X − a)/b, where a = (Xh + XL)/2 and b = (Xh−XL)/2. To go back to the original measurement scale, just take the coded value a Document 1::: In epidemiology and biostatistics, the experimental event rate (EER) is a measure of how often a particular statistical event (such as response to a drug, adverse event or death) occurs within the experimental group (non-control group) of an experiment. This value is very useful in determining the therapeutic benefit or risk to patients in experimental groups, in comparison to patients in placebo or traditionally treated control groups. Three statistical terms rely on EER for their calculation: absolute risk reduction, relative risk reduction and number needed to treat. Control event rate The control event rate (CER) is identical to the experimental event rate except that is measured within the scientific control group of an experiment. Worked example In a trial of hypothetical drug "X" where we are measuring event "Z", we have two groups. Our control group (25 people) is given a placebo, and the experimental group (25 people) is given drug "X". Event "Z" in control group : 4 in 25 people Control event rate : 4/25 Event "Z" in experimental group : 12 in 25 people Experimental event rate : 12/25 Another worked example is as follows: See also Absolute risk reduction Relative risk reduction Number needed to treat Document 2::: The Design of Experiments is a 1935 book by the English statistician Ronald Fisher about the design of experiments and is considered a foundational work in experimental design. Among other contributions, the book introduced the concept of the null hypothesis in the context of the lady tasting tea experiment. A chapter is devoted to the Latin square. Chapters Introduction The principles of experimentation, illustrated by a psycho-physical experiment A historical experiment on growth rate An agricultural experiment in randomized blocks The Latin square The factorial design in experimentation Confounding Special cases of partial confounding The increase of precision by concomitant measurements. Statistical Control The generalization of null hypotheses. Fiducial probability The measurement of amount of information in general Quotations regarding the null hypothesis Fisher introduced the null hypothesis by an example, the now famous Lady tasting tea experiment, as a casual wager. She claimed the ability to determine the means of tea preparation by taste. Fisher proposed an experiment and an analysis to test her claim. She was to be offered 8 cups of tea, 4 prepared by each method, for determination. He proposed the null hypothesis that she possessed no such ability, so she was just guessing. With this assumption, the number of correct guesses (the test statistic) formed a hypergeometric distribution. Fisher calculated that her chance of guessing all cups correctly was 1/70. He was provisionally willing to concede her ability (rejecting the null hypothesis) in this case only. Having an example, Fisher commented: "...the null hypothesis is never proved or established, but is possibly disproved, in the course of experimentation. Every experiment may be said to exist only in order to give the facts a chance of disproving the null hypothesis." "...the null hypothesis must be exact, that is free from vagueness and ambiguity, because it must supply the Document 3::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 4::: Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria. Introduction Traditional exam script marking began in Cambridge 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.) Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental. The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Why do scientists perform multiple trials of the same experiment? A. to include additional variables in the experiment B. to complete the steps of the experiment in less time C. to find a less expensive way to conduct the experiment D. to increase the likelihood of accurate experiment results Answer:
sciq-4994
multiple_choice
Of all the mineral nutrients, what contributes the most to plant growth and crop yields?
[ "oxygen", "methane", "nitrogen", "silicon" ]
C
Relavent Documents: Document 0::: Plant nutrition is the study of the chemical elements and compounds necessary for plant growth and reproduction, plant metabolism and their external supply. In its absence the plant is unable to complete a normal life cycle, or that the element is part of some essential plant constituent or metabolite. This is in accordance with Justus von Liebig’s law of the minimum. The total essential plant nutrients include seventeen different elements: carbon, oxygen and hydrogen which are absorbed from the air, whereas other nutrients including nitrogen are typically obtained from the soil (exceptions include some parasitic or carnivorous plants). Plants must obtain the following mineral nutrients from their growing medium: the macronutrients: nitrogen (N), phosphorus (P), potassium (K), calcium (Ca), sulfur (S), magnesium (Mg) the micronutrients (or trace minerals): iron (Fe), boron (B), chlorine (Cl), manganese (Mn), zinc (Zn), copper (Cu), molybdenum (Mo), nickel (Ni) These elements stay beneath soil as salts, so plants absorb these elements as ions. The macronutrients are taken-up in larger quantities; hydrogen, oxygen, nitrogen and carbon contribute to over 95% of a plant's entire biomass on a dry matter weight basis. Micronutrients are present in plant tissue in quantities measured in parts per million, ranging from 0.1 to 200 ppm, or less than 0.02% dry weight. Most soil conditions across the world can provide plants adapted to that climate and soil with sufficient nutrition for a complete life cycle, without the addition of nutrients as fertilizer. However, if the soil is cropped it is necessary to artificially modify soil fertility through the addition of fertilizer to promote vigorous growth and increase or sustain yield. This is done because, even with adequate water and light, nutrient deficiency can limit growth and crop yield. History Carbon, hydrogen and oxygen are the basic nutrients plants receive from air and water. Justus von Liebig proved in 1840 tha Document 1::: Soil fertility refers to the ability of soil to sustain agricultural plant growth, i.e. to provide plant habitat and result in sustained and consistent yields of high quality. It also refers to the soil's ability to supply plant/crop nutrients in the right quantities and qualities over a sustained period of time. A fertile soil has the following properties: The ability to supply essential plant nutrients and water in adequate amounts and proportions for plant growth and reproduction; and The absence of toxic substances which may inhibit plant growth e.g Fe2+ which leads to nutrient toxicity. The following properties contribute to soil fertility in most situations: Sufficient soil depth for adequate root growth and water retention; Good internal drainage, allowing sufficient aeration for optimal root growth (although some plants, such as rice, tolerate waterlogging); Topsoil or horizon O is with sufficient soil organic matter for healthy soil structure and soil moisture retention; Soil pH in the range 5.5 to 7.0 (suitable for most plants but some prefer or tolerate more acid or alkaline conditions); Adequate concentrations of essential plant nutrients in plant-available forms; Presence of a range of microorganisms that support plant growth. In lands used for agriculture and other human activities, maintenance of soil fertility typically requires the use of soil conservation practices. This is because soil erosion and other forms of soil degradation generally result in a decline in quality with respect to one or more of the aspects indicated above. Soil fertilization Bioavailable phosphorus (available to soil life) is the element in soil that is most often lacking. Nitrogen and potassium are also needed in substantial amounts. For this reason these three elements are always identified on a commercial fertilizer analysis. For example, a 10-10-15 fertilizer has 10 percent nitrogen, 10 percent available phosphorus (P2O5) and 15 percent water-soluble potassiu Document 2::: The International Plant Nutrition Colloquium (IPNC) is an international conference held every four years for the promotion of research within the field of plant nutrition. Prior to 1981, it was known as the International Colloquium on Plant Analysis and Fertiliser Problems. The IPNC is organised by the International Plant Nutrition Council, which "seeks to advance science-based non-commercial research and education in plant nutrition in order to highlight the importance of this scientific field for crop production, food security, human health and sustainable environmental protection". It is considered that the IPNC is the most important international meeting on plant nutrition globally, with more than 800 delegates attending each meeting. The IPNC covers research in the fields of plant mineral nutrition, plant molecular biology, plant genetics, agronomy, horticulture, ecology, environmental sciences, and fertilizer use and production. In honour of Professor Horst Marschner, who was a passionate supporter of students and young researchers, the IPNC has established the Marschner Young Scientist Award for outstanding early-career researchers and PhD students with a potential to become future research leaders. The current President of the International Plant Nutrition Council is Professor Ciro A. Rosolem from the São Paulo State University. The next IPNC is to be held in Iguazu Falls, Brazil, from 22-27 August 2022. Past and future locations for the IPNC: Document 3::: Agricultural chemistry is the study of chemistry, especially organic chemistry and biochemistry, as they relate to agriculture. This includes agricultural production, the use of ammonia in fertilizer, pesticides, and how plant biochemistry can be used to genetically alter crops. Agricultural chemistry is not a distinct discipline, but a common thread that ties together genetics, physiology, microbiology, entomology, and numerous other sciences that impinge on agriculture. Agricultural chemistry studies the chemical compositions and reactions involved in the production, protection, and use of crops and livestock. Its applied science and technology aspects are directed towards increasing yields and improving quality, which comes with multiple advantages and disadvantages. Advantages and Disadvantages The goals of agricultural chemistry are to expand understanding of the causes and effects of biochemical reactions related to plant and animal growth, to reveal opportunities for controlling those reactions, and to develop chemical products that will provide the desired assistance or control. Agricultural chemistry is therefore used in processing of raw products into foods and beverages, as well as environmental monitoring and remediation. It is also used to make feed supplements for animals, as well as medicinal compounds for the prevention or control of disease. When agriculture is considered with ecology, the sustainablility of an operation is considered. However, modern agrochemical industry has gained a reputation for its maximising profits while violating sustainable and ecologically viable agricultural principles. Eutrophication, the prevalence of genetically modified crops and the increasing concentration of chemicals in the food chain (e.g. persistent organic pollutants) are only a few consequences of naive industrial agriculture. Soil Chemistry Agricultural chemistry often aims at preserving or increasing the fertility of soil, maintaining or improving the Document 4::: Murashige and Skoog medium (or MSO or MS0 (MS-zero)) is a plant growth medium used in the laboratories for cultivation of plant cell culture. MS0 was invented by plant scientists Toshio Murashige and Folke K. Skoog in 1962 during Murashige's search for a new plant growth regulator. A number behind the letters MS is used to indicate the sucrose concentration of the medium. For example, MS0 contains no sucrose and MS20 contains 20 g/L sucrose. Along with its modifications, it is the most commonly used medium in plant tissue culture experiments in the laboratory. However, according to recent scientific findings, MS medium is not suitable as a nutrient solution for deep water culture or hydroponics. As Skoog's doctoral student, Murashige originally set out to find an as-yet undiscovered growth hormone present in tobacco juice. No such component was discovered; instead, analysis of juiced tobacco and ashed tobacco revealed higher concentrations of specific minerals in plant tissues than were previously known. A series of experiments demonstrated that varying the levels of these nutrients enhanced growth substantially over existing formulations. It was determined that nitrogen in particular enhanced growth of tobacco in tissue culture. Ingredients Major salts (macronutrients) per litre Ammonium nitrate (NH4NO3) 1650 mg/l Calcium chloride (CaCl2 · 2H2O) 440 mg/l Magnesium sulfate (MgSO4 · 7H2O) 180.7 mg/l Monopotassium phosphate (KH2PO4) 170 mg/l Potassium nitrate (KNO3) 1900 mg/l. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Of all the mineral nutrients, what contributes the most to plant growth and crop yields? A. oxygen B. methane C. nitrogen D. silicon Answer:
sciq-8536
multiple_choice
What is a copy of an image formed by reflection or refraction?
[ "an example", "a photographic image", "an image", "a mirror image" ]
C
Relavent Documents: Document 0::: The study of image formation encompasses the radiometric and geometric processes by which 2D images of 3D objects are formed. In the case of digital images, the image formation process also includes analog to digital conversion and sampling. Imaging The imaging process is a mapping of an object to an image plane. Each point on the image corresponds to a point on the object. An illuminated object will scatter light toward a lens and the lens will collect and focus the light to create the image. The ratio of the height of the image to the height of the object is the magnification. The spatial extent of the image surface and the focal length of the lens determines the field of view of the lens. Image formation of mirror these have a center of curvature and its focal length of the mirror is half of the center of curvature. Illumination An object may be illuminated by the light from an emitting source such as the sun, a light bulb or a Light Emitting Diode. The light incident on the object is reflected in a manner dependent on the surface properties of the object. For rough surfaces, the reflected light is scattered in a manner described by the Bi-directional Reflectance Distribution Function (BRDF) of the surface. The BRDF of a surface is the ratio of the exiting power per square meter per steradian (radiance) to the incident power per square meter (irradiance). The BRDF typically varies with angle and may vary with wavelength, but a specific important case is a surface that has constant BRDF. This surface type is referred to as Lambertian and the magnitude of the BRDF is R/π, where R is the reflectivity of the surface. The portion of scattered light that propagates toward the lens is collected by the entrance pupil of the imaging lens over the field of view. Field of view and imagery The Field of view of a lens is limited by the size of the image plane and the focal length of the lens. The relationship between a location on the image and a location on t Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: A mirror image (in a plane mirror) is a reflected duplication of an object that appears almost identical, but is reversed in the direction perpendicular to the mirror surface. As an optical effect it results from reflection off from substances such as a mirror or water. It is also a concept in geometry and can be used as a conceptualization process for 3-D structures. In geometry and geometrical optics In two dimensions In geometry, the mirror image of an object or two-dimensional figure is the virtual image formed by reflection in a plane mirror; it is of the same size as the original object, yet different, unless the object or figure has reflection symmetry (also known as a P-symmetry). Two-dimensional mirror images can be seen in the reflections of mirrors or other reflecting surfaces, or on a printed surface seen inside-out. If we first look at an object that is effectively two-dimensional (such as the writing on a card) and then turn the card to face a mirror, the object turns through an angle of 180° and we see a left-right reversal in the mirror. In this example, it is the change in orientation rather than the mirror itself that causes the observed reversal. Another example is when we stand with our backs to the mirror and face an object that is in front of the mirror. Then we compare the object with its reflection by turning ourselves 180°, towards the mirror. Again we perceive a left-right reversal due to a change in our orientation. So, in these examples the mirror does not actually cause the observed reversals. In three dimensions The concept of reflection can be extended to three-dimensional objects, including the inside parts, even if they are not transparent. The term then relates to structural as well as visual aspects. A three-dimensional object is reversed in the direction perpendicular to the mirror surface. In physics, mirror images are investigated in the subject called geometrical optics. More fundamentally in geometry and mathematics they Document 3::: "La dioptrique" (in English "Dioptrique", "Optics", or "Dioptrics"), is a short treatise published in 1637 included in one of the Essays written with Discourse on the Method by René Descartes. In this essay Descartes uses various models to understand the properties of light. This essay is known as Descartes' greatest contribution to optics, as it is the first publication of the Law of Refraction. First Discourse: On Light The first discourse captures Descartes' theories on the nature of light. In the first model, he compares light to a stick that allows a blind person to discern his environment through touch. Descartes says: You have only to consider that the differences which a blind man notes among trees, rocks, water, and similar things through the medium of his stick do not seem less to him than those among red, yellow, green, and all the other colors seem to us; and that nevertheless these differences are nothing other, in all these bodies, than the diverse ways of moving, or of resisting the movements of, this stick. Descartes' second model on light uses his theory of the elements to demonstrate the rectilinear transmission of light as well as the movement of light through solid objects. He uses a metaphor of wine flowing through a vat of grapes, then exiting through a hole at the bottom of the vat. Now consider that, since there is no vacuum in Nature as almost all the Philosophers affirm, and since there are nevertheless many pores in all the bodies that we perceive around us, as experiment can show quite clearly, it is necessary that these pores be filled with some very subtle and very fluid material, extending without interruption from the stars and planets to us. Thus, this subtle material being compared with the wine in that vat, and the less fluid or heavier parts, of the air as well as of other transparent bodies, being compared with the bunches of grapes which are mixed in, you will easily understand the following: Just as the parts of this wine.. Document 4::: Reflection is the change in direction of a wavefront at an interface between two different media so that the wavefront returns into the medium from which it originated. Common examples include the reflection of light, sound and water waves. The law of reflection says that for specular reflection (for example at a mirror) the angle at which the wave is incident on the surface equals the angle at which it is reflected. In acoustics, reflection causes echoes and is used in sonar. In geology, it is important in the study of seismic waves. Reflection is observed with surface waves in bodies of water. Reflection is observed with many types of electromagnetic wave, besides visible light. Reflection of VHF and higher frequencies is important for radio transmission and for radar. Even hard X-rays and gamma rays can be reflected at shallow angles with special "grazing" mirrors. Reflection of light Reflection of light is either specular (mirror-like) or diffuse (retaining the energy, but losing the image) depending on the nature of the interface. In specular reflection the phase of the reflected waves depends on the choice of the origin of coordinates, but the relative phase between s and p (TE and TM) polarizations is fixed by the properties of the media and of the interface between them. A mirror provides the most common model for specular light reflection, and typically consists of a glass sheet with a metallic coating where the significant reflection occurs. Reflection is enhanced in metals by suppression of wave propagation beyond their skin depths. Reflection also occurs at the surface of transparent media, such as water or glass. In the diagram, a light ray PO strikes a vertical mirror at point O, and the reflected ray is OQ. By projecting an imaginary line through point O perpendicular to the mirror, known as the normal, we can measure the angle of incidence, θi and the angle of reflection, θr. The law of reflection states that θi = θr, or in other words, the angl The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is a copy of an image formed by reflection or refraction? A. an example B. a photographic image C. an image D. a mirror image Answer:
sciq-7028
multiple_choice
Life on earth is carbon-based. organisms need not only energy but also carbon ________ for building bodies?
[ "ions", "monoxide", "atoms", "crystals" ]
C
Relavent Documents: Document 0::: Carbon is a primary component of all known life on Earth, representing approximately 45–50% of all dry biomass. Carbon compounds occur naturally in great abundance on Earth. Complex biological molecules consist of carbon atoms bonded with other elements, especially oxygen and hydrogen and frequently also nitrogen, phosphorus, and sulfur (collectively known as CHNOPS). Because it is lightweight and relatively small in size, carbon molecules are easy for enzymes to manipulate. It is frequently assumed in astrobiology that if life exists elsewhere in the Universe, it will also be carbon-based. Critics refer to this assumption as carbon chauvinism. Characteristics Carbon is capable of forming a vast number of compounds, more than any other element, with almost ten million compounds described to date, and yet that number is but a fraction of the number of theoretically possible compounds under standard conditions. The enormous diversity of carbon-containing compounds, known as organic compounds, has led to a distinction between them and compounds that do not contain carbon, known as inorganic compounds. The branch of chemistry that studies organic compounds is known as organic chemistry. Carbon is the 15th most abundant element in the Earth's crust, and the fourth most abundant element in the universe by mass, after hydrogen, helium, and oxygen. Carbon's widespread abundance, its ability to form stable bonds with numerous other elements, and its unusual ability to form polymers at the temperatures commonly encountered on Earth enables it to serve as a common element of all known living organisms. In a 2018 study, carbon was found to compose approximately 550 billion tons of all life on Earth. It is the second most abundant element in the human body by mass (about 18.5%) after oxygen. The most important characteristics of carbon as a basis for the chemistry of life are that each carbon atom is capable of forming up to four valence bonds with other atoms simultaneously Document 1::: Biotic material or biological derived material is any material that originates from living organisms. Most such materials contain carbon and are capable of decay. The earliest life on Earth arose at least 3.5 billion years ago. Earlier physical evidences of life include graphite, a biogenic substance, in 3.7 billion-year-old metasedimentary rocks discovered in southwestern Greenland, as well as, "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia. Earth's biodiversity has expanded continually except when interrupted by mass extinctions. Although scholars estimate that over 99 percent of all species of life (over five billion) that ever lived on Earth are extinct, there are still an estimated 10–14 million extant species, of which about 1.2 million have been documented and over 86% have not yet been described. Examples of biotic materials are wood, straw, humus, manure, bark, crude oil, cotton, spider silk, chitin, fibrin, and bone. The use of biotic materials, and processed biotic materials (bio-based material) as alternative natural materials, over synthetics is popular with those who are environmentally conscious because such materials are usually biodegradable, renewable, and the processing is commonly understood and has minimal environmental impact. However, not all biotic materials are used in an environmentally friendly way, such as those that require high levels of processing, are harvested unsustainably, or are used to produce carbon emissions. When the source of the recently living material has little importance to the product produced, such as in the production of biofuels, biotic material is simply called biomass. Many fuel sources may have biological sources, and may be divided roughly into fossil fuels, and biofuel. In soil science, biotic material is often referred to as organic matter. Biotic materials in soil include glomalin, Dopplerite and humic acid. Some biotic material may not be considered to be organic matte Document 2::: Carbon dioxide is a chemical compound with the chemical formula . It is made up of molecules that each have one carbon atom covalently double bonded to two oxygen atoms. It is found in the gas state at room temperature, and as the source of available carbon in the carbon cycle, atmospheric is the primary carbon source for life on Earth. In the air, carbon dioxide is transparent to visible light but absorbs infrared radiation, acting as a greenhouse gas. Carbon dioxide is soluble in water and is found in groundwater, lakes, ice caps, and seawater. When carbon dioxide dissolves in water, it forms carbonate and mainly bicarbonate (), which causes ocean acidification as atmospheric levels increase. It is a trace gas in Earth's atmosphere at 421 parts per million (ppm), or about 0.04% (as of May 2022) having risen from pre-industrial levels of 280 ppm or about 0.025%. Burning fossil fuels is the primary cause of these increased concentrations and also the primary cause of climate change. Its concentration in Earth's pre-industrial atmosphere since late in the Precambrian was regulated by organisms and geological phenomena. Plants, algae and cyanobacteria use energy from sunlight to synthesize carbohydrates from carbon dioxide and water in a process called photosynthesis, which produces oxygen as a waste product. In turn, oxygen is consumed and is released as waste by all aerobic organisms when they metabolize organic compounds to produce energy by respiration. is released from organic materials when they decay or combust, such as in forest fires. Since plants require for photosynthesis, and humans and animals depend on plants for food, is necessary for the survival of life on earth. Carbon dioxide is 53% more dense than dry air, but is long lived and thoroughly mixes in the atmosphere. About half of excess emissions to the atmosphere are absorbed by land and ocean carbon sinks. These sinks can become saturated and are volatile, as decay and wildfires result i Document 3::: The molecules that an organism uses as its carbon source for generating biomass are referred to as "carbon sources" in biology. It is possible for organic or inorganic sources of carbon. Heterotrophs must use organic molecules as both are a source of carbon and energy, in contrast to autotrophs, which can use inorganic materials as both a source of carbon and an abiotic source of energy, such as, for instance, inorganic chemical energy or light (photoautotrophs) (chemolithotrophs). The carbon cycle, which begins with a carbon source that is inorganic, such as carbon dioxide and progresses through the carbon fixation process, includes the biological use of carbon as one of its components.[1] Types of organism by carbon source Heterotrophs Autotrophs Document 4::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Life on earth is carbon-based. organisms need not only energy but also carbon ________ for building bodies? A. ions B. monoxide C. atoms D. crystals Answer:
sciq-2255
multiple_choice
What type of winds blow only over a limited area?
[ "Planetary", "periodic", "local", "trade" ]
C
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 2::: Atmospheric circulation of a planet is largely specific to the planet in question and the study of atmospheric circulation of exoplanets is a nascent field as direct observations of exoplanet atmospheres are still quite sparse. However, by considering the fundamental principles of fluid dynamics and imposing various limiting assumptions, a theoretical understanding of atmospheric motions can be developed. This theoretical framework can also be applied to planets within the Solar System and compared against direct observations of these planets, which have been studied more extensively than exoplanets, to validate the theory and understand its limitations as well. The theoretical framework first considers the Navier–Stokes equations, the governing equations of fluid motion. Then, limiting assumptions are imposed to produce simplified models of fluid motion specific to large scale motion atmospheric dynamics. These equations can then be studied for various conditions (i.e. fast vs. slow planetary rotation rate, stably stratified vs. unstably stratified atmosphere) to see how a planet's characteristics would impact its atmospheric circulation. For example, a planet may fall into one of two regimes based on its rotation rate: geostrophic balance or cyclostrophic balance. Atmospheric motions Coriolis force When considering atmospheric circulation we tend to take the planetary body as the frame of reference. In fact, this is a non-inertial frame of reference which has acceleration due to the planet's rotation about its axis. Coriolis force is the force that acts on objects moving within the planetary frame of reference, as a result of the planet's rotation. Mathematically, the acceleration due to Coriolis force can be written as: where is the flow velocity is the planet's angular velocity vector This force acts perpendicular to the flow and velocity and the planet's angular velocity vector, and comes into play when considering the atmospheric motion of a rotat Document 3::: The log wind profile is a semi-empirical relationship commonly used to describe the vertical distribution of horizontal mean wind speeds within the lowest portion of the planetary boundary layer. The relationship is well described in the literature. The logarithmic profile of wind speeds is generally limited to the lowest 100 m of the atmosphere (i.e., the surface layer of the atmospheric boundary layer). The rest of the atmosphere is composed of the remaining part of the planetary boundary layer (up to around 1000 m) and the troposphere or free atmosphere. In the free atmosphere, geostrophic wind relationships should be used. Definition The equation to estimate the mean wind speed () at height (meters) above the ground is: where is the friction velocity (m s−1), is the Von Kármán constant (~0.41), is the zero plane displacement (in metres), is the surface roughness (in meters), and is a stability term where is the Obukhov length from Monin-Obukhov similarity theory. Under neutral stability conditions, and drops out and the equation is simplified to, Zero-plane displacement () is the height in meters above the ground at which zero mean wind speed is achieved as a result of flow obstacles such as trees or buildings. This displacement can be approximated as 2/3 to 3/4 of the average height of the obstacles. For example, if estimating winds over a forest canopy of height 30 m, the zero-plane displacement could be estimated as d = 20 m. Roughness length () is a corrective measure to account for the effect of the roughness of a surface on wind flow. That is, the value of the roughness length depends on the terrain. The exact value is subjective and references indicate a range of values, making it difficult to give definitive values. In most cases, references present a tabular format with the value of given for certain terrain descriptions. For example, for very flat terrain (snow, desert) the roughness length may be in the range 0.001 to 0.005 m. Si Document 4::: In meteorology, wind speed, or wind flow speed, is a fundamental atmospheric quantity caused by air moving from high to low pressure, usually due to changes in temperature. Wind speed is now commonly measured with an anemometer. Wind speed affects weather forecasting, aviation and maritime operations, construction projects, growth and metabolism rate of many plant species, and has countless other implications. Wind direction is usually almost parallel to isobars (and not perpendicular, as one might expect), due to Earth's rotation. Units The metre per second (m/s) is the SI unit for velocity and the unit recommended by the World Meteorological Organization for reporting wind speeds, and is amongst others used in weather forecasts in the Nordic countries. Since 2010 the International Civil Aviation Organization (ICAO) also recommends meters per second for reporting wind speed when approaching runways, replacing their former recommendation of using kilometres per hour (km/h). For historical reasons, other units such as miles per hour (mph), knots (kn) or feet per second (ft/s) are also sometimes used to measure wind speeds. Historically, wind speeds have also been classified using the Beaufort scale, which is based on visual observations of specifically defined wind effects at sea or on land. Factors affecting wind speed Wind speed is affected by a number of factors and situations, operating on varying scales (from micro to macro scales). These include the pressure gradient, Rossby waves and jet streams, and local weather conditions. There are also links to be found between wind speed and wind direction, notably with the pressure gradient and terrain conditions. Pressure gradient is a term to describe the difference in air pressure between two points in the atmosphere or on the surface of the Earth. It is vital to wind speed, because the greater the difference in pressure, the faster the wind flows (from the high to low pressure) to balance out the variation. Th The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of winds blow only over a limited area? A. Planetary B. periodic C. local D. trade Answer:
sciq-831
multiple_choice
Mechanical energy can also usually be expressed as the sum of kinetic energy and what other kind of energy?
[ "partial energy", "directional energy", "potential energy", "reflective energy" ]
C
Relavent Documents: Document 0::: In physics, work is the energy transferred to or from an object via the application of force along a displacement. In its simplest form, for a constant force aligned with the direction of motion, the work equals the product of the force strength and the distance traveled. A force is said to do positive work if when applied it has a component in the direction of the displacement of the point of application. A force does negative work if it has a component opposite to the direction of the displacement at the point of application of the force. For example, when a ball is held above the ground and then dropped, the work done by the gravitational force on the ball as it falls is positive, and is equal to the weight of the ball (a force) multiplied by the distance to the ground (a displacement). If the ball is thrown upwards, the work done by the gravitational force is negative, and is equal to the weight multiplied by the displacement in the upwards direction. Both force and displacement are vectors. The work done is given by the dot product of the two vectors. When the force is constant and the angle between the force and the displacement is also constant, then the work done is given by: Work is a scalar quantity, so it has only magnitude and no direction. Work transfers energy from one place to another, or one form to another. The SI unit of work is the joule (J), the same unit as for energy. History The ancient Greek understanding of physics was limited to the statics of simple machines (the balance of forces), and did not include dynamics or the concept of work. During the Renaissance the dynamics of the Mechanical Powers, as the simple machines were called, began to be studied from the standpoint of how far they could lift a load, in addition to the force they could apply, leading eventually to the new concept of mechanical work. The complete dynamic theory of simple machines was worked out by Italian scientist Galileo Galilei in 1600 in Le Meccaniche (On Me Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: Mechanical impedance is a measure of how much a structure resists motion when subjected to a harmonic force. It relates forces with velocities acting on a mechanical system. The mechanical impedance of a point on a structure is the ratio of the force applied at a point to the resulting velocity at that point. Mechanical impedance is the inverse of mechanical admittance or mobility. The mechanical impedance is a function of the frequency of the applied force and can vary greatly over frequency. At resonance frequencies, the mechanical impedance will be lower, meaning less force is needed to cause a structure to move at a given velocity. A simple example of this is pushing a child on a swing. For the greatest swing amplitude, the frequency of the pushes must be near the resonant frequency of the system. Where, is the force vector, is the velocity vector, is the impedance matrix and is the angular frequency. Mechanical impedance is the ratio of a potential (e.g., force) to a flow (e.g., velocity) where the arguments of the real (or imaginary) parts of both increase linearly with time. Examples of potentials are: force, sound pressure, voltage, temperature. Examples of flows are: velocity, volume velocity, current, heat flow. Impedance is the reciprocal of mobility. If the potential and flow quantities are measured at the same point then impedance is referred as driving point impedance; otherwise, transfer impedance. Resistance - the real part of an impedance. Reactance - the imaginary part of an impedance. See also Acoustic impedance Frequency response Impedance analogy Linear response function Document 3::: This is a list of topics that are included in high school physics curricula or textbooks. Mathematical Background SI Units Scalar (physics) Euclidean vector Motion graphs and derivatives Pythagorean theorem Trigonometry Motion and forces Motion Force Linear motion Linear motion Displacement Speed Velocity Acceleration Center of mass Mass Momentum Newton's laws of motion Work (physics) Free body diagram Rotational motion Angular momentum (Introduction) Angular velocity Centrifugal force Centripetal force Circular motion Tangential velocity Torque Conservation of energy and momentum Energy Conservation of energy Elastic collision Inelastic collision Inertia Moment of inertia Momentum Kinetic energy Potential energy Rotational energy Electricity and magnetism Ampère's circuital law Capacitor Coulomb's law Diode Direct current Electric charge Electric current Alternating current Electric field Electric potential energy Electron Faraday's law of induction Ion Inductor Joule heating Lenz's law Magnetic field Ohm's law Resistor Transistor Transformer Voltage Heat Entropy First law of thermodynamics Heat Heat transfer Second law of thermodynamics Temperature Thermal energy Thermodynamic cycle Volume (thermodynamics) Work (thermodynamics) Waves Wave Longitudinal wave Transverse waves Transverse wave Standing Waves Wavelength Frequency Light Light ray Speed of light Sound Speed of sound Radio waves Harmonic oscillator Hooke's law Reflection Refraction Snell's law Refractive index Total internal reflection Diffraction Interference (wave propagation) Polarization (waves) Vibrating string Doppler effect Gravity Gravitational potential Newton's law of universal gravitation Newtonian constant of gravitation See also Outline of physics Physics education Document 4::: A machine is a physical system using power to apply forces and control movement to perform an action. The term is commonly applied to artificial devices, such as those employing engines or motors, but also to natural biological macromolecules, such as molecular machines. Machines can be driven by animals and people, by natural forces such as wind and water, and by chemical, thermal, or electrical power, and include a system of mechanisms that shape the actuator input to achieve a specific application of output forces and movement. They can also include computers and sensors that monitor performance and plan movement, often called mechanical systems. Renaissance natural philosophers identified six simple machines which were the elementary devices that put a load into motion, and calculated the ratio of output force to input force, known today as mechanical advantage. Modern machines are complex systems that consist of structural elements, mechanisms and control components and include interfaces for convenient use. Examples include: a wide range of vehicles, such as trains, automobiles, boats and airplanes; appliances in the home and office, including computers, building air handling and water handling systems; as well as farm machinery, machine tools and factory automation systems and robots. Etymology The English word machine comes through Middle French from Latin , which in turn derives from the Greek (Doric , Ionic 'contrivance, machine, engine', a derivation from 'means, expedient, remedy'). The word mechanical (Greek: ) comes from the same Greek roots. A wider meaning of 'fabric, structure' is found in classical Latin, but not in Greek usage. This meaning is found in late medieval French, and is adopted from the French into English in the mid-16th century. In the 17th century, the word machine could also mean a scheme or plot, a meaning now expressed by the derived machination. The modern meaning develops out of specialized application of the term to st The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Mechanical energy can also usually be expressed as the sum of kinetic energy and what other kind of energy? A. partial energy B. directional energy C. potential energy D. reflective energy Answer:
sciq-7987
multiple_choice
"red-shift" refers to a shift toward red in the spectrum from what celestial bodies?
[ "comets", "galaxies", "stars", "planets" ]
C
Relavent Documents: Document 0::: A photometric redshift is an estimate for the recession velocity of an astronomical object such as a galaxy or quasar, made without measuring its spectrum. The technique uses photometry (that is, the brightness of the object viewed through various standard filters, each of which lets through a relatively broad passband of colours, such as red light, green light, or blue light) to determine the redshift, and hence, through Hubble's law, the distance, of the observed object. The technique was developed in the 1960s, but was largely replaced in the 1970s and 1980s by spectroscopic redshifts, using spectroscopy to observe the frequency (or wavelength) of characteristic spectral lines, and measure the shift of these lines from their laboratory positions. The photometric redshift technique has come back into mainstream use since 2000, as a result of large sky surveys conducted in the late 1990s and 2000s which have detected a large number of faint high-redshift objects, and telescope time limitations mean that only a small fraction of these can be observed by spectroscopy. Photometric redshifts were originally determined by calculating the expected observed data from a known emission spectrum at a range of redshifts. The technique relies upon the spectrum of radiation being emitted by the object having strong features that can be detected by the relatively crude filters. As photometric filters are sensitive to a range of wavelengths, and the technique relies on making many assumptions about the nature of the spectrum at the light-source, errors for these sorts of measurements can range up to δz = 0.5, and are much less reliable than spectroscopic determinations. In the absence of sufficient telescope time to determine a spectroscopic redshift for each object, the technique of photometric redshifts provides a method to determine an at least qualitative characterization of a redshift. For example, if a Sun-like spectrum had a redshift of z = 1, it would be brightest in Document 1::: Astrophysics is a science that employs the methods and principles of physics and chemistry in the study of astronomical objects and phenomena. As one of the founders of the discipline, James Keeler, said, Astrophysics "seeks to ascertain the nature of the heavenly bodies, rather than their positions or motions in space–what they are, rather than where they are." Among the subjects studied are the Sun (solar physics), other stars, galaxies, extrasolar planets, the interstellar medium and the cosmic microwave background. Emissions from these objects are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, astrophysicists apply concepts and methods from many disciplines of physics, including classical mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics. In practice, modern astronomical research often involves a substantial amount of work in the realms of theoretical and observational physics. Some areas of study for astrophysicists include their attempts to determine the properties of dark matter, dark energy, black holes, and other celestial bodies; and the origin and ultimate fate of the universe. Topics also studied by theoretical astrophysicists include Solar System formation and evolution; stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity, special relativity, quantum and physical cosmology, including string cosmology and astroparticle physics. History Astronomy is an ancient science, long separated from the study of terrestrial physics. In the Aristotelian worldview, bodies in the sky appeared to be unchanging spheres whose only motion was uniform motion in a circle, while the earthl Document 2::: In astronomy, a redshift survey is a survey of a section of the sky to measure the redshift of astronomical objects: usually galaxies, but sometimes other objects such as galaxy clusters or quasars. Using Hubble's law, the redshift can be used to estimate the distance of an object from Earth. By combining redshift with angular position data, a redshift survey maps the 3D distribution of matter within a field of the sky. These observations are used to measure detailed statistical properties of the large-scale structure of the universe. In conjunction with observations of early structure in the cosmic microwave background, these results can place strong constraints on cosmological parameters such as the average matter density and the Hubble constant. Generally the construction of a redshift survey involves two phases: first the selected area of the sky is imaged with a wide-field telescope, then galaxies brighter than a defined limit are selected from the resulting images as non-pointlike objects; optionally, colour selection may also be used to assist discrimination between stars and galaxies. Secondly, the selected galaxies are observed by spectroscopy, most commonly at visible wavelengths, to measure the wavelengths of prominent spectral lines; comparing observed and laboratory wavelengths then gives the redshift for each galaxy. The Great Wall, a vast conglomeration of galaxies over 500 million light-years wide, provides a dramatic example of a large-scale structure that redshift surveys can detect. The first systematic redshift survey was the CfA Redshift Survey of around 2,200 galaxies, started in 1977 with the initial data collection completed in 1982. This was later extended to the CfA2 redshift survey of 15,000 galaxies, completed in the early 1990s. These early redshift surveys were limited in size by taking a spectrum for one galaxy at a time; from the 1990s, the development of fibre-optic spectrographs and multi-slit spectrographs enabled spectra f Document 3::: CLASS B1359+154 is a quasar, or quasi-stellar object, that has a redshift of 3.235. A group of three foreground galaxies at a redshift of about 1 are behaving as gravitational lenses. The result is a rare example of a sixfold multiply imaged quasar. See also Twin Quasar Einstein Cross Document 4::: The source counts distribution of radio-sources from a radio-astronomical survey is the cumulative distribution of the number of sources (N) brighter than a given flux density (S). As it is usually plotted on a log-log scale its distribution is known as the log N – log S plot. It is one of several cosmological tests that were conceived in the 1930s to check the viability of and compare new cosmological models. Early work to catalogue radio sources aimed to determine the source count distribution as a discriminating test of different cosmological models. For example, a uniform distribution of radio sources at low redshift, such as might be found in a 'steady-state Euclidean universe,' would produce a slope of −1.5 in the cumulative distribution of log(N) versus log(S). Data from the early Cambridge 2C survey (published 1955) apparently implied a (log(N), log(S)) slope of nearly −3.0. This appeared to invalidate the steady state theory of Fred Hoyle, Hermann Bondi and Thomas Gold. Unfortunately many of these weaker sources were subsequently found to be due to 'confusion' (the blending of several weak sources in the side-lobes of the interferometer, producing a stronger response). By contrast, analysis from the contemporaneous Mills Cross data (by Slee and Mills) were consistent with an index of −1.5. Later and more accurate surveys from Cambridge, 3C, 3CR, and 4C, also showed source count slopes steeper than −1.5, though by a smaller margin than 2C. This convinced some cosmologists that the steady state theory was wrong, although residual problems with confusion provided some defense for Hoyle and his colleagues. The immediate interest in testing the steady-state theory through source-counts was reduced by the discovery of the 3K microwave background radiation in the mid 1960s, which essentially confirmed the Big-Bang model. Later radio survey data have shown a complex picture — the 3C and 4C claims appear to hold up, while at fainter levels the source counts The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. "red-shift" refers to a shift toward red in the spectrum from what celestial bodies? A. comets B. galaxies C. stars D. planets Answer:
ai2_arc-373
multiple_choice
The depth of Lake Superior can be measured by sending sound waves to the bottom and measuring the period of time it takes for the reflected sound waves to return to the surface. Which of the following would indicate a shallow depth?
[ "There is no return signal.", "The return signal is very weak.", "The return signal appears almost instantaneously.", "The return signal comes back at a different speed" ]
C
Relavent Documents: Document 0::: A sound speed profile shows the speed of sound in water at different vertical levels. It has two general representations: tabular form, with pairs of columns corresponding to ocean depth and the speed of sound at that depth, respectively. a plot of the speed of sound in the ocean as a function of depth, where the vertical axis corresponds to the depth and the horizontal axis corresponds to the sound speed. By convention, the horizontal axis is placed at the top of the plot, and the vertical axis is labeled with values that increase from top to bottom, thus reproducing visually the ocean from its surface downward. Table 1 shows an example of the first representation; figure 1 shows the same information using the second representation. Although given as a function of depth, the speed of sound in the ocean does not depend solely on depth. Rather, for a given depth, the speed of sound depends on the temperature at that depth, the depth itself, and the salinity at that depth, in that order. The speed of sound in the ocean at different depths can be measured directly, e.g., by using a velocimeter, or, using measurements of temperature and salinity at different depths, it can be calculated using a number of different sound speed formulae which have been developed. Examples of such formulae include those by Wilson, Chen and Millero ,and Mackenzie. Each such formulation applies within specific limits of the independent variables. From the shape of the sound speed profile in figure 1, one can see the effect of the order of importance of temperature and depth on sound speed. Near the surface, where temperatures are generally highest, the sound speed is often highest because the effect of temperature on sound speed dominates. Further down the water column, sound speed also decreases as temperature decreases in the ocean thermocline, and sound speed also decreases. At a certain point, however, the effect of depth, i.e., pressure, begins to dominate, and the sound s Document 1::: The SOFAR channel (short for sound fixing and ranging channel), or deep sound channel (DSC), is a horizontal layer of water in the ocean at which depth the speed of sound is at its minimum. The SOFAR channel acts as a waveguide for sound, and low frequency sound waves within the channel may travel thousands of miles before dissipating. An example was reception of coded signals generated by the Navy chartered ocean surveillance vessel Cory Chouest off Heard Island, located in the southern Indian Ocean (between Africa, Australia and Antarctica), by hydrophones in portions of all five major ocean basins and as distant as the North Atlantic and North Pacific. This phenomenon is an important factor in ocean surveillance. The deep sound channel was discovered and described independently by Maurice Ewing and J. Lamar Worzel at Columbia University and Leonid Brekhovskikh at the Lebedev Physics Institute in the 1940s. In testing the concept in 1944 Ewing and Worzel hung a hydrophone from Saluda, a sailing vessel assigned to the Underwater Sound Laboratory, with a second ship setting off explosive charges up to away. Principle Temperature is the dominant factor in determining the speed of sound in the ocean. In areas of higher temperatures (e.g. near the ocean surface), there is higher sound speed. Temperature decreases with depth, with sound speed decreasing accordingly until temperature becomes stable and pressure becomes the dominant factor. The axis of the SOFAR channel lies at the point of minimum sound speed at a depth where pressure begins dominating temperature and sound speed increases. This point is at the bottom of the thermocline and the top of the deep isothermal layer and thus has some seasonal variance. Other acoustic ducts exist, particularly in the upper mixed layer, but the ray paths lose energy with either surface or bottom reflections. In the SOFAR channel, low frequencies, in particular, are refracted back into the duct so that energy loss is small Document 2::: Long delayed echoes (LDEs) are radio echoes which return to the sender several seconds after a radio transmission has occurred. Delays of longer than 2.7 seconds are considered LDEs. LDEs have a number of proposed scientific origins. History These echoes were first observed in 1927 by civil engineer and amateur radio operator Jørgen Hals from his home near Oslo, Norway. Hals had repeatedly observed an unexpected second radio echo with a significant time delay after the primary radio echo ended. Unable to account for this strange phenomenon, he wrote a letter to Norwegian physicist Carl Størmer, explaining the event: At the end of the summer of 1927 I repeatedly heard signals from the Dutch short-wave transmitting station PCJJ at Eindhoven. At the same time as I heard these I also heard echoes. I heard the usual echo which goes round the Earth with an interval of about 1/7 of a second as well as a weaker echo about three seconds after the principal echo had gone. When the principal signal was especially strong, I suppose the amplitude for the last echo three seconds later, lay between 1/10 and 1/20 of the principal signal in strength. From where this echo comes I cannot say for the present, I can only confirm that I really heard it. Physicist Balthasar van der Pol helped Hals and Stormer investigate the echoes, but due to the sporadic nature of the echo events and variations in time-delay, did not find a suitable explanation. Long delayed echoes have been heard sporadically from the first observations in 1927 and up to the present day. Five hypotheses Shlionskiy lists 15 possible natural explanations in two groups: reflections in outer space, and reflections within the Earth's magnetosphere. Vidmar and Crawford suggest five of them are the most likely. Sverre Holm, professor of signal processing at the University of Oslo details those five; in summary, Ducting in the Earth's magnetosphere and ionosphere at low HF frequencies (1–4 MHz). Some similarities with Document 3::: In nautical terms, the word sound is used to describe the process of determining the depth of water in a tank or under a ship. Tanks are sounded to determine if they are full (for cargo tanks) or empty (to determine if a ship has been holed) and for other reasons. Soundings may also be taken of the water around a ship if it is in shallow water to aid in navigation. Methods Tanks may be sounded manually or with electronic or mechanical automated equipment. Manual sounding is undertaken with a sounding line- a rope with a weight on the end. Per the Code of Federal Regulations, most steel vessels with integral tanks are required to have sounding tubes and reinforcing plates under the tubes which the weight strikes when it reaches the bottom of the tank. Sounding tubes are steel pipes which lead upwards from the ships' tanks to a place on deck. Electronic and mechanical automated sounding may be undertaken with a variety of equipment including float level sensors, capacitance sensors, sonar, etc. See also Depth sounding Sources Code of Federal Regulations, Title 46 Nautical terminology Navigational aids Oceanography Document 4::: The target strength or acoustic size is a measure of the area of a sonar target. This is usually quantified as a number of decibels. For fish such as salmon, the target size varies with the length of the fish and a 5 cm fish could have a target strength of about -50 dB. Target strength (TS) is equal to 10 log10(σbs/(1 m2)) dB, where σbs is the differential backscattering cross section. Backscattering cross section is 4πσbs. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The depth of Lake Superior can be measured by sending sound waves to the bottom and measuring the period of time it takes for the reflected sound waves to return to the surface. Which of the following would indicate a shallow depth? A. There is no return signal. B. The return signal is very weak. C. The return signal appears almost instantaneously. D. The return signal comes back at a different speed Answer:
sciq-10430
multiple_choice
In a car race on a circular track, where the start and finish line are the same, what quantity is neglible?
[ "total distance", "total displacement", "total acceleration", "partial displacement" ]
B
Relavent Documents: Document 0::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 1::: The Texas Math and Science Coaches Association or TMSCA is an organization for coaches of academic University Interscholastic League teams in Texas middle schools and high schools, specifically those that compete in mathematics and science-related tests. Events There are four events in the TMSCA at both the middle and high school level: Number Sense, General Mathematics, Calculator Applications, and General Science. Number Sense is an 80-question exam that students are given only 10 minutes to solve. Additionally, no scratch work or paper calculations are allowed. These questions range from simple calculations such as 99+98 to more complicated operations such as 1001×1938. Each calculation is able to be done with a certain trick or shortcut that makes the calculations easier. The high school exam includes calculus and other difficult topics in the questions also with the same rules applied as to the middle school version. It is well known that the grading for this event is particularly stringent as errors such as writing over a line or crossing out potential answers are considered as incorrect answers. General Mathematics is a 50-question exam that students are given only 40 minutes to solve. These problems are usually more challenging than questions on the Number Sense test, and the General Mathematics word problems take more thinking to figure out. Every problem correct is worth 5 points, and for every problem incorrect, 2 points are deducted. Tiebreakers are determined by the person that misses the first problem and by percent accuracy. Calculator Applications is an 80-question exam that students are given only 30 minutes to solve. This test requires practice on the calculator, knowledge of a few crucial formulas, and much speed and intensity. Memorizing formulas, tips, and tricks will not be enough. In this event, plenty of practice is necessary in order to master the locations of the keys and develop the speed necessary. All correct questions are worth 5 Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 3::: The Worcester County Mathematics League (WOCOMAL) is a high school mathematics league composed of 32 high schools, most of which are in Worcester County, Massachusetts. It organizes seven mathematics competitions per year, four at the "varsity" level (up to grade 12) and three at the "freshman" level (up to grade nine, including middle school students). In the 2013–14 school year, WOCOMAL began allowing older students to compete in the freshman level competitions, calling this level of participation "junior varsity." Top schools from the varsity competition are selected to attend the Massachusetts Association of Math Leagues state competition. Contest format A competition consists of four, or nine rounds at the Freshman level or five rounds at the Varsity level. The team round consists of eight problems at the Freshman level and nine at the Varsity level. Regardless of level, each student competes in three of the individual rounds. In each individual round, competing students have ten minutes to answer three questions, worth one, two, and three points. The maximum meet score for a student is eighteen points. History The Worcester County Mathematics League was originally formed in 1963 as the Southern Worcester County Mathematics League (Sowocomal). The winningest school in league history is St. John's High School, with twelve league championships in the fourteen-year span between 1983–84 and 1996–97. Algonquin Regional High School won six consecutive league championships from 1998–99 to 2003–04. Current events The league currently has members from Western Middlesex Counties. In the past, it has had members from Hampshire County, Massachusetts, and Windham County, Connecticut. In the 2015–16 season, the champion of both the varsity division and the freshman division was the Advanced Math and Science Academy Charter School. League members AMSA Charter, Worcester Academy, and Mass Academy and took first, second, and third place among small-sized schools a Document 4::: Advanced Placement (AP) Physics C: Mechanics (also known as AP Mechanics) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a one-semester calculus-based university course in mechanics. The content of Physics C: Mechanics overlaps with that of AP Physics 1, but Physics 1 is algebra-based, while Physics C is calculus-based. Physics C: Mechanics may be combined with its electricity and magnetism counterpart to form a year-long course that prepares for both exams. Course content Intended to be equivalent to an introductory college course in mechanics for physics or engineering majors, the course modules are: Kinematics Newton's laws of motion Work, energy and power Systems of particles and linear momentum Circular motion and rotation Oscillations and gravitation. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a Calculus I class. This course is often compared to AP Physics 1: Algebra Based for its similar course material involving kinematics, work, motion, forces, rotation, and oscillations. However, AP Physics 1: Algebra Based lacks concepts found in Calculus I, like derivatives or integrals. This course may be combined with AP Physics C: Electricity and Magnetism to make a unified Physics C course that prepares for both exams. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Mechanics is separate from the AP examination for AP Physics C: Electricity and Magnetism. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday aftern The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. In a car race on a circular track, where the start and finish line are the same, what quantity is neglible? A. total distance B. total displacement C. total acceleration D. partial displacement Answer:
sciq-9204
multiple_choice
How do seed plants benefit from herbivores?
[ "rainfall of seeds", "dispersal of seeds", "consumption of seeds", "radiation of seeds" ]
B
Relavent Documents: Document 0::: Seed predation, often referred to as granivory, is a type of plant-animal interaction in which granivores (seed predators) feed on the seeds of plants as a main or exclusive food source, in many cases leaving the seeds damaged and not viable. Granivores are found across many families of vertebrates (especially mammals and birds) as well as invertebrates (mainly insects); thus, seed predation occurs in virtually all terrestrial ecosystems. Seed predation is commonly divided into two distinctive temporal categories, pre-dispersal and post-dispersal predation, which affect the fitness of the parental plant and the dispersed offspring (the seed), respectively. Mitigating pre- and post-dispersal predation may involve different strategies. To counter seed predation, plants have evolved both physical defenses (e.g. shape and toughness of the seed coat) and chemical defenses (secondary compounds such as tannins and alkaloids). However, as plants have evolved seed defenses, seed predators have adapted to plant defenses (e.g., ability to detoxify chemical compounds). Thus, many interesting examples of coevolution arise from this dynamic relationship. Seeds and their defenses Plant seeds are important sources of nutrition for animals across most ecosystems. Seeds contain food storage organs (e.g., endosperm) that provide nutrients to the developing plant embryo (cotyledon). This makes seeds an attractive food source for animals because they are a highly concentrated and localized nutrient source in relation to other plant parts. Seeds of many plants have evolved a variety of defenses to deter predation. Seeds are often contained inside protective structures or fruit pulp that encapsulate seeds until they are ripe. Other physical defenses include spines, hairs, fibrous seed coats and hard endosperm. Seeds, especially in arid areas, may have a mucilaginous seed coat that can glue soil to seed hiding it from granivores. Some seeds have evolved strong anti-herbivore chemical Document 1::: Tolerance is the ability of plants to mitigate the negative fitness effects caused by herbivory. It is one of the general plant defense strategies against herbivores, the other being resistance, which is the ability of plants to prevent damage (Strauss and Agrawal 1999). Plant defense strategies play important roles in the survival of plants as they are fed upon by many different types of herbivores, especially insects, which may impose negative fitness effects (Strauss and Zangerl 2002). Damage can occur in almost any part of the plants, including the roots, stems, leaves, flowers and seeds (Strauss and Zergerl 2002). In response to herbivory, plants have evolved a wide variety of defense mechanisms and although relatively less studied than resistance strategies, tolerance traits play a major role in plant defense (Strauss and Zergerl 2002, Rosenthal and Kotanen 1995). Traits that confer tolerance are controlled genetically and therefore are heritable traits under selection (Strauss and Agrawal 1999). Many factors intrinsic to the plants, such as growth rate, storage capacity, photosynthetic rates and nutrient allocation and uptake, can affect the extent to which plants can tolerate damage (Rosenthal and Kotanen 1994). Extrinsic factors such as soil nutrition, carbon dioxide levels, light levels, water availability and competition also have an effect on tolerance (Rosenthal and Kotanen 1994). History of the study of plant tolerance Studies of tolerance to herbivory has historically been the focus of agricultural scientists (Painter 1958; Bardner and Fletcher 1974). Tolerance was actually initially classified as a form of resistance (Painter 1958). Agricultural studies on tolerance, however, are mainly concerned with the compensatory effect on the plants' yield and not its fitness, since it is of economical interest to reduce crop losses due to herbivory by pests (Trumble 1993; Bardner and Fletcher 1974). One surprising discovery made about plant tolerance was th Document 2::: Xenohormesis is a hypothesis that posits that certain molecules such as plant polyphenols, which indicate stress in the plants, can have benefits of another organism (heterotrophs) which consumes it. Or in simpler terms, xenohormesis is interspecies hormesis. The expected benefits include improve lifespan and fitness, by activating the animal's cellular stress response. This may be useful to evolve, as it gives possible cues about the state of the environment. If the plants an animal is eating have increased polyphenol content, it means the plant is under stress and may signal famines. Using the chemical cues the heterotophs could preemptively prepare and defend itself before conditions worsen. A possible example may be resveratrol, which is famously found in red wine, which modulates over two dozen receptors and enzymes in mammals. Xenohormesis could also explain several phenomena seen in the ethno-pharmaceutical (traditional medicine) side of things. Such as in the case of cinnamon, which in several studies have shown to help treat type 2 diabetes, but hasn't been confirmed in meta analysis. This can be caused by the cinnamon used in one study differing from the other in xenohormic properties. Some explanations as to why this works, is first and foremost, it could be a coincidence. Especially for cases which partially venomous products, cause a positive stress in the organism. The second is that it is a shared evolutionary attribute, as both animals and plants share a huge amount of homology between their pathways. The third is that there is evolutionary pressure to evolve to better respond to the molecules. The latter is proposed mainly by Howitz and his team. There also might be the problem that our focus on maximizing the crop output, may be losing many of the xenohormetic advantages. Although the ideal conditions will cause the plant to increase its crop output it can also be argued it is loosing stress and therefore the hormesis. The honeybee colony colla Document 3::: The soil seed bank is the natural storage of seeds, often dormant, within the soil of most ecosystems. The study of soil seed banks started in 1859 when Charles Darwin observed the emergence of seedlings using soil samples from the bottom of a lake. The first scientific paper on the subject was published in 1882 and reported on the occurrence of seeds at different soil depths. Weed seed banks have been studied intensely in agricultural science because of their important economic impacts; other fields interested in soil seed banks include forest regeneration and restoration ecology. Henry David Thoreau wrote that the contemporary popular belief explaining the succession of a logged forest, specifically to trees of a dissimilar species to the trees cut down, was that seeds either spontaneously generated in the soil, or sprouted after lying dormant for centuries. However, he dismissed this idea, noting that heavy nuts unsuited for distribution by wind were distributed instead by animals. Background Many taxa have been classified according to the longevity of their seeds in the soil seed bank. Seeds of transient species remain viable in the soil seed bank only to the next opportunity to germinate, while seeds of persistent species can survive longer than the next opportunity—often much longer than one year. Species with seeds that remain viable in the soil longer than five years form the long-term persistent seed bank, while species whose seeds generally germinate or die within one to five years are called short-term persistent. A typical long-term persistent species is Chenopodium album (Lambsquarters); its seeds commonly remain viable in the soil for up to 40 years and in rare situations perhaps as long as 1,600 years. A species forming no soil seed bank at all (except the dry season between ripening and the first autumnal rains) is Agrostemma githago (Corncockle), which was formerly a widespread cereal weed. Seed longevity Longevity of seeds is very var Document 4::: Agrostology (from Greek , agrōstis, "type of grass"; and , -logia), sometimes graminology, is the scientific study of the grasses (the family Poaceae, or Gramineae). The grasslike species of the sedge family (Cyperaceae), the rush family (Juncaceae), and the bulrush or cattail family (Typhaceae) are often included with the true grasses in the category of graminoid, although strictly speaking these are not included within the study of agrostology. In contrast to the word graminoid, the words gramineous and graminaceous are normally used to mean "of, or relating to, the true grasses (Poaceae)". Agrostology has importance in the maintenance of wild and grazed grasslands, agriculture (crop plants such as rice, maize, sugarcane, and wheat are grasses, and many types of animal fodder are grasses), urban and environmental horticulture, turfgrass management and sod production, ecology, and conservation. Botanists that made important contributions to agrostology include: Jean Bosser Aimée Antoinette Camus Mary Agnes Chase Eduard Hackel Charles Edward Hubbard A. S. Hitchcock Ernst Gottlieb von Steudel Otto Stapf Joseph Dalton Hooker Norman Loftus Bor Jan-Frits Veldkamp William Derek Clayton Robert B Shaw Thomas Arthur Cope Grasses Agrostology 01 The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. How do seed plants benefit from herbivores? A. rainfall of seeds B. dispersal of seeds C. consumption of seeds D. radiation of seeds Answer:
sciq-7062
multiple_choice
How is the radioactive decay measured?
[ "half-life", "quarter-life", "carbon dating", "alpha emission" ]
A
Relavent Documents: Document 0::: Decay correction is a method of estimating the amount of radioactive decay at some set time before it was actually measured. Example of use Researchers often want to measure, say, medical compounds in the bodies of animals. It's hard to measure them directly, so it can be chemically joined to a radionuclide - by measuring the radioactivity, you can get a good idea of how the original medical compound is being processed. Samples may be collected and counted at short time intervals (ex: 1 and 4 hours). But they might be tested for radioactivity all at once. Decay correction is one way of working out what the radioactivity would have been at the time it was taken, rather than at the time it was tested. For example, the isotope copper-64, commonly used in medical research, has a half-life of 12.7 hours. If you inject a large group of animals at "time zero", but measure the radioactivity in their organs at two later times, the later groups must be "decay corrected" to adjust for the decay that has occurred between the two time points. Mathematics The formula for decay correcting is: where is the original activity count at time zero, is the activity at time "t", "λ" is the decay constant, and "t" is the elapsed time. The decay constant is where "" is the half-life of the radioactive material of interest. Example The decay correct might be used this way: a group of 20 animals is injected with a compound of interest on a Monday at 10:00 a.m. The compound is chemically joined to the isotope copper-64, which has a known half-life of 12.7 hours, or 764 minutes. After one hour, the 5 animals in the "one hour" group are killed, dissected, and organs of interest are placed in sealed containers to await measurement. This is repeated for another 5 animals, at 2 hours, and again at 4 hours. At this point, (say, 4:00 p.m., Monday) all the organs collected so far are measured for radioactivity (a proxy of the distribution of the compound of interest). The next day Document 1::: A radioactive tracer, radiotracer, or radioactive label is a chemical compound in which one or more atoms have been replaced by a radionuclide so by virtue of its radioactive decay it can be used to explore the mechanism of chemical reactions by tracing the path that the radioisotope follows from reactants to products. Radiolabeling or radiotracing is thus the radioactive form of isotopic labeling. In biological contexts, use of radioisotope tracers are sometimes called radioisotope feeding experiments. Radioisotopes of hydrogen, carbon, phosphorus, sulfur, and iodine have been used extensively to trace the path of biochemical reactions. A radioactive tracer can also be used to track the distribution of a substance within a natural system such as a cell or tissue, or as a flow tracer to track fluid flow. Radioactive tracers are also used to determine the location of fractures created by hydraulic fracturing in natural gas production. Radioactive tracers form the basis of a variety of imaging systems, such as, PET scans, SPECT scans and technetium scans. Radiocarbon dating uses the naturally occurring carbon-14 isotope as an isotopic label. Methodology Isotopes of a chemical element differ only in the mass number. For example, the isotopes of hydrogen can be written as 1H, 2H and 3H, with the mass number superscripted to the left. When the atomic nucleus of an isotope is unstable, compounds containing this isotope are radioactive. Tritium is an example of a radioactive isotope. The principle behind the use of radioactive tracers is that an atom in a chemical compound is replaced by another atom, of the same chemical element. The substituting atom, however, is a radioactive isotope. This process is often called radioactive labeling. The power of the technique is due to the fact that radioactive decay is much more energetic than chemical reactions. Therefore, the radioactive isotope can be present in low concentration and its presence detected by sensitive radiation Document 2::: Radiochemistry is the chemistry of radioactive materials, where radioactive isotopes of elements are used to study the properties and chemical reactions of non-radioactive isotopes (often within radiochemistry the absence of radioactivity leads to a substance being described as being inactive as the isotopes are stable). Much of radiochemistry deals with the use of radioactivity to study ordinary chemical reactions. This is very different from radiation chemistry where the radiation levels are kept too low to influence the chemistry. Radiochemistry includes the study of both natural and man-made radioisotopes. Main decay modes All radioisotopes are unstable isotopes of elements— that undergo nuclear decay and emit some form of radiation. The radiation emitted can be of several types including alpha, beta, gamma radiation, proton, and neutron emission along with neutrino and antiparticle emission decay pathways. 1. α (alpha) radiation—the emission of an alpha particle (which contains 2 protons and 2 neutrons) from an atomic nucleus. When this occurs, the atom's atomic mass will decrease by 4 units and the atomic number will decrease by 2. 2. β (beta) radiation—the transmutation of a neutron into an electron and a proton. After this happens, the electron is emitted from the nucleus into the electron cloud. 3. γ (gamma) radiation—the emission of electromagnetic energy (such as gamma rays) from the nucleus of an atom. This usually occurs during alpha or beta radioactive decay. These three types of radiation can be distinguished by their difference in penetrating power. Alpha can be stopped quite easily by a few centimetres of air or a piece of paper and is equivalent to a helium nucleus. Beta can be cut off by an aluminium sheet just a few millimetres thick and are electrons. Gamma is the most penetrating of the three and is a massless chargeless high-energy photon. Gamma radiation requires an appreciable amount of heavy metal radiation shielding (usually lead or Document 3::: Radiation Measurements is a monthly peer-reviewed scientific journal covering research on nuclear science and radiation physics. It was established in 1994 and is published by Elsevier. The current editors-in-chief are Eduardo Yukihara (Paul Scherrer Institute Radiation Protection and Security) and Adrie J.J. Bos (Delft University of Technology). Abstracting and indexing The journal is abstracted and indexed in: Chemical Abstracts Service Index Medicus/MEDLINE/PubMed Science Citation Index Expanded Current Contents/Physical, Chemical & Earth Sciences Scopus According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.898. Former titles history Radiation Measurements is derived from the following former titles: Nuclear Track Detection (1977-1978) Nuclear Tracks (1979-1981) Nuclear Tracks and Radiation Measurements (1982-1985) International Journal of Radiation Applications and Instrumentation. Part D. Nuclear Tracks and Radiation Measurements (1986-1992) Nuclear Tracks and Radiation Measurements (1993) Radiation Measurements (1994–present) Notes Document 4::: Radioecology is the branch of ecology concerning the presence of radioactivity in Earth’s ecosystems. Investigations in radioecology include field sampling, experimental field and laboratory procedures, and the development of environmentally predictive simulation models in an attempt to understand the migration methods of radioactive material throughout the environment. The practice consists of techniques from the general sciences of physics, chemistry, mathematics, biology, and ecology, coupled with applications in radiation protection. Radioecological studies provide the necessary data for dose estimation and risk assessment regarding radioactive pollution and its effects on human and environmental health. Radioecologists detect and evaluate the effects of ionizing radiation and radionuclides on ecosystems, and then assess their risks and dangers. Interest and studies in the area of radioecology significantly increased in order to ascertain and manage the risks involved as a result of the Chernobyl disaster. Radioecology arose in line with increasing nuclear activities, particularly following the Second World War in response to nuclear atomic weapons testing and the use of nuclear reactors to produce electricity. History Artificial radioactive affliction to Earth’s environment began with nuclear weapon testing during World War II, but did not become a prominent topic of public discussion until the 1980s. The Journal of Environmental Radioactivity (JER) was the first collection of literature on the subject, and its inception was not until 1984. As demand for construction of nuclear power plants increased, it became necessary for humankind to understand how radioactive material interacts with various ecosystems in order to prevent or minimize potential damage. The aftermath of Chernobyl was the first major employment of radioecological techniques to combat radioactive pollution from a nuclear power plant. Collection of radioecological data from the Chernobyl The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. How is the radioactive decay measured? A. half-life B. quarter-life C. carbon dating D. alpha emission Answer:
sciq-3609
multiple_choice
What is another term for a hand lens?
[ "projecting glass", "traversing glass", "magnifying glass", "seeing glass" ]
C
Relavent Documents: Document 0::: The calculation of glass properties (glass modeling) is used to predict glass properties of interest or glass behavior under certain conditions (e.g., during production) without experimental investigation, based on past data and experience, with the intention to save time, material, financial, and environmental resources, or to gain scientific insight. It was first practised at the end of the 19th century by A. Winkelmann and O. Schott. The combination of several glass models together with other relevant functions can be used for optimization and six sigma procedures. In the form of statistical analysis glass modeling can aid with accreditation of new data, experimental procedures, and measurement institutions (glass laboratories). History Historically, the calculation of glass properties is directly related to the founding of glass science. At the end of the 19th century the physicist Ernst Abbe developed equations that allow calculating the design of optimized optical microscopes in Jena, Germany, stimulated by co-operation with the optical workshop of Carl Zeiss. Before Ernst Abbe's time the building of microscopes was mainly a work of art and experienced craftsmanship, resulting in very expensive optical microscopes with variable quality. Now Ernst Abbe knew exactly how to construct an excellent microscope, but unfortunately, the required lenses and prisms with specific ratios of refractive index and dispersion did not exist. Ernst Abbe was not able to find answers to his needs from glass artists and engineers; glass making was not based on science at this time. In 1879 the young glass engineer Otto Schott sent Abbe glass samples with a special composition (lithium silicate glass) that he had prepared himself and that he hoped to show special optical properties. Following measurements by Ernst Abbe, Schott's glass samples did not have the desired properties, and they were also not as homogeneous as desired. Nevertheless, Ernst Abbe invited Otto Schott to work Document 1::: In nano-optics, a plasmonic lens generally refers to a lens for surface plasmon polaritons (SPPs), i.e. a device that redirects SPPs to converge towards a single focal point. Because SPPs can have very small wavelength, they can converge into a very small and very intense spot, much smaller than the free space wavelength and the diffraction limit. A simple example of a plasmonic lens is a series of concentric rings on a metal film. Any light that hits the film from free space at a 90 degree angle, known as the normal, will get coupled into a SPP (this part works like a diffraction grating coupler), and that SPP will be heading towards the center of the circles, which is the focal point. Another example is a tapered "dimple". In 2007, a novel, or technologically new, plasmonic lenses and waveguide by modulating light a mesoscale dielectric structure on a metallic film with arrayed nano-slits, which have constant depth but variant widths. The slits transport electromagnetic energy in the form of SPPs in nano meter sized waveguides and provide desired phase adjustments for manipulating the beam of light. The scientists claim that it is an improvement over other subwavelength imaging techniques, such as "superlenses", where the object and image are confined to the near field. These devices have been suggested for various applications that take advantage of the small size and high intensity of the SPPs at the focal point. These include photolithography, heat-assisted magnetic recording, microscopy, biophotonics, biological molecule sensors, and solar cells, as well as other applications. The term "plasmonic lens" is also sometimes used to describe something different: Any free-space lens (i.e., a lens that focuses free-space light, rather than SPPs), that has something to do with plasmonics. These often come up in discussions of superlenses. Document 2::: Magnification is the process of enlarging the apparent size, not physical size, of something. This enlargement is quantified by a size ratio called optical magnification. When this number is less than one, it refers to a reduction in size, sometimes called de-magnification. Typically, magnification is related to scaling up visuals or images to be able to see more detail, increasing resolution, using microscope, printing techniques, or digital processing. In all cases, the magnification of the image does not change the perspective of the image. Examples of magnification Some optical instruments provide visual aid by magnifying small or distant subjects. A magnifying glass, which uses a positive (convex) lens to make things look bigger by allowing the user to hold them closer to their eye. A telescope, which uses its large objective lens or primary mirror to create an image of a distant object and then allows the user to examine the image closely with a smaller eyepiece lens, thus making the object look larger. A microscope, which makes a small object appear as a much larger image at a comfortable distance for viewing. A microscope is similar in layout to a telescope except that the object being viewed is close to the objective, which is usually much smaller than the eyepiece. A slide projector, which projects a large image of a small slide on a screen. A photographic enlarger is similar. A zoom lens, a system of camera lens elements for which the focal length and angle of view can be varied. Size ratio (optical magnification) Optical magnification is the ratio between the apparent size of an object (or its size in an image) and its true size, and thus it is a dimensionless number. Optical magnification is sometimes referred to as "power" (for example "10× power"), although this can lead to confusion with optical power. Linear or transverse magnification For real images, such as images projected on a screen, size means a linear dimension (measured, for examp Document 3::: In 2-dimensional geometry, a lens is a convex region bounded by two circular arcs joined to each other at their endpoints. In order for this shape to be convex, both arcs must bow outwards (convex-convex). This shape can be formed as the intersection of two circular disks. It can also be formed as the union of two circular segments (regions between the chord of a circle and the circle itself), joined along a common chord. Types If the two arcs of a lens have equal radius, it is called a symmetric lens, otherwise is an asymmetric lens. The vesica piscis is one form of a symmetric lens, formed by arcs of two circles whose centers each lie on the opposite arc. The arcs meet at angles of 120° at their endpoints. Area Symmetric The area of a symmetric lens can be expressed in terms of the radius R and arc lengths θ in radians: Asymmetric The area of an asymmetric lens formed from circles of radii R and r with distance d between their centers is where is the area of a triangle with sides d, r, and R. The two circles overlap if . For sufficiently large , the coordinate of the lens centre lies between the coordinates of the two circle centers: For small the coordinate of the lens centre lies outside the line that connects the circle centres: By eliminating y from the circle equations and the abscissa of the intersecting rims is . The sign of x, i.e., being larger or smaller than , distinguishes the two cases shown in the images. The ordinate of the intersection is . Negative values under the square root indicate that the rims of the two circles do not touch because the circles are too far apart or one circle lies entirely within the other. The value under the square root is a biquadratic polynomial of d. The four roots of this polynomial are associated with y=0 and with the four values of d where the two circles have only one point in common. The angles in the blue triangle of sides d, r and R are where y is the ordinate of the intersection. Th Document 4::: Optical lens design is the process of designing a lens to meet a set of performance requirements and constraints, including cost and manufacturing limitations. Parameters include surface profile types (spherical, aspheric, holographic, diffractive, etc.), as well as radius of curvature, distance to the next surface, material type and optionally tilt and decenter. The process is computationally intensive, using ray tracing or other techniques to model how the lens affects light that passes through it. Design requirements Performance requirements can include: Optical performance (image quality): This is quantified by various metrics, including encircled energy, modulation transfer function, Strehl ratio, ghost reflection control, and pupil performance (size, location and aberration control); the choice of the image quality metric is application specific. Physical requirements such as weight, static volume, dynamic volume, center of gravity and overall configuration requirements. Environmental requirements: ranges for temperature, pressure, vibration and electromagnetic shielding. Design constraints can include realistic lens element center and edge thicknesses, minimum and maximum air-spaces between lenses, maximum constraints on entrance and exit angles, physically realizable glass index of refraction and dispersion properties. Manufacturing costs and delivery schedules are also a major part of optical design. The price of an optical glass blank of given dimensions can vary by a factor of fifty or more, depending on the size, glass type, index homogeneity quality, and availability, with BK7 usually being the cheapest. Costs for larger and/or thicker optical blanks of a given material, above 100–150 mm, usually increase faster than the physical volume due to increased blank annealing time required to achieve acceptable index homogeneity and internal stress birefringence levels throughout the blank volume. Availability of glass blanks is driven by how frequently The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is another term for a hand lens? A. projecting glass B. traversing glass C. magnifying glass D. seeing glass Answer:
sciq-9726
multiple_choice
What type of tissue transmits nerve impulses throughout the body?
[ "connective", "epithelial", "nervous", "fibrous" ]
C
Relavent Documents: Document 0::: H2.00.04.4.01001: Lymphoid tissue H2.00.05.0.00001: Muscle tissue H2.00.05.1.00001: Smooth muscle tissue H2.00.05.2.00001: Striated muscle tissue H2.00.06.0.00001: Nerve tissue H2.00.06.1.00001: Neuron H2.00.06.2.00001: Synapse H2.00.06.2.00001: Neuroglia h3.01: Bones h3.02: Joints h3.03: Muscles h3.04: Alimentary system h3.05: Respiratory system h3.06: Urinary system h3.07: Genital system h3.08: Document 1::: A motor nerve is a nerve that transmits motor signals from the central nervous system (CNS) to the muscles of the body. This is different from the motor neuron, which includes a cell body and branching of dendrites, while the nerve is made up of a bundle of axons. Motor nerves act as efferent nerves which carry information out from the CNS to muscles, as opposed to afferent nerves (also called sensory nerves), which transfer signals from sensory receptors in the periphery to the CNS. Efferent nerves can also connect to glands or other organs/issues instead of muscles (and so motor nerves are not equivalent to efferent nerves). In addition, there are nerves that serve as both sensory and motor nerves called mixed nerves. Structure and function Motor nerve fibers transduce signals from the CNS to peripheral neurons of proximal muscle tissue. Motor nerve axon terminals innervate skeletal and smooth muscle, as they are heavily involved in muscle control. Motor nerves tend to be rich in acetylcholine vesicles because the motor nerve, a bundle of motor nerve axons that deliver motor signals and signal for movement and motor control. Calcium vesicles reside in the axon terminals of the motor nerve bundles. The high calcium concentration outside of presynaptic motor nerves increases the size of end-plate potentials (EPPs). Protective tissues Within motor nerves, each axon is wrapped by the endoneurium, which is a layer of connective tissue that surrounds the myelin sheath. Bundles of axons are called fascicles, which are wrapped in perineurium. All of the fascicles wrapped in the perineurium are wound together and wrapped by a final layer of connective tissue known as the epineurium. These protective tissues defend nerves from injury, pathogens and help to maintain nerve function. Layers of connective tissue maintain the rate at which nerves conduct action potentials. Spinal cord exit Most motor pathways originate in the motor cortex of the brain. Signals run down th Document 2::: Group A nerve fibers are one of the three classes of nerve fiber as generally classified by Erlanger and Gasser. The other two classes are the group B nerve fibers, and the group C nerve fibers. Group A are heavily myelinated, group B are moderately myelinated, and group C are unmyelinated. The other classification is a sensory grouping that uses the terms type Ia and type Ib, type II, type III, and type IV, sensory fibers. Types There are four subdivisions of group A nerve fibers: alpha (α) Aα; beta (β) Aβ; , gamma (γ) Aγ, and delta (δ) Aδ. These subdivisions have different amounts of myelination and axon thickness and therefore transmit signals at different speeds. Larger diameter axons and more myelin insulation lead to faster signal propagation. Group A nerves are found in both motor and sensory pathways. Different sensory receptors are innervated by different types of nerve fibers. Proprioceptors are innervated by type Ia, Ib and II sensory fibers, mechanoreceptors by type II and III sensory fibers, and nociceptors and thermoreceptors by type III and IV sensory fibers. Type Aα fibers include the type Ia and type Ib sensory fibers of the alternative classification system, and are the fibers from muscle spindle endings and the Golgi tendon, respectively. Type Aβ fibres, and type Aγ, are the type II afferent fibers from stretch receptors. Type Aβ fibres from the skin are mostly dedicated to touch. However a small fraction of these fast fibres, termed "ultrafast nociceptors", also transmit pain. Type Aδ fibers are the afferent fibers of nociceptors. Aδ fibers carry information from peripheral mechanoreceptors and thermoreceptors to the dorsal horn of the spinal cord. This pathway describes the first-order neuron. Aδ fibers serve to receive and transmit information primarily relating to acute pain (sharp, immediate, and relatively short-lasting). This type of pain can result from several classifications of stimulants: temperature-induced, mechanical, and chem Document 3::: Cutaneous innervation refers to an area of the skin which is supplied by a specific cutaneous nerve. Dermatomes are similar; however, a dermatome only specifies the area served by a spinal nerve. In some cases, the dermatome is less specific (when a spinal nerve is the source for more than one cutaneous nerve), and in other cases it is more specific (when a cutaneous nerve is derived from multiple spinal nerves.) Modern texts are in agreement about which areas of the skin are served by which nerves, but there are minor variations in some of the details. The borders designated by the diagrams in the 1918 edition of Gray's Anatomy are similar, but not identical, to those generally accepted today. Importance of the peripheral nervous system The peripheral nervous system (PNS) is divided into the somatic nervous system, the autonomic nervous system, and the enteric nervous system. However, it is the somatic nervous system, responsible for body movement and the reception of external stimuli, which allows one to understand how cutaneous innervation is made possible by the action of specific sensory fibers located on the skin, as well as the distinct pathways they take to the central nervous system. The skin, which is part of the integumentary system, plays an important role in the somatic nervous system because it contains a range of nerve endings that react to heat and cold, touch, pressure, vibration, and tissue injury. Importance of the central nervous system The central nervous system (CNS) works with the peripheral nervous system in cutaneous innervation. The CNS is responsible for processing the information it receives from the cutaneous nerves that detect a given stimulus, and then identifying the kind of sensory inputs which project to a specific region of the primary somatosensory cortex. The role of nerve endings on the surface of the skin Groups of nerve terminals located in the different layers of the skin are categorized depending on whether the skin Document 4::: In neuroanatomy, a plexus (from the Latin term for "braid") is a branching network of vessels or nerves. The vessels may be blood vessels (veins, capillaries) or lymphatic vessels. The nerves are typically axons outside the central nervous system. The standard plural form in English is plexuses. Alternatively, the Latin plural plexūs may be used. Types Nerve plexuses The four primary nerve plexuses are the cervical plexus, brachial plexus, lumbar plexus, and the sacral plexus. Cardiac plexus Celiac plexus Renal plexus Venous plexus Choroid plexus The choroid plexus is a part of the central nervous system in the brain and consists of capillaries, brain ventricles, and ependymal cells. Invertebrates The plexus is the characteristic form of nervous system in the coelenterates and persists with modifications in the flatworms. The nerves of the radially symmetric echinoderms also take this form, where a plexus underlies the ectoderm of these animals and deeper in the body other nerve cells form plexuses of limited extent. See also Cranial nerve Spinal nerve Nerve plexus Brachial nerve List of anatomy mnemonics The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of tissue transmits nerve impulses throughout the body? A. connective B. epithelial C. nervous D. fibrous Answer:
sciq-9449
multiple_choice
Mechanisms for establishing cellular asymmetries include morphogen gradients, localized determinants, and what other type of interactions?
[ "inductive", "conductive", "reductive", "electromagnetic" ]
A
Relavent Documents: Document 0::: Symmetry breaking in biology is the process by which uniformity is broken, or the number of points to view invariance are reduced, to generate a more structured and improbable state. Symmetry breaking is the event where symmetry along a particular axis is lost to establish a polarity. Polarity is a measure for a biological system to distinguish poles along an axis. This measure is important because it is the first step to building complexity. For example, during organismal development, one of the first steps for the embryo is to distinguish its dorsal-ventral axis. The symmetry-breaking event that occurs here will determine which end of this axis will be the ventral side, and which end will be the dorsal side. Once this distinction is made, then all the structures that are located along this axis can develop at the proper location. As an example, during human development, the embryo needs to establish where is ‘back’ and where is ‘front’ before complex structures, such as the spine and lungs, can develop in the right location (where the lungs are placed ‘in front’ of the spine). This relationship between symmetry breaking and complexity was articulated by P.W. Anderson. He speculated that increasing levels of broken symmetry in many-body systems correlates with increasing complexity and functional specialization. In a biological perspective, the more complex an organism is, the higher number of symmetry-breaking events can be found. The importance of symmetry breaking in biology is also reflected in the fact that it's found at all scales. Symmetry breaking can be found at the macromolecular level, at the subcellular level and even at the tissues and organ level. It's also interesting to note that most asymmetry on a higher scale is a reflection of symmetry breaking on a lower scale. Cells first need to establish a polarity through a symmetry-breaking event before tissues and organs themselves can be polar. For example, one model proposes that left-right bo Document 1::: The science of pattern formation deals with the visible, (statistically) orderly outcomes of self-organization and the common principles behind similar patterns in nature. In developmental biology, pattern formation refers to the generation of complex organizations of cell fates in space and time. The role of genes in pattern formation is an aspect of morphogenesis, the creation of diverse anatomies from similar genes, now being explored in the science of evolutionary developmental biology or evo-devo. The mechanisms involved are well seen in the anterior-posterior patterning of embryos from the model organism Drosophila melanogaster (a fruit fly), one of the first organisms to have its morphogenesis studied, and in the eyespots of butterflies, whose development is a variant of the standard (fruit fly) mechanism. Patterns in nature Examples of pattern formation can be found in biology, physics, and science, and can readily be simulated with computer graphics, as described in turn below. Biology Biological patterns such as animal markings, the segmentation of animals, and phyllotaxis are formed in different ways. In developmental biology, pattern formation describes the mechanism by which initially equivalent cells in a developing tissue in an embryo assume complex forms and functions. Embryogenesis, such as of the fruit fly Drosophila, involves coordinated control of cell fates. Pattern formation is genetically controlled, and often involves each cell in a field sensing and responding to its position along a morphogen gradient, followed by short distance cell-to-cell communication through cell signaling pathways to refine the initial pattern. In this context, a field of cells is the group of cells whose fates are affected by responding to the same set positional information cues. This conceptual model was first described as the French flag model in the 1960s. More generally, the morphology of organisms is patterned by the mechanisms of evolutionary development Document 2::: Dexiothetism refers to a reorganisation of a clade's bauplan, with right becoming ventral and left becoming dorsal. The organism would then recruit a new left hand side. Details If a bilaterally symmetrical ancestor were to become affixed by its right hand side, it would occlude all features on that side. When that organism wanted to become secondarily bilaterally symmetrical again, it would be forced to resculpt its new left and right hand sides from the old left hand side. The end result is a bilaterally symmetrical animal, but with its dorsoventral axis rotated a quarter of a turn. Implications Dexiothetism has been implicated in the origin of the unusual embryology of the cephalochordate amphioxus, whereby its gill slits originate on the left hand side and the migrate to the right hand side. In Jefferies' Calcichordate Theory, he supposes that all chordates and their mitrate ancestors are dexiothetic. Document 3::: This is a list of articles on biophysics. 0–9 5-HT3 receptor A ACCN1 ANO1 AP2 adaptor complex Aaron Klug Acid-sensing ion channel Activating function Active transport Adolf Eugen Fick Afterdepolarization Aggregate modulus Aharon Katzir Alan Lloyd Hodgkin Alexander Rich Alexander van Oudenaarden Allan McLeod Cormack Alpha-3 beta-4 nicotinic receptor Alpha-4 beta-2 nicotinic receptor Alpha-7 nicotinic receptor Alpha helix Alwyn Jones (biophysicist) Amoeboid movement Andreas Mershin Andrew Huxley Animal locomotion Animal locomotion on the water surface Anita Goel Antiporter Aquaporin 2 Aquaporin 3 Aquaporin 4 Archibald Hill Ariel Fernandez Arthropod exoskeleton Arthropod leg Avery Gilbert B BEST2 BK channel Bacterial outer membrane Balance (ability) Bat Bat wing development Bert Sakmann Bestrophin 1 Biased random walk (biochemistry) Bioelectrochemical reactor Bioelectrochemistry Biofilm Biological material Biological membrane Biomechanics Biomechanics of sprint running Biophysical Society Biophysics Bird flight Bird migration Bisindolylmaleimide Bleb (cell biology) Boris Pavlovich Belousov Brian Matthews (biochemist) Britton Chance Brush border Bulk movement Document 4::: Differential adhesion hypothesis (DAH) is a hypothesis that explains cellular movement during morphogenesis with thermodynamic principles. In DAH tissues are treated as liquids consisting of mobile cells whose varying degrees of surface adhesion cause them to reorganize spontaneously to minimize their interfacial free energy. Put another way, according to DAH, cells move to be near other cells of similar adhesive strength in order to maximize the bonding strength between cells and produce a more thermodynamically stable structure. In this way the movement of cells during tissue formation, according to DAH, parodies the behavior of a mixture of liquids. Although originally motivated by the problem of understanding cell sorting behavior in vertebrate embryos, DAH has subsequently been applied to explain several other morphogenic phenomena. Background The origins of DAH can be traced back to a 1955 study by Philip L. Townes and Johannes Holtfreter. In this study Townes and Holtfreter placed the three germ layers of an amphibian into an alkaline solution, allowing them to dissociate into individual cells, and mixed these different types of cells together. Cells of different species were used to be able to visually observe and follow their movements. Cells of similar types migrated to their correct location and reaggregated to form germ layers in their developmentally correct positions. This experiment demonstrated that tissue organization can occur independent of the path taken, implying that it is mediated by forces that are persistently present and doesn't arise solely from the chronological sequence of developmental events preceding it. From these results Holtfreter developed his concept of selective affinity, and hypothesized that well-timed changes to selective affinity of cells to one another throughout development guided morphogenesis. Several hypotheses were introduced to explain these results including the "timing hypothesis" and the "differential surface con The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Mechanisms for establishing cellular asymmetries include morphogen gradients, localized determinants, and what other type of interactions? A. inductive B. conductive C. reductive D. electromagnetic Answer:
sciq-1830
multiple_choice
Where is the food stored before being mixed with the chyme?
[ "histopathology of the stomach", "top of the stomach", "fundus of the stomach", "hemispherical of the stomach" ]
C
Relavent Documents: Document 0::: Digestion is the breakdown of large insoluble food compounds into small water-soluble components so that they can be absorbed into the blood plasma. In certain organisms, these smaller substances are absorbed through the small intestine into the blood stream. Digestion is a form of catabolism that is often divided into two processes based on how food is broken down: mechanical and chemical digestion. The term mechanical digestion refers to the physical breakdown of large pieces of food into smaller pieces which can subsequently be accessed by digestive enzymes. Mechanical digestion takes place in the mouth through mastication and in the small intestine through segmentation contractions. In chemical digestion, enzymes break down food into the small compounds that the body can use. In the human digestive system, food enters the mouth and mechanical digestion of the food starts by the action of mastication (chewing), a form of mechanical digestion, and the wetting contact of saliva. Saliva, a liquid secreted by the salivary glands, contains salivary amylase, an enzyme which starts the digestion of starch in the food; the saliva also contains mucus, which lubricates the food, and hydrogen carbonate, which provides the ideal conditions of pH (alkaline) for amylase to work, and electrolytes (Na+, K+, Cl−, HCO−3). About 30% of starch is hydrolyzed into disaccharide in the oral cavity (mouth). After undergoing mastication and starch digestion, the food will be in the form of a small, round slurry mass called a bolus. It will then travel down the esophagus and into the stomach by the action of peristalsis. Gastric juice in the stomach starts protein digestion. Gastric juice mainly contains hydrochloric acid and pepsin. In infants and toddlers, gastric juice also contains rennin to digest milk proteins. As the first two chemicals may damage the stomach wall, mucus and bicarbonates are secreted by the stomach. They provide a slimy layer that acts as a shield against the damag Document 1::: The Joan Mott Prize Lecture is a prize lecture awarded annually by The Physiological Society in honour of Joan Mott. Laureates Laureates of the award have included: - Intestinal absorption of sugars and peptides: from textbook to surprises See also Physiological Society Annual Review Prize Lecture Document 2::: The food vacuole, or digestive vacuole, is an organelle found in simple eukaryotes such as protists. This organelle is essentially a lysosome. During the stage of the symbiont parasites' lifecycle where it resides within a human (or other mammalian) red blood cell, it is the site of haemoglobin digestion and the formation of the large haemozoin crystals that can be seen under a light microscope. See also Protists Eukaryote Amoeba Lysosome Enzymes Euglenids Paramecia Document 3::: The gastrointestinal wall of the gastrointestinal tract is made up of four layers of specialised tissue. From the inner cavity of the gut (the lumen) outwards, these are: Mucosa Submucosa Muscular layer Serosa or adventitia The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the lumen of the tract and comes into direct contact with digested food (chyme). The mucosa itself is made up of three layers: the epithelium, where most digestive, absorptive and secretory processes occur; the lamina propria, a layer of connective tissue, and the muscularis mucosae, a thin layer of smooth muscle. The submucosa contains nerves including the submucous plexus (also called Meissner's plexus), blood vessels and elastic fibres with collagen, that stretches with increased capacity but maintains the shape of the intestine. The muscular layer surrounds the submucosa. It comprises layers of smooth muscle in longitudinal and circular orientation that also helps with continued bowel movements (peristalsis) and the movement of digested material out of and along the gut. In between the two layers of muscle lies the myenteric plexus (also called Auerbach's plexus). The serosa/adventitia are the final layers. These are made up of loose connective tissue and coated in mucus so as to prevent any friction damage from the intestine rubbing against other tissue. The serosa is present if the tissue is within the peritoneum, and the adventitia if the tissue is retroperitoneal. Structure When viewed under the microscope, the gastrointestinal wall has a consistent general form, but with certain parts differing along its course. Mucosa The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the cavity (lumen) of the tract and comes into direct contact with digested food (chyme). The mucosa is made up of three layers: The epithelium is the innermost layer. It is where most digestive, absorptive and secretory processes occur. The lamina propr Document 4::: Bile (from Latin bilis), or gall, is a yellow-green fluid produced by the liver of most vertebrates that aids the digestion of lipids in the small intestine. In humans, bile is primarily composed of water, produced continuously by the liver, and stored and concentrated in the gallbladder. After a human eats, this stored bile is discharged into the first section of their small intestine. Composition In the human liver, bile is composed of 97–98% water, 0.7% bile salts, 0.2% bilirubin, 0.51% fats (cholesterol, fatty acids, and lecithin), and 200 meq/L inorganic salts. The two main pigments of bile are bilirubin, which is yellow, and its oxidised form biliverdin, which is green. When mixed, they are responsible for the brown color of feces. About of bile is produced per day in adult human beings. Function Bile or gall acts to some extent as a surfactant, helping to emulsify the lipids in food. Bile salt anions are hydrophilic on one side and hydrophobic on the other side; consequently, they tend to aggregate around droplets of lipids (triglycerides and phospholipids) to form micelles, with the hydrophobic sides towards the fat and hydrophilic sides facing outwards. The hydrophilic sides are negatively charged, and this charge prevents fat droplets coated with bile from re-aggregating into larger fat particles. Ordinarily, the micelles in the duodenum have a diameter around 1–50 μm in humans. The dispersion of food fat into micelles provides a greatly increased surface area for the action of the enzyme pancreatic lipase, which digests the triglycerides, and is able to reach the fatty core through gaps between the bile salts. A triglyceride is broken down into two fatty acids and a monoglyceride, which are absorbed by the villi on the intestine walls. After being transferred across the intestinal membrane, the fatty acids reform into triglycerides (), before being absorbed into the lymphatic system through lacteals. Without bile salts, most of the lipids in food wou The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Where is the food stored before being mixed with the chyme? A. histopathology of the stomach B. top of the stomach C. fundus of the stomach D. hemispherical of the stomach Answer:
sciq-4605
multiple_choice
The lipids that are connected to the glucose pathways are cholesterol and triglycerides. cholesterol is a lipid that contributes to cell membrane flexibility and is a precursor of this?
[ "steroid hormones", "ammonia hormones", "Thrombopoietin", "Somatostatin" ]
A
Relavent Documents: Document 0::: The lipidome refers to the totality of lipids in cells. Lipids are one of the four major molecular components of biological organisms, along with proteins, sugars and nucleic acids. Lipidome is a term coined in the context of omics in modern biology, within the field of lipidomics. It can be studied using mass spectrometry and bioinformatics as well as traditional lab-based methods. The lipidome of a cell can be subdivided into the membrane-lipidome and mediator-lipidome. The first cell lipidome to be published was that of a mouse macrophage in 2010. The lipidome of the yeast Saccharomyces cerevisiae has been characterised with an estimated 95% coverage; studies of the human lipidome are ongoing. For example, the human plasma lipidome consist of almost 600 distinct molecular species. Research suggests that the lipidome of an individual may be able to indicate cancer risks associated with dietary fats, particularly breast cancer. See also Genome Proteome Glycome Document 1::: Lipolysis is the metabolic pathway through which lipid triglycerides are hydrolyzed into a glycerol and free fatty acids. It is used to mobilize stored energy during fasting or exercise, and usually occurs in fat adipocytes. The most important regulatory hormone in lipolysis is insulin; lipolysis can only occur when insulin action falls to low levels, as occurs during fasting. Other hormones that affect lipolysis include glucagon, epinephrine, norepinephrine, growth hormone, atrial natriuretic peptide, brain natriuretic peptide, and cortisol. Mechanisms In the body, stores of fat are referred to as adipose tissue. In these areas, intracellular triglycerides are stored in cytoplasmic lipid droplets. When lipase enzymes are phosphorylated, they can access lipid droplets and through multiple steps of hydrolysis, breakdown triglycerides into fatty acids and glycerol. Each step of hydrolysis leads to the removal of one fatty acid. The first step and the rate-limiting step of lipolysis is carried out by adipose triglyceride lipase (ATGL). This enzyme catalyzes the hydrolysis of triacylglycerol to diacylglycerol. Subsequently, hormone-sensitive lipase (HSL) catalyzes the hydrolysis of diacylglycerol to monoacylglycerol and monoacylglycerol lipase (MGL) catalyzes the hydrolysis of monoacylglycerol to glycerol. Document 2::: Biological cells which form bonds with a substrate and are at the same time subject to a flow can form long thin membrane cylinders called tethers. These tethers connect the adherent area of the substrate to the main body of the cell. Under physiological conditions, neutrophil tethers can extend to several micrometers. In biochemistry, a tether is a molecule that carries one or two carbon intermediates from one active site to another. They are commonly used in lipid synthesis, gluconeogenesis, and the conversion of pyruvate into Acetyl CoA via PDH complex. Common tethers are lipoate -lysine residue complex associated with dihydrolipoyl transacetylase, which is used for carrying hydroxyethyl from hydroxyethyl TPP. This compound forms Acetyl- CoA, a convergent molecule in metabolic pathways. Another tether is biotin-lysine residue complex associated with pyruvate carboxylase, an enzyme which plays an important role in gluconeogenesis. It is involved in the production of oxaloacetate from pyruvate. One of the biological tethers used in the synthesis of fats is a β- mercaptoethylamine-pantothenate complex associated with an acyl carrier protein. Biochemistry Cell biology Document 3::: Fatty acid metabolism consists of various metabolic processes involving or closely related to fatty acids, a family of molecules classified within the lipid macronutrient category. These processes can mainly be divided into (1) catabolic processes that generate energy and (2) anabolic processes where they serve as building blocks for other compounds. In catabolism, fatty acids are metabolized to produce energy, mainly in the form of adenosine triphosphate (ATP). When compared to other macronutrient classes (carbohydrates and protein), fatty acids yield the most ATP on an energy per gram basis, when they are completely oxidized to CO2 and water by beta oxidation and the citric acid cycle. Fatty acids (mainly in the form of triglycerides) are therefore the foremost storage form of fuel in most animals, and to a lesser extent in plants. In anabolism, intact fatty acids are important precursors to triglycerides, phospholipids, second messengers, hormones and ketone bodies. For example, phospholipids form the phospholipid bilayers out of which all the membranes of the cell are constructed from fatty acids. Phospholipids comprise the plasma membrane and other membranes that enclose all the organelles within the cells, such as the nucleus, the mitochondria, endoplasmic reticulum, and the Golgi apparatus. In another type of anabolism, fatty acids are modified to form other compounds such as second messengers and local hormones. The prostaglandins made from arachidonic acid stored in the cell membrane are probably the best-known of these local hormones. Fatty acid catabolism Fatty acids are stored as triglycerides in the fat depots of adipose tissue. Between meals they are released as follows: Lipolysis, the removal of the fatty acid chains from the glycerol to which they are bound in their storage form as triglycerides (or fats), is carried out by lipases. These lipases are activated by high epinephrine and glucagon levels in the blood (or norepinephrine secreted by s Document 4::: Lipidology is the scientific study of lipids. Lipids are a group of biological macromolecules that have a multitude of functions in the body. Clinical studies on lipid metabolism in the body have led to developments in therapeutic lipidology for disorders such as cardiovascular disease. History Compared to other biomedical fields, lipidology was long-neglected as the handling of oils, smears, and greases was unappealing to scientists and lipid separation was difficult. It was not until 2002 that lipidomics, the study of lipid networks and their interaction with other molecules, appeared in the scientific literature. Attention to the field was bolstered by the introduction of chromatography, spectrometry, and various forms of spectroscopy to the field, allowing lipids to be isolated and analyzed. The field was further popularized following the cytologic application of the electron microscope, which led scientists to find that many metabolic pathways take place within, along, and through the cell membrane - the properties of which are strongly influenced by lipid composition. Clinical lipidology The Framingham Heart Study and other epidemiological studies have found a correlation between lipoproteins and cardiovascular disease (CVD). Lipoproteins are generally a major target of study in lipidology since lipids are transported throughout the body in the form of lipoproteins. A class of lipids known as phospholipids help make up what is known as lipoproteins, and a type of lipoprotein is called high density lipoprotein (HDL). A high concentration of high density lipoproteins-cholesterols (HDL-C) have what is known as a vasoprotective effect on the body, a finding that correlates with an enhanced cardiovascular effect. There is also a correlation between those with diseases such as chronic kidney disease, coronary artery disease, or diabetes mellitus and the possibility of low vasoprotective effect from HDL. Another factor of CVD that is often overlooked involves the The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The lipids that are connected to the glucose pathways are cholesterol and triglycerides. cholesterol is a lipid that contributes to cell membrane flexibility and is a precursor of this? A. steroid hormones B. ammonia hormones C. Thrombopoietin D. Somatostatin Answer:
sciq-2118
multiple_choice
What is the part of a plant whose primary role is to collect sunlight and make food by photosynthesis?
[ "roots", "leaves", "seeds", "stems" ]
B
Relavent Documents: Document 0::: Plants are the eukaryotes that form the kingdom Plantae; they are predominantly photosynthetic. This means that they obtain their energy from sunlight, using chloroplasts derived from endosymbiosis with cyanobacteria to produce sugars from carbon dioxide and water, using the green pigment chlorophyll. Exceptions are parasitic plants that have lost the genes for chlorophyll and photosynthesis, and obtain their energy from other plants or fungi. Historically, as in Aristotle's biology, the plant kingdom encompassed all living things that were not animals, and included algae and fungi. Definitions have narrowed since then; current definitions exclude the fungi and some of the algae. By the definition used in this article, plants form the clade Viridiplantae (green plants), which consists of the green algae and the embryophytes or land plants (hornworts, liverworts, mosses, lycophytes, ferns, conifers and other gymnosperms, and flowering plants). A definition based on genomes includes the Viridiplantae, along with the red algae and the glaucophytes, in the clade Archaeplastida. There are about 380,000 known species of plants, of which the majority, some 260,000, produce seeds. They range in size from single cells to the tallest trees. Green plants provide a substantial proportion of the world's molecular oxygen; the sugars they create supply the energy for most of Earth's ecosystems; other organisms, including animals, either consume plants directly or rely on organisms which do so. Grain, fruit, and vegetables are basic human foods and have been domesticated for millennia. People use plants for many purposes, such as building materials, ornaments, writing materials, and, in great variety, for medicines. The scientific study of plants is known as botany, a branch of biology. Definition Taxonomic history All living things were traditionally placed into one of two groups, plants and animals. This classification dates from Aristotle (384–322 BC), who distinguished d Document 1::: A stem is one of two main structural axes of a vascular plant, the other being the root. It supports leaves, flowers and fruits, transports water and dissolved substances between the roots and the shoots in the xylem and phloem, photosynthesis takes place here, stores nutrients, and produces new living tissue. The stem can also be called halm or haulm or culms. The stem is normally divided into nodes and internodes: The nodes are the points of attachment for leaves and can hold one or more leaves. There are sometimes axillary buds between the stem and leaf which can grow into branches (with leaves, conifer cones, or flowers). Adventitious roots may also be produced from the nodes. Vines may produce tendrils from nodes. The internodes distance one node from another. The term "shoots" is often confused with "stems"; "shoots" generally refers to new fresh plant growth, including both stems and other structures like leaves or flowers. In most plants, stems are located above the soil surface, but some plants have underground stems. Stems have several main functions: Support for and the elevation of leaves, flowers, and fruits. The stems keep the leaves in the light and provide a place for the plant to keep its flowers and fruits. Transport of fluids between the roots and the shoots in the xylem and phloem. Storage of nutrients. Production of new living tissue. The normal lifespan of plant cells is one to three years. Stems have cells called meristems that annually generate new living tissue. Photosynthesis. Stems have two pipe-like tissues called xylem and phloem. The xylem tissue arises from the cell facing inside and transports water by the action of transpiration pull, capillary action, and root pressure. The phloem tissue arises from the cell facing outside and consists of sieve tubes and their companion cells. The function of phloem tissue is to distribute food from photosynthetic tissue to other tissues. The two tissues are separated by cambium, a tis Document 2::: Biomass partitioning is the process by which plants divide their energy among their leaves, stems, roots, and reproductive parts. These four main components of the plant have important morphological roles: leaves take in CO2 and energy from the sun to create carbon compounds, stems grow above competitors to reach sunlight, roots absorb water and mineral nutrients from the soil while anchoring the plant, and reproductive parts facilitate the continuation of species. Plants partition biomass in response to limits or excesses in resources like sunlight, carbon dioxide, mineral nutrients, and water and growth is regulated by a constant balance between the partitioning of biomass between plant parts. An equilibrium between root and shoot growth occurs because roots need carbon compounds from photosynthesis in the shoot and shoots need nitrogen absorbed from the soil by roots. Allocation of biomass is put towards the limit to growth; a limit below ground will focus biomass to the roots and a limit above ground will favor more growth in the shoot. Plants photosynthesize to create carbon compounds for growth and energy storage. Sugars created through photosynthesis are then transported by phloem using the pressure flow system and are used for growth or stored for later use. Biomass partitioning causes this sugar to be divided in a way that maximizes growth, provides the most fitness, and allows for successful reproduction. Plant hormones play a large part in biomass partitioning since they affect differentiation and growth of cells and tissues by changing the expression of genes and altering morphology. By responding to environmental stimuli and partitioning biomass accordingly, plants are better able to take in resources from their environmental and maximize growth. Abiotic Factors of Partitioning It is important for plants to be able to balance their absorption and utilization of available resources and they adjust their growth in order to acquire more of the scarce, g Document 3::: {{DISPLAYTITLE: C3 carbon fixation}} carbon fixation is the most common of three metabolic pathways for carbon fixation in photosynthesis, the other two being and CAM. This process converts carbon dioxide and ribulose bisphosphate (RuBP, a 5-carbon sugar) into two molecules of 3-phosphoglycerate through the following reaction: CO2 + H2O + RuBP → (2) 3-phosphoglycerate This reaction was first discovered by Melvin Calvin, Andrew Benson and James Bassham in 1950. C3 carbon fixation occurs in all plants as the first step of the Calvin–Benson cycle. (In and CAM plants, carbon dioxide is drawn out of malate and into this reaction rather than directly from the air.) Plants that survive solely on fixation ( plants) tend to thrive in areas where sunlight intensity is moderate, temperatures are moderate, carbon dioxide concentrations are around 200 ppm or higher, and groundwater is plentiful. The plants, originating during Mesozoic and Paleozoic eras, predate the plants and still represent approximately 95% of Earth's plant biomass, including important food crops such as rice, wheat, soybeans and barley. plants cannot grow in very hot areas at today's atmospheric CO2 level (significantly depleted during hundreds of millions of years from above 5000 ppm) because RuBisCO incorporates more oxygen into RuBP as temperatures increase. This leads to photorespiration (also known as the oxidative photosynthetic carbon cycle, or C2 photosynthesis), which leads to a net loss of carbon and nitrogen from the plant and can therefore limit growth. plants lose up to 97% of the water taken up through their roots by transpiration. In dry areas, plants shut their stomata to reduce water loss, but this stops from entering the leaves and therefore reduces the concentration of in the leaves. This lowers the :O2 ratio and therefore also increases photorespiration. and CAM plants have adaptations that allow them to survive in hot and dry areas, and they can therefore out-compete Document 4::: Photosynthesis systems are electronic scientific instruments designed for non-destructive measurement of photosynthetic rates in the field. Photosynthesis systems are commonly used in agronomic and environmental research, as well as studies of the global carbon cycle. How photosynthesis systems function Photosynthesis systems function by measuring gas exchange of leaves. Atmospheric carbon dioxide is taken up by leaves in the process of photosynthesis, where is used to generate sugars in a molecular pathway known as the Calvin cycle. This draw-down of induces more atmospheric to diffuse through stomata into the air spaces of the leaf. While stoma are open, water vapor can easily diffuse out of plant tissues, a process known as transpiration. It is this exchange of and water vapor that is measured as a proxy of photosynthetic rate. The basic components of a photosynthetic system are the leaf chamber, infrared gas analyzer (IRGA), batteries and a console with keyboard, display and memory. Modern 'open system' photosynthesis systems also incorporate miniature disposable compressed gas cylinder and gas supply pipes. This is because external air has natural fluctuations in and water vapor content, which can introduce measurement noise. Modern 'open system' photosynthesis systems remove the and water vapour by passage over soda lime and Drierite, then add at a controlled rate to give a stable concentration. Some systems are also equipped with temperature control and a removable light unit, so the effect of these environmental variables can also be measured. The leaf to be analysed is placed in the leaf chamber. The concentrations is measured by the infrared gas analyzer. The IRGA shines infrared light through a gas sample onto a detector. in the sample absorbs energy, so the reduction in the level of energy that reaches the detector indicates the concentration. Modern IRGAs take account of the fact that absorbs energy at similar wavelengths as . Modern IRG The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the part of a plant whose primary role is to collect sunlight and make food by photosynthesis? A. roots B. leaves C. seeds D. stems Answer:
sciq-2042
multiple_choice
Any structure inside a cell that is enclosed by a membrane is called?
[ "an article", "an organelle", "an enclave", "a particle" ]
B
Relavent Documents: Document 0::: Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, double stranded macromolecule that carries the hereditary information of the cell and found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence. Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism. Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry. See also Cell (biology) Cell biology Biomolecule Organelle Tissue (biology) External links https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm Document 1::: Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure. General characteristics There are two types of cells: prokaryotes and eukaryotes. Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles. Prokaryotes Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement). Eukaryotes Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str Document 2::: The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'. Cells can acquire specified function and carry out various tasks within the cell such as replication, DNA repair, protein synthesis, and motility. Cells are capable of specialization and mobility within the cell. Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms. The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology. Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago. Discovery With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i Document 3::: H2.00.04.4.01001: Lymphoid tissue H2.00.05.0.00001: Muscle tissue H2.00.05.1.00001: Smooth muscle tissue H2.00.05.2.00001: Striated muscle tissue H2.00.06.0.00001: Nerve tissue H2.00.06.1.00001: Neuron H2.00.06.2.00001: Synapse H2.00.06.2.00001: Neuroglia h3.01: Bones h3.02: Joints h3.03: Muscles h3.04: Alimentary system h3.05: Respiratory system h3.06: Urinary system h3.07: Genital system h3.08: Document 4::: Cellular compartments in cell biology comprise all of the closed parts within the cytosol of a eukaryotic cell, usually surrounded by a single or double lipid layer membrane. These compartments are often, but not always, defined as membrane-bound organelles. The formation of cellular compartments is called compartmentalization. Both organelles, the mitochondria and chloroplasts (in photosynthetic organisms), are compartments that are believed to be of endosymbiotic origin. Other compartments such as peroxisomes, lysosomes, the endoplasmic reticulum, the cell nucleus or the Golgi apparatus are not of endosymbiotic origin. Smaller elements like vesicles, and sometimes even microtubules can also be counted as compartments. It was thought that compartmentalization is not found in prokaryotic cells., but the discovery of carboxysomes and many other metabolosomes revealed that prokaryotic cells are capable of making compartmentalized structures, albeit these are in most cases not surrounded by a lipid bilayer, but of pure proteinaceous built. Types In general there are 4 main cellular compartments, they are: The nuclear compartment comprising the nucleus The intercisternal space which comprises the space between the membranes of the endoplasmic reticulum (which is continuous with the nuclear envelope) Organelles (the mitochondrion in all eukaryotes and the plastid in phototrophic eukaryotes) The cytosol Function Compartments have three main roles. One is to establish physical boundaries for biological processes that enables the cell to carry out different metabolic activities at the same time. This may include keeping certain biomolecules within a region, or keeping other molecules outside. Within the membrane-bound compartments, different intracellular pH, different enzyme systems, and other differences are isolated from other organelles and cytosol. With mitochondria, the cytosol has an oxidizing environment which converts NADH to NAD+. With these cases, the The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Any structure inside a cell that is enclosed by a membrane is called? A. an article B. an organelle C. an enclave D. a particle Answer:
sciq-5666
multiple_choice
Receptor clustering happens when what binds to the receptors?
[ "lipid molecules", "carbohydrates", "enzymes", "fas molecules" ]
D
Relavent Documents: Document 0::: In biochemistry and pharmacology, receptors are chemical structures, composed of protein, that receive and transduce signals that may be integrated into biological systems. These signals are typically chemical messengers which bind to a receptor and produce physiological responses such as change in the electrical activity of a cell. For example, GABA, an inhibitory neurotransmitter inhibits electrical activity of neurons by binding to GABA receptors. There are three main ways the action of the receptor can be classified: relay of signal, amplification, or integration. Relaying sends the signal onward, amplification increases the effect of a single ligand, and integration allows the signal to be incorporated into another biochemical pathway. Receptor proteins can be classified by their location. Cell surface receptors also known as transmembrane receptors, include ligand-gated ion channels, G protein-coupled receptors, and enzyme-linked hormone receptors. Intracellular receptors are those found inside the cell, and include cytoplasmic receptors and nuclear receptors. A molecule that binds to a receptor is called a ligand and can be a protein, peptide (short protein), or another small molecule, such as a neurotransmitter, hormone, pharmaceutical drug, toxin, calcium ion or parts of the outside of a virus or microbe. An endogenously produced substance that binds to a particular receptor is referred to as its endogenous ligand. E.g. the endogenous ligand for the nicotinic acetylcholine receptor is acetylcholine, but it can also be activated by nicotine and blocked by curare. Receptors of a particular type are linked to specific cellular biochemical pathways that correspond to the signal. While numerous receptors are found in most cells, each receptor will only bind with ligands of a particular structure. This has been analogously compared to how locks will only accept specifically shaped keys. When a ligand binds to a corresponding receptor, it activates or inhibits th Document 1::: The adequate stimulus is a property of a sensory receptor that determines the type of energy to which a sensory receptor responds with the initiation of sensory transduction. Sensory receptor are specialized to respond to certain types of stimuli. The adequate stimulus is the amount and type of energy required to stimulate a specific sensory organ. Many of the sensory stimuli are categorized by the mechanics by which they are able to function and their purpose. Sensory receptors that are present within the body typically are made to respond to a single stimulus. Sensory receptors are present all throughout the body, and they take a certain amount of a stimulus to trigger these receptors. The use of these sensory receptors allows the brain to interpret the signals to the body which allow a person to respond to the stimulus if the stimulus reaches a minimum threshold to signal the brain. The sensory receptors will activate the sensory transduction system which will in turn send an electrical or chemical stimulus to a cell, and the cell will then respond with electrical signals to the brain which were produced from action potentials. The minuscule signals, which result from the stimuli, enter the cells must be amplified and turned into an sufficient signal that will be sent to the brain. A sensory receptor's adequate stimulus is determined by the signal transduction mechanisms and ion channels incorporated in the sensory receptor's plasma membrane. Adequate stimulus are often used in relation with sensory thresholds and absolute thresholds to describe the smallest amount of a stimulus needed to activate a feeling within the sensory organ. Categorizations of receptors They are categorized through the stimuli to which they respond. Adequate stimulus are also often categorized based on their purpose and locations within the body. The following are the categorizations of receptors within the body: Visual – These are found in the visual organs of species and are respon Document 2::: Scavenger receptors in endocrinology are inactive membrane receptors which bind certain hormones such as IGF-1 and do not transmit an intracellular response. Document 3::: A heteromer is something that consists of different parts; the antonym of homomeric. Examples are: Biology Spinal neurons that pass over to the opposite side of the spinal cord. A protein complex that contains two or more different polypeptides. Pharmacology Ligand-gated ion channels such as the nicotinic acetylcholine receptor and GABAA receptor are composed of five subunits arranged around a central pore that opens to allow ions to pass through. There are many different subunits available that can come together in a wide variety of combinations to form different subtypes of the ion channel. Sometimes the channel can be made from only one type of subunit, such as the α7 nicotinic receptor, which is made up from five α7 subunits, and so is a homomer rather than a heteromer, but more commonly several different types of subunit will come together to form a heteromeric complex (e.g., the α4β2 nicotinic receptor, which is made up from two α4 subunits and three β2 subunits). Because the different ion channel subtypes are expressed to different extents in different tissues, this allows selective modulation of ion transport and means that a single neurotransmitter can produce varying effects depending on where in the body it is released. G protein-coupled receptors are composed of seven membrane-spanning alpha-helical segments that are usually linked together into a single folded chain to form the receptor complex. However, research has demonstrated that a number of GPCRs are also capable of forming heteromers from a combination of two or more individual GPCR subunits under some circumstances, especially where several different GPCRs are densely expressed in the same neuron. Such heteromers may be between receptors from the same family (e.g., adenosine A1/A2A heteromers and dopamine D1/D2 and D1/D3 heteromers) or between entirely unrelated receptors such as CB1/A2A, glutamate mGluR5 / adenosine A2A heteromers, cannabinoid CB1 / dopamine D2 heteromers, and even CB1/A2 Document 4::: A heteroreceptor is a receptor regulating the synthesis and/or the release of mediators other than its own ligand. Heteroreceptors respond to neurotransmitters, neuromodulators, or neurohormones released from adjacent neurons or cells; they are opposite to autoreceptors, which are sensitive only to neurotransmitters or hormones released by the cell in whose wall they are embedded. Examples Norepinephrine can influence the release of acetylcholine from parasympathetic neurons by acting on α2 adrenergic (α2A, α2B, and α2C) heteroreceptors. Acetylcholine can influence the release of norepinephrine from sympathetic neurons by acting on muscarinic-2 and muscarinic-4 heteroreceptors. CB1 negatively modulates the release of GABA and glutamate, playing a crucial role in maintaining a homeostasis between excitatory and inhibitory transmission. Glutamate released from an excitatory neuron escapes from the synaptic cleft and preferentially affects mGluR III receptors on the presynaptic terminals of interneurons. Glutamate spillover leads to inhibition of GABA release, modulating GABAergic transmission. See also Autoreceptor The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Receptor clustering happens when what binds to the receptors? A. lipid molecules B. carbohydrates C. enzymes D. fas molecules Answer:
sciq-757
multiple_choice
A doorknob and a ferris wheel are examples of what type of simple machine?
[ "ball and axle", "wheel and axle", "ball and socket", "lever and pulley" ]
B
Relavent Documents: Document 0::: Machine element or hardware refers to an elementary component of a machine. These elements consist of three basic types: structural components such as frame members, bearings, axles, splines, fasteners, seals, and lubricants, mechanisms that control movement in various ways such as gear trains, belt or chain drives, linkages, cam and follower systems, including brakes and clutches, and control components such as buttons, switches, indicators, sensors, actuators and computer controllers. While generally not considered to be a machine element, the shape, texture and color of covers are an important part of a machine that provide a styling and operational interface between the mechanical components of a machine and its users. Machine elements are basic mechanical parts and features used as the building blocks of most machines. Most are standardized to common sizes, but customs are also common for specialized applications. Machine elements may be features of a part (such as screw threads or integral plain bearings) or they may be discrete parts in and of themselves such as wheels, axles, pulleys, rolling-element bearings, or gears. All of the simple machines may be described as machine elements, and many machine elements incorporate concepts of one or more simple machines. For example, a leadscrew incorporates a screw thread, which is an inclined plane wrapped around a cylinder. Many mechanical design, invention, and engineering tasks involve a knowledge of various machine elements and an intelligent and creative combining of these elements into a component or assembly that fills a need (serves an application). Structural elements Beams, Struts, Bearings, Fasteners Keys, Splines, Cotter pin, Seals Machine guardings Mechanical elements Engine, Electric motor, Actuator, Shafts, Couplings Belt, Chain, Cable drives, Gear train, Clutch, Brake, Flywheel, Cam, follower systems, Linkage, Simple machine Types Shafts Document 1::: Types of mill include the following: Manufacturing facilities Categorized by power source Watermill, a mill powered by moving water Windmill, a mill powered by moving air (wind) Tide mill, a water mill that uses the tide's movement Treadmill or treadwheel, a mill powered by human or animal movement Horse mill, a mill powered by horses' movement Categorized by not being a fixed building Ship mill, a water mill that floats on the river or bay whose current or tide provides the water movement Field mill (carriage), a portable mill Categorized by what is made and/or acted on Materials recovery facility, processes raw garbage and turns it into purified commodities like aluminum, PET, and cardboard by processing and crushing (compressing and baling) it. Rice mill, processes paddy to rice Bark mill, produces tanbark for tanneries Coffee mill Colloid mill Cider mill, crushes apples to give cider Drainage mills such as the Clayrack Drainage Mill are used to pump water from low-lying land. Flotation mill, in mining, uses grinding and froth flotation to concentrate ores using differences in materials' hydrophobicity Gristmill, a grain mill (flour mill) Herb grinder Oil mill, see expeller pressing, extrusion Ore mill, for crushing and processing ore Paper mill Pellet mill Powder mill, produces gunpowder Puppy mill, a breeding facility that produces puppies on a large scale, where the welfare of the dogs is jeopardized for profits Rock crusher Sugar cane mill Sawmill, a lumber mill Millwork starch mill Steel mill sugar mill (also called a sugar refinery), processes sugar beets or sugar cane into various finished products Textile mills for textile manufacturing: Cotton mill Flax mill, for flax Silk mill, for silk woollen mill, see textile manufacturing huller (also called a rice mill, or rice husker) is used to hull rice Wire mill, for wire drawing Other types See :Category:Industrial buildings and structures Industrial tools for size re Document 2::: A simple machine that exhibits mechanical advantage is called a mechanical advantage device - e.g.: Lever: The beam shown is in static equilibrium around the fulcrum. This is due to the moment created by vector force "A" counterclockwise (moment A*a) being in equilibrium with the moment created by vector force "B" clockwise (moment B*b). The relatively low vector force "B" is translated in a relatively high vector force "A". The force is thus increased in the ratio of the forces A : B, which is equal to the ratio of the distances to the fulcrum b : a. This ratio is called the mechanical advantage. This idealised situation does not take into account friction. Wheel and axle motion (e.g. screwdrivers, doorknobs): A wheel is essentially a lever with one arm the distance between the axle and the outer point of the wheel, and the other the radius of the axle. Typically this is a fairly large difference, leading to a proportionately large mechanical advantage. This allows even simple wheels with wooden axles running in wooden blocks to still turn freely, because their friction is overwhelmed by the rotational force of the wheel multiplied by the mechanical advantage. A block and tackle of multiple pulleys creates mechanical advantage, by having the flexible material looped over several pulleys in turn. Adding more loops and pulleys increases the mechanical advantage. Screw: A screw is essentially an inclined plane wrapped around a cylinder. The run over the rise of this inclined plane is the mechanical advantage of a screw. Pulleys Consider lifting a weight with rope and pulleys. A rope looped through a pulley attached to a fixed spot, e.g. a barn roof rafter, and attached to the weight is called a single pulley. It has a mechanical advantage (MA) = 1 (assuming frictionless bearings in the pulley), moving no mechanical advantage (or disadvantage) however advantageous the change in direction may be. A single movable pulley has an MA of 2 (assuming frictionless be Document 3::: Montessori sensorial materials are materials used in the Montessori classroom to help a child develop and refine their five senses. Use of these materials constitutes the next level of difficulty after those of practical life. Like many other materials in the Montessori classroom, sensorial materials have what is called "control of error", meaning that the child not only works with the material, but has a way to check their work rather than seeking out the teacher if they have a question on whether or not they did it right. This is done to help promote independence and problem solving on the part of the child. Cylinder blocks The cylinder blocks are ten wooden cylinders of various dimensions that can be removed from a fitted container block using a knobbed handle. To remove the cylinders, the child tends to naturally use the same three-finger grip used to hold pencils. Several activities can be done with the cylinder blocks. The main activity involves removing the cylinders from the block and replacing them again in the spot that one got them from. The control of error is constituted in the child's inability to replace a cylinder in the wrong hole. Pink tower The pink tower has ten pink cubes of different sizes, from 1 centimeter up to 10 cm in increments of 1 cm. The work is designed to provide the child with a concept of "big" and "small." The child starts with the largest cube and puts the second-largest cube on top of it. This continues until all ten cubes are stacked on top of each other. The control of error is visual. The child sees the cubes are in the wrong order and knows that they should fix them. The successive dimensions of each cube are such that if the cubes are stacked flush with a corner, the smallest cube may be fit squarely on the ledge of each level. Broad stair The broad stair (also called Brown Stair) is designed to teach the concepts of "thick" and "thin". It comprises ten sets of wooden prisms with a natural or brown stain finish. Document 4::: Mechanical engineering is a discipline centered around the concept of using force multipliers, moving components, and machines. It utilizes knowledge of mathematics, physics, materials sciences, and engineering technologies. It is one of the oldest and broadest of the engineering disciplines. Dawn of civilization to early middle ages Engineering arose in early civilization as a general discipline for the creation of large scale structures such as irrigation, architecture, and military projects. Advances in food production through irrigation allowed a portion of the population to become specialists in Ancient Babylon. All six of the classic simple machines were known in the ancient Near East. The wedge and the inclined plane (ramp) were known since prehistoric times. The wheel, along with the wheel and axle mechanism, was invented in Mesopotamia (modern Iraq) during the 5th millennium BC. The lever mechanism first appeared around 5,000 years ago in the Near East, where it was used in a simple balance scale, and to move large objects in ancient Egyptian technology. The lever was also used in the shadoof water-lifting device, the first crane machine, which appeared in Mesopotamia circa 3000 BC, and then in ancient Egyptian technology circa 2000 BC. The earliest evidence of pulleys date back to Mesopotamia in the early 2nd millennium BC, and ancient Egypt during the Twelfth Dynasty (1991-1802 BC). The screw, the last of the simple machines to be invented, first appeared in Mesopotamia during the Neo-Assyrian period (911-609) BC. The Egyptian pyramids were built using three of the six simple machines, the inclined plane, the wedge, and the lever, to create structures like the Great Pyramid of Giza. The Assyrians were notable in their use of metallurgy and incorporation of iron weapons. Many of their advancements were in military equipment. They were not the first to develop them, but did make advancements on the wheel and the chariot. They made use of pivot-able axl The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. A doorknob and a ferris wheel are examples of what type of simple machine? A. ball and axle B. wheel and axle C. ball and socket D. lever and pulley Answer:
sciq-1380
multiple_choice
Where does most of the mass for an atom reside?
[ "protons", "nucleus", "neutrons", "electrons" ]
B
Relavent Documents: Document 0::: The subatomic scale is the domain of physical size that encompasses objects smaller than an atom. It is the scale at which the atomic constituents, such as the nucleus containing protons and neutrons, and the electrons in their orbitals, become apparent. The subatomic scale includes the many thousands of times smaller subnuclear scale, which is the scale of physical size at which constituents of the protons and neutrons - particularly quarks - become apparent. See also Astronomical scale the opposite end of the spectrum Subatomic particles Document 1::: In particle physics, the electron mass (symbol: ) is the mass of a stationary electron, also known as the invariant mass of the electron. It is one of the fundamental constants of physics. It has a value of about or about , which has an energy-equivalent of about or about Terminology The term "rest mass" is sometimes used because in special relativity the mass of an object can be said to increase in a frame of reference that is moving relative to that object (or if the object is moving in a given frame of reference). Most practical measurements are carried out on moving electrons. If the electron is moving at a relativistic velocity, any measurement must use the correct expression for mass. Such correction becomes substantial for electrons accelerated by voltages of over . For example, the relativistic expression for the total energy, , of an electron moving at speed is where is the speed of light; is the Lorentz factor, is the "rest mass", or more simply just the "mass" of the electron. This quantity is frame invariant and velocity independent. However, some texts group the Lorentz factor with the mass factor to define a new quantity called the relativistic mass, . Determination Since the electron mass determines a number of observed effects in atomic physics, there are potentially many ways to determine its mass from an experiment, if the values of other physical constants are already considered known. Historically, the mass of the electron was determined directly from combining two measurements. The mass-to-charge ratio of the electron was first estimated by Arthur Schuster in 1890 by measuring the deflection of "cathode rays" due to a known magnetic field in a cathode ray tube. Seven years later J. J. Thomson showed that cathode rays consist of streams of particles, to be called electrons, and made more precise measurements of their mass-to-charge ratio again using a cathode ray tube. The second measurement was of the charge of the electron. T Document 2::: The atomic mass (ma or m) is the mass of an atom. Although the SI unit of mass is the kilogram (symbol: kg), atomic mass is often expressed in the non-SI unit dalton (symbol: Da) – equivalently, unified atomic mass unit (u). 1 Da is defined as of the mass of a free carbon-12 atom at rest in its ground state. The protons and neutrons of the nucleus account for nearly all of the total mass of atoms, with the electrons and nuclear binding energy making minor contributions. Thus, the numeric value of the atomic mass when expressed in daltons has nearly the same value as the mass number. Conversion between mass in kilograms and mass in daltons can be done using the atomic mass constant . The formula used for conversion is: where is the molar mass constant, is the Avogadro constant, and is the experimentally determined molar mass of carbon-12. The relative isotopic mass (see section below) can be obtained by dividing the atomic mass ma of an isotope by the atomic mass constant mu yielding a dimensionless value. Thus, the atomic mass of a carbon-12 atom is by definition, but the relative isotopic mass of a carbon-12 atom is simply 12. The sum of relative isotopic masses of all atoms in a molecule is the relative molecular mass. The atomic mass of an isotope and the relative isotopic mass refers to a certain specific isotope of an element. Because substances are usually not isotopically pure, it is convenient to use the elemental atomic mass which is the average (mean) atomic mass of an element, weighted by the abundance of the isotopes. The dimensionless (standard) atomic weight is the weighted mean relative isotopic mass of a (typical naturally occurring) mixture of isotopes. The atomic mass of atoms, ions, or atomic nuclei is slightly less than the sum of the masses of their constituent protons, neutrons, and electrons, due to binding energy mass loss (per ). Relative isotopic mass Relative isotopic mass (a property of a single atom) is not to be confused w Document 3::: The electric dipole moment is a measure of the separation of positive and negative electrical charges within a system, that is, a measure of the system's overall polarity. The SI unit for electric dipole moment is the coulomb-meter (C⋅m). The debye (D) is another unit of measurement used in atomic physics and chemistry. Theoretically, an electric dipole is defined by the first-order term of the multipole expansion; it consists of two equal and opposite charges that are infinitesimally close together, although real dipoles have separated charge. Elementary definition Often in physics the dimensions of a massive object can be ignored and can be treated as a pointlike object, i.e. a point particle. Point particles with electric charge are referred to as point charges. Two point charges, one with charge and the other one with charge separated by a distance , constitute an electric dipole (a simple case of an electric multipole). For this case, the electric dipole moment has a magnitude and is directed from the negative charge to the positive one. Some authors may split in half and use since this quantity is the distance between either charge and the center of the dipole, leading to a factor of two in the definition. A stronger mathematical definition is to use vector algebra, since a quantity with magnitude and direction, like the dipole moment of two point charges, can be expressed in vector form where is the displacement vector pointing from the negative charge to the positive charge. The electric dipole moment vector also points from the negative charge to the positive charge. With this definition the dipole direction tends to align itself with an external electric field (and note that the electric flux lines produced by the charges of the dipole itself, which point from positive charge to negative charge then tend to oppose the flux lines of the external field). Note that this sign convention is used in physics, while the opposite sign convention for th Document 4::: Nuclear density is the density of the nucleus of an atom. For heavy nuclei, it is close to the nuclear saturation density nucleons/fm3, which minimizes the energy density of an infinite nuclear matter. The nuclear saturation mass density is thus kg/m3, where mu is the atomic mass constant. The descriptive term nuclear density is also applied to situations where similarly high densities occur, such as within neutron stars. Evaluation The nuclear density of a typical nucleus can be approximately calculated from the size of the nucleus, which itself can be approximated based on the number of protons and neutrons in it. The radius of a typical nucleus, in terms of number of nucleons, is where is the mass number and is 1.25 fm, with typical deviations of up to 0.2 fm from this value. The number density of the nucleus is thus: The density for any typical nucleus, in terms of mass number, is thus constant, not dependent on A or R, theoretically: The experimentally determined value for the nuclear saturation density is The mass density ρ is the product of the number density n by the particle's mass. The calculated mass density, using a nucleon mass of mn=1.67×10−27 kg, is thus: (using the theoretical estimate) or (using the experimental value). Applications and extensions The components of an atom and of a nucleus have varying densities. The proton is not a fundamental particle, being composed of quark–gluon matter. Its size is approximately 10−15 meters and its density 1018 kg/m3. The descriptive term nuclear density is also applied to situations where similarly high densities occur, such as within neutron stars. Using deep inelastic scattering, it has been estimated that the "size" of an electron, if it is not a point particle, must be less than 10−17 meters. This would correspond to a density of roughly 1021 kg/m3. There are possibilities for still-higher densities when it comes to quark matter. In the near future, the highest experimentally measur The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Where does most of the mass for an atom reside? A. protons B. nucleus C. neutrons D. electrons Answer:
sciq-8079
multiple_choice
The carbon cycle can be thought of in terms of two interdependent cycles - cellular respiration and what else?
[ "glycolysis", "photosynthesis", "pollination", "spermatogenesis" ]
B
Relavent Documents: Document 0::: The carbon cycle is that part of the biogeochemical cycle by which carbon is exchanged among the biosphere, pedosphere, geosphere, hydrosphere, and atmosphere of Earth. Other major biogeochemical cycles include the nitrogen cycle and the water cycle. Carbon is the main component of biological compounds as well as a major component of many minerals such as limestone. The carbon cycle comprises a sequence of events that are key to making Earth capable of sustaining life. It describes the movement of carbon as it is recycled and reused throughout the biosphere, as well as long-term processes of carbon sequestration (storage) to and release from carbon sinks. To describe the dynamics of the carbon cycle, a distinction can be made between the fast and slow carbon cycle. The fast carbon cycle is also referred to as the biological carbon cycle. Fast carbon cycles can complete within years, moving substances from atmosphere to biosphere, then back to the atmosphere. Slow or geological cycles (also called deep carbon cycle) can take millions of years to complete, moving substances through the Earth's crust between rocks, soil, ocean and atmosphere. Human activities have disturbed the fast carbon cycle for many centuries by modifying land use, and moreover with the recent industrial-scale mining of fossil carbon (coal, petroleum, and gas extraction, and cement manufacture) from the geosphere. Carbon dioxide in the atmosphere had increased nearly 52% over pre-industrial levels by 2020, forcing greater atmospheric and Earth surface heating by the Sun. The increased carbon dioxide has also caused a reduction in the ocean's pH value and is fundamentally altering marine chemistry. The majority of fossil carbon has been extracted over just the past half century, and rates continue to rise rapidly, contributing to human-caused climate change. Main compartments The carbon cycle was first described by Antoine Lavoisier and Joseph Priestley, and popularised by Humphry Davy. The g Document 1::: The molecules that an organism uses as its carbon source for generating biomass are referred to as "carbon sources" in biology. It is possible for organic or inorganic sources of carbon. Heterotrophs must use organic molecules as both are a source of carbon and energy, in contrast to autotrophs, which can use inorganic materials as both a source of carbon and an abiotic source of energy, such as, for instance, inorganic chemical energy or light (photoautotrophs) (chemolithotrophs). The carbon cycle, which begins with a carbon source that is inorganic, such as carbon dioxide and progresses through the carbon fixation process, includes the biological use of carbon as one of its components.[1] Types of organism by carbon source Heterotrophs Autotrophs Document 2::: A biogeochemical cycle, or more generally a cycle of matter, is the movement and transformation of chemical elements and compounds between living organisms, the atmosphere, and the Earth's crust. Major biogeochemical cycles include the carbon cycle, the nitrogen cycle and the water cycle. In each cycle, the chemical element or molecule is transformed and cycled by living organisms and through various geological forms and reservoirs, including the atmosphere, the soil and the oceans. It can be thought of as the pathway by which a chemical substance cycles (is turned over or moves through) the biotic compartment and the abiotic compartments of Earth. The biotic compartment is the biosphere and the abiotic compartments are the atmosphere, lithosphere and hydrosphere. For example, in the carbon cycle, atmospheric carbon dioxide is absorbed by plants through photosynthesis, which converts it into organic compounds that are used by organisms for energy and growth. Carbon is then released back into the atmosphere through respiration and decomposition. Additionally, carbon is stored in fossil fuels and is released into the atmosphere through human activities such as burning fossil fuels. In the nitrogen cycle, atmospheric nitrogen gas is converted by plants into usable forms such as ammonia and nitrates through the process of nitrogen fixation. These compounds can be used by other organisms, and nitrogen is returned to the atmosphere through denitrification and other processes. In the water cycle, the universal solvent water evaporates from land and oceans to form clouds in the atmosphere, and then precipitates back to different parts of the planet. Precipitation can seep into the ground and become part of groundwater systems used by plants and other organisms, or can runoff the surface to form lakes and rivers. Subterranean water can then seep into the ocean along with river discharges, rich with dissolved and particulate organic matter and other nutrients. There are bio Document 3::: Community respiration (CR) refers to the total amount of carbon-dioxide that is produced by individuals organisms in a given community, originating from the cellular respiration of organic material. CR is an important ecological index as it dictates the amount of production for the higher trophic levels and influence biogeochemical cycles. CR is often used as a proxy for the biological activity of the microbial community. Overview The process of cellular respiration is foundational to the ecological index, community respiration (CR). Cellular respiration can be used to explain relationships between heterotrophic organisms and the autotrophic ones they consume. The process of cellular respiration consists of a series of metabolic reactions using biological material produced by autotrophic organisms, such as oxygen () and glucose (C6H12O6) to turn its chemical energy into adenosine triphosphate (ATP) which can then be used in other metabolic reactions to power the organism, creating carbon dioxide () and water () as a by-product.The overall process of cellular respiration can be summarized with, C6H12O6 + 6O2 → 6CO2 + 6H2O + ATP. The ATP created during cellular respiration is absolutely necessary for a living being to function as it is the 'Energy currency" of the cell and none of the other metabolic functions could be sustained without it. The process of cellular respiration is an essential component of the Carbon Cycle, which tracks the recycling of carbon through the earth and atmosphere in various compounds such as: CO2 ,H2CO3, HCO3- ,C6H12O6 , CH4 to name a few. The concentration of carbon dioxide in a given area can act as a proxy indicator for metabolic metabolic function of an individual, or individuals in that area. Since the process of cellular respiration consumes oxygen and produces carbon dioxide the amount of carbon dioxide can be used to infer the amount of oxygen used in the environment specifically for metabolic requirements. Since cellular respi Document 4::: Carbon sequestration (or carbon storage) is the process of storing carbon in a carbon pool. Carbon sequestration is a naturally occurring process but it can also be enhanced or achieved with technology, for example within carbon capture and storage projects. There are two main types of carbon sequestration: geologic and biologic (also called biosequestration). Carbon dioxide () is naturally captured from the atmosphere through biological, chemical, and physical processes. These changes can be accelerated through changes in land use and agricultural practices, such as converting crop land into land for non-crop fast growing plants. Artificial processes have been devised to produce similar effects, including large-scale, artificial capture and sequestration of industrially produced using subsurface saline aquifers or aging oil fields. Other technologies that work with carbon sequestration include bio-energy with carbon capture and storage, biochar, enhanced weathering, direct air carbon capture and sequestration (DACCS). Forests, kelp beds, and other forms of plant life absorb carbon dioxide from the air as they grow, and bind it into biomass. However, these biological stores are considered volatile carbon sinks as the long-term sequestration cannot be guaranteed. For example, natural events, such as wildfires or disease, economic pressures and changing political priorities can result in the sequestered carbon being released back into the atmosphere. Carbon dioxide that has been removed from the atmosphere can also be stored in the Earth's crust by injecting it into the subsurface, or in the form of insoluble carbonate salts (mineral sequestration). These methods are considered non-volatile because they remove carbon from the atmosphere and sequester it indefinitely and presumably for a considerable duration (thousands to millions of years). To enhance carbon sequestration processes in oceans the following technologies have been proposed but none have achieved lar The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The carbon cycle can be thought of in terms of two interdependent cycles - cellular respiration and what else? A. glycolysis B. photosynthesis C. pollination D. spermatogenesis Answer:
sciq-6028
multiple_choice
At what time of life does menopause occur?
[ "adolescence", "young adulthood", "in middle adulthood", "old age" ]
C
Relavent Documents: Document 0::: Postmenopausal confusion, also commonly referred to as postmenopausal brain fog, is a group of symptoms of menopause in which women report problems with cognition at a higher frequency during postmenopause than before. Multiple studies on cognitive performance following menopause have reported noticeable declines of greater than 60%. The common issues presented included impairments in reaction time and attention, difficulty recalling numbers or words, and forgetting reasons for involvement in certain behaviors. Association between subjective cognitive complaints and objective measures of performance show a significant impact on health-related quality of life for postmenopausal women. Treatment primarily involves symptom management through non-pharmacological treatment strategies. This includes involvement in physical activity and following medically supervised diets, especially those that contain phytoestrogens or resveratrol. Pharmacological interventions in treating postmenopausal confusion are currently being researched. Hormone replacement therapy (HRT) is currently not indicated for the treatment of postmenopausal confusion due to inefficacy. The use of HRT for approved indications has identified no significant negative effect on postmenopausal cognition. Although much of the literature references women, all people who undergo menopause, including those who do not self-identify as women, may experience symptoms of postmenopausal confusion. History Research on menopause as a whole declined with the end of the Women's Health Initiative (WHI) studies, but research on the treatment of symptoms associated with menopause—especially the treatment of cognitive decline—continues. The Study of Women's Health Across the Nation (SWAN), first started in 1996, continues to publish progress reports which include cognitive symptoms associated with menopausal transition, including those in postmenopause. , SWAN indicated, "Approximately 60% of midlife women report problems Document 1::: Menopause, also known as the climacteric, is the time when menstrual periods permanently cease, marking the end of reproduction. It typically occurs between the ages of 45 and 55, although the exact timing can vary. Menopause is usually a natural change. It can occur earlier in those who smoke tobacco. Other causes include surgery that removes both ovaries or some types of chemotherapy. At the physiological level, menopause happens because of a decrease in the ovaries' production of the hormones estrogen and progesterone. While typically not needed, a diagnosis of menopause can be confirmed by measuring hormone levels in the blood or urine. Menopause is the opposite of menarche, the time when a girl's periods start. In the years before menopause, a woman's periods typically become irregular, which means that periods may be longer or shorter in duration or be lighter or heavier in the amount of flow. During this time, women often experience hot flashes; these typically last from 30 seconds to ten minutes and may be associated with shivering, night sweats, and reddening of the skin. Hot flashes can recur for four to five years. Other symptoms may include vaginal dryness, trouble sleeping, and mood changes. The severity of symptoms varies between women. Menopause before the age of 45 years is considered to be "early menopause" and when ovarian failure/surgical removal of the ovaries occurs before the age of 40 years this is termed "premature ovarian insufficiency". In addition to symptoms (hot flushes/flashes, night sweats, mood changes, arthralgia and vaginal dryness), the physical consequences of menopause include bone loss, increased central abdominal fat, and adverse changes in a woman's cholesterol profile and vascular function. These changes predispose postmenopausal women to increased risks of osteoporosis and bone fracture, and of cardio-metabolic disease (diabetes and cardiovascular disease). Medical professionals often define menopause as having occurred Document 2::: Menopause in the workplace is a social and human resources campaigning issue in which people work to raise awareness of the impact menopause symptoms can have on attendance and performance in the workplace. Activism Campaigners, journalists, personnel professionals and academics draw upon published research and lobby for support for workers via industrial trades unions (including ACAS TUC UCU UNISON EIS, NASUTW ) and changes in legislation. In the UK under the Equality Act 2010, menopause discrimination is covered by three of the protected characteristics: age, sex and disability discrimination. A UK government cross-party equalities working group explored why workplaces were failing women going through the menopause Background The average age for the menopause transition is 51. Women over the age of 50 are a growing demographic in the workforce. 14 million working days are lost to menopause each year in the UK. Around 900,000 women have left jobs in the UK because of menopause symptoms making continuing work impossible. Many women go through the menopause during their working lives, and workplace support is vital.  Menopause is considered by many to be a private matter or ‘a women's issue' or the 'last taboo' subject in workplaces. The TUC found that many employers were unaware of the issues involved and not tackling problems in ways that helped workers. The impact of employers failing to make reasonable adjustments include loss of work days due to absence and women being disciplined on competency grounds for health issues. The number of UK employment tribunals concerning menopause is increasing A UK government report suggests that employers can make positive changes by "changing organisational cultures; compulsory equality and diversity training; providing specialist advice; tailored absence policies; flexible working patterns for mid-life women; and fairly low cost environmental changes" to cater for women's differing experiences. The CIPD have prod Document 3::: Menarche ( ; ) is the first menstrual cycle, or first menstrual bleeding, in female humans. From both social and medical perspectives, it is often considered the central event of female puberty, as it signals the possibility of fertility. Girls experience menarche at different ages. Having menarche occur between the ages of 9–14 in the West is considered normal. Canadian psychological researcher Niva Piran claims that menarche or the perceived average age of puberty is used in many cultures to separate girls from activity with boys, and to begin transition into womanhood. The timing of menarche is influenced by female biology, as well as genetic and environmental factors, especially nutritional factors. The mean age of menarche has declined over the last century, but the magnitude of the decline and the factors responsible remain subjects of contention. The worldwide average age of menarche is very difficult to estimate accurately, and it varies significantly by geographical region, race, ethnicity and other characteristics, and occurs mostly during a span of ages from 8 to 16, with a small percentage of girls having menarche by age 10, and the vast majority having it by the time they were 14. There is a later age of onset in Asian populations compared to the West, but it too is changing with time. For example a Korean study in 2011 showed an overall average age of 12.7, with around 20% before age 12, and more than 90% by age 14. A Chinese study from 2014 published in Acta Paediatrica showed similar results (overall average of age 12.8 in 2005 down to age 12.3 in 2014) and a similar trend in time, but also similar findings about ethnic, cultural, and environmental effects. The average age of menarche was about 12.7 years in Canada in 2001, and 12.9 in the United Kingdom. A study of girls in Istanbul, Turkey, in 2011 found the median age at menarche to be 12.7 years. In the United States, an analysis of 10,590 women aged 15–44 taken from the 2013–2017 round of th Document 4::: The reproductive-cell cycle theory posits that the hormones that regulate reproduction act in an antagonistic pleiotrophic manner to control aging via cell cycle signaling; promoting growth and development early in life in order to achieve reproduction, but later in life, in a futile attempt to maintain reproduction, become dysregulated and drive senescence. Rather than seeing aging as a loss of functionality as we get older, this theory defines aging as any change in an organism over time, as evidenced by the fact that if all chemical reactions in the body were stopped, no change, and thus no aging, would occur. Since the most important change in an organism through time is the chemical reactions that result in a single cell developing into a multicellular organism, whatever controls these chemical reactions that regulate cell growth, development, and death, is believed to control aging. The theory argues that these cellular changes are directed by reproductive hormones of the hypothalamic-pituitary-gonadal axis (HPG axis). Receptors for reproductive hormones (such as estrogens, progestogens, androgens and gonadotropins) have been found to be present in all tissues of the body. Thus, HPG axis hormones normally promote growth and development of the organism early in life in order to achieve reproduction. Hormones levels then begin to change in men around age 30 and more abruptly in women when they reach menopause, around age 50. When the HPG axis becomes unbalanced, cellular growth and development is dysregulated, and cell death and dysfunction can occur, both of which can initiate senescence, the accumulated damage to cells, tissues, and organs that occurs with the passage of time and that is associated with functional loss during aging. Evidence supporting this theory comes from disease studies showing that women who reach menopause later have less heart disease and stroke, less dementia, and less osteoporosis, supporting the theory that the longer the HPG a The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. At what time of life does menopause occur? A. adolescence B. young adulthood C. in middle adulthood D. old age Answer:
sciq-9179
multiple_choice
The chloroplasts contained within the green algal endosymbionts still are capable of what process associated with plants?
[ "photosynthesis", "respiration", "germination", "reproduction" ]
A
Relavent Documents: Document 0::: Cyanobacteria (), also called Cyanobacteriota or Cyanophyta, are a phylum of gram-negative bacteria that obtain energy via photosynthesis. The name cyanobacteria refers to their color (), which similarly forms the basis of cyanobacteria's common name, blue-green algae, although they are not usually scientifically classified as algae. They appear to have originated in a freshwater or terrestrial environment. Sericytochromatia, the proposed name of the paraphyletic and most basal group, is the ancestor of both the non-photosynthetic group Melainabacteria and the photosynthetic cyanobacteria, also called Oxyphotobacteria. Cyanobacteria use photosynthetic pigments, such as carotenoids, phycobilins, and various forms of chlorophyll, which absorb energy from light. Unlike heterotrophic prokaryotes, cyanobacteria have internal membranes. These are flattened sacs called thylakoids where photosynthesis is performed. Phototrophic eukaryotes such as green plants perform photosynthesis in plastids that are thought to have their ancestry in cyanobacteria, acquired long ago via a process called endosymbiosis. These endosymbiotic cyanobacteria in eukaryotes then evolved and differentiated into specialized organelles such as chloroplasts, chromoplasts, etioplasts, and leucoplasts, collectively known as plastids. Cyanobacteria are the first organisms known to have produced oxygen. By producing and releasing oxygen as a byproduct of photosynthesis, cyanobacteria are thought to have converted the early oxygen-poor, reducing atmosphere into an oxidizing one, causing the Great Oxidation Event and the "rusting of the Earth", which dramatically changed the composition of life forms on Earth. The cyanobacteria Synechocystis and Cyanothece are important model organisms with potential applications in biotechnology for bioethanol production, food colorings, as a source of human and animal food, dietary supplements and raw materials. Cyanobacteria produce a range of toxins known as cyanotox Document 1::: In ecology, primary production is the synthesis of organic compounds from atmospheric or aqueous carbon dioxide. It principally occurs through the process of photosynthesis, which uses light as its source of energy, but it also occurs through chemosynthesis, which uses the oxidation or reduction of inorganic chemical compounds as its source of energy. Almost all life on Earth relies directly or indirectly on primary production. The organisms responsible for primary production are known as primary producers or autotrophs, and form the base of the food chain. In terrestrial ecoregions, these are mainly plants, while in aquatic ecoregions algae predominate in this role. Ecologists distinguish primary production as either net or gross, the former accounting for losses to processes such as cellular respiration, the latter not. Overview Primary production is the production of chemical energy in organic compounds by living organisms. The main source of this energy is sunlight but a minute fraction of primary production is driven by lithotrophic organisms using the chemical energy of inorganic molecules.Regardless of its source, this energy is used to synthesize complex organic molecules from simpler inorganic compounds such as carbon dioxide () and water (H2O). The following two equations are simplified representations of photosynthesis (top) and (one form of) chemosynthesis (bottom): + H2O + light → CH2O + O2 + O2 + 4 H2S → CH2O + 4 S + 3 H2O In both cases, the end point is a polymer of reduced carbohydrate, (CH2O)n, typically molecules such as glucose or other sugars. These relatively simple molecules may be then used to further synthesise more complicated molecules, including proteins, complex carbohydrates, lipids, and nucleic acids, or be respired to perform work. Consumption of primary producers by heterotrophic organisms, such as animals, then transfers these organic molecules (and the energy stored within them) up the food web, fueling all of the Earth' Document 2::: Chlorella is a genus of about thirteen species of single-celled green algae of the division Chlorophyta. The cells are spherical in shape, about 2 to 10 μm in diameter, and are without flagella. Their chloroplasts contain the green photosynthetic pigments chlorophyll-a and -b. In ideal conditions cells of Chlorella multiply rapidly, requiring only carbon dioxide, water, sunlight, and a small amount of minerals to reproduce. The name Chlorella is taken from the Greek χλώρος, chlōros/ khlōros, meaning green, and the Latin diminutive suffix ella, meaning small. German biochemist and cell physiologist Otto Heinrich Warburg, awarded with the Nobel Prize in Physiology or Medicine in 1931 for his research on cell respiration, also studied photosynthesis in Chlorella. In 1961, Melvin Calvin of the University of California received the Nobel Prize in Chemistry for his research on the pathways of carbon dioxide assimilation in plants using Chlorella. Chlorella has been considered as a source of food and energy because its photosynthetic efficiency can reach 8%, which exceeds that of other highly efficient crops such as sugar cane. Taxonomy Chlorella was first described by Martinus Beijerinck in 1890. Since then, over a hundred taxa have been described within the genus. However, biochemical and genomic data has revealed that many of these species were not closely related to each other, even being placed in a separate class Chlorophyceae. In other words, the "green ball" form of Chlorella appears to be a product of convergent evolution and not a natural taxon. Identifying Chlorella-like algae based on morphological features alone is generally not possible. Some strains of "Chlorella" used for food are incorrectly identified, or correspond to genera that were classified out of true Chlorella. For example, Heterochlorella luteoviridis is typically known as Chlorella luteoviridis which is no longer considered a valid name. As a food source When first harvested, Chlorella Document 3::: Algae (, ; : alga ) is an informal term for a large and diverse group of photosynthetic, eukaryotic organisms. It is a polyphyletic grouping that includes species from multiple distinct clades. Included organisms range from unicellular microalgae, such as Chlorella, Prototheca and the diatoms, to multicellular forms, such as the giant kelp, a large brown alga which may grow up to in length. Most are aquatic and lack many of the distinct cell and tissue types, such as stomata, xylem and phloem that are found in land plants. The largest and most complex marine algae are called seaweeds, while the most complex freshwater forms are the Charophyta, a division of green algae which includes, for example, Spirogyra and stoneworts. Algae that are carried by water are plankton, specifically phytoplankton. Algae constitute a polyphyletic group since they do not include a common ancestor, and although their plastids seem to have a single origin, from cyanobacteria, they were acquired in different ways. Green algae are examples of algae that have primary chloroplasts derived from endosymbiotic cyanobacteria. Diatoms and brown algae are examples of algae with secondary chloroplasts derived from an endosymbiotic red alga. Algae exhibit a wide range of reproductive strategies, from simple asexual cell division to complex forms of sexual reproduction. Algae lack the various structures that characterize land plants, such as the phyllids (leaf-like structures) of bryophytes, rhizoids of non-vascular plants, and the roots, leaves, and other organs found in tracheophytes (vascular plants). Most are phototrophic, although some are mixotrophic, deriving energy both from photosynthesis and uptake of organic carbon either by osmotrophy, myzotrophy, or phagotrophy. Some unicellular species of green algae, many golden algae, euglenids, dinoflagellates, and other algae have become heterotrophs (also called colorless or apochlorotic algae), sometimes parasitic, relying entirely on external e Document 4::: A mycophycobiosis (composed of myco-, from the Ancient Greek: (mukês , "mushroom"), phyco-, from Ancient Greek: , (phûkos, fucus, used for algae), and -biose, from ancient Greek: (bióô, "to spend one's life") is a symbiotic organism made up of a multicellular algae and an ascomycete fungus housed inside the algae (in the thallus for example). The algae and fungus involved in this association are called mycophycobionts. The essential role of the algae is to carry out photosynthesis, while that of the fungus is less obvious, but it could be linked to the transfer of minerals within the thallus, to a repellent effect on herbivores and, above all, to resistance to desiccation of this living organism in the intertidal zone. Such symbioses have been reported in a few green algae (Prasiola, Blidingia) and red algae (Apophlaea), both in seawater and in freshwater. Definition elements Although compared to lichens by certain authors, mycophycobioses carry out an association of the opposite type: the algal partner is multicellular and forms the external structure of the symbiotic organization. Moreover, the reproduction of the two partners is always disjoint (the algae and the fungus reproduce separately). To explain the nuances of this duality, the ecologists Chantal Delzenne-Van Haluwyn, Michel Lerond propose the analogy of the two symbionts with an "ideal couple". In a lichen, the host is compared to a "macho fungus"; in mycophycobiosis, the host is "the algae that wears the panties". According to Hawksworth the physiology of this symbiosis could well be comparable to that of lichens, but it remains to be better explored. Unlike lichens, mycophycobioses look like an algal partner, which remains fertile. These associations appear to be less coevolved than lichens, as they exhibit neither joint asexual multiplication of partners nor do they contain the equivalent lichen products. History The term mycophycobiosis was introduced by Jan and Erika Kohlmeyer in 1972, base The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The chloroplasts contained within the green algal endosymbionts still are capable of what process associated with plants? A. photosynthesis B. respiration C. germination D. reproduction Answer:
sciq-6998
multiple_choice
What is the process of removing wastes and excess water from the body?
[ "ingestion", "depletion", "diffusion", "excretion" ]
D
Relavent Documents: Document 0::: The excretory system is a passive biological system that removes excess, unnecessary materials from the body fluids of an organism, so as to help maintain internal chemical homeostasis and prevent damage to the body. The dual function of excretory systems is the elimination of the waste products of metabolism and to drain the body of used up and broken down components in a liquid and gaseous state. In humans and other amniotes (mammals, birds and reptiles) most of these substances leave the body as urine and to some degree exhalation, mammals also expel them through sweating. Only the organs specifically used for the excretion are considered a part of the excretory system. In the narrow sense, the term refers to the urinary system. However, as excretion involves several functions that are only superficially related, it is not usually used in more formal classifications of anatomy or function. As most healthy functioning organs produce metabolic and other wastes, the entire organism depends on the function of the system. Breaking down of one of more of the systems is a serious health condition, for example kidney failure. Systems Urinary system The kidneys are large, bean-shaped organs which are present on each side of the vertebral column in the abdominal cavity. Humans have two kidneys and each kidney is supplied with blood from the renal artery. The kidneys remove from the blood the nitrogenous wastes such as urea, as well as salts and excess water, and excrete them in the form of urine. This is done with the help of millions of nephrons present in the kidney. The filtrated blood is carried away from the kidneys by the renal vein (or kidney vein). The urine from the kidney is collected by the ureter (or excretory tubes), one from each kidney, and is passed to the urinary bladder. The urinary bladder collects and stores the urine until urination. The urine collected in the bladder is passed into the external environment from the body through an opening called Document 1::: Urine is a liquid by-product of metabolism in humans and in many other animals. Urine flows from the kidneys through the ureters to the urinary bladder. Urination results in urine being excreted from the body through the urethra. Cellular metabolism generates many by-products that are rich in nitrogen and must be cleared from the bloodstream, such as urea, uric acid, and creatinine. These by-products are expelled from the body during urination, which is the primary method for excreting water-soluble chemicals from the body. A urinalysis can detect nitrogenous wastes of the mammalian body. Urine plays an important role in the earth's nitrogen cycle. In balanced ecosystems, urine fertilizes the soil and thus helps plants to grow. Therefore, urine can be used as a fertilizer. Some animals use it to mark their territories. Historically, aged or fermented urine (known as lant) was also used for gunpowder production, household cleaning, tanning of leather and dyeing of textiles. Human urine and feces are collectively referred to as human waste or human excreta, and are managed via sanitation systems. Livestock urine and feces also require proper management if the livestock population density is high. Physiology Most animals have excretory systems for elimination of soluble toxic wastes. In humans, soluble wastes are excreted primarily by the urinary system and, to a lesser extent in terms of urea, removed by perspiration. The urinary system consists of the kidneys, ureters, urinary bladder, and urethra. The system produces urine by a process of filtration, reabsorption, and tubular secretion. The kidneys extract the soluble wastes from the bloodstream, as well as excess water, sugars, and a variety of other compounds. The resulting urine contains high concentrations of urea and other substances, including toxins. Urine flows from the kidneys through the ureter, bladder, and finally the urethra before passing from the body. Duration Research looking at the duration Document 2::: Wet Processing Engineering is one of the major streams in Textile Engineering or Textile manufacturing which refers to the engineering of textile chemical processes and associated applied science. The other three streams in textile engineering are yarn engineering, fabric engineering, and apparel engineering. The processes of this stream are involved or carried out in an aqueous stage. Hence, it is called a wet process which usually covers pre-treatment, dyeing, printing, and finishing. The wet process is usually done in the manufactured assembly of interlacing fibers, filaments and yarns, having a substantial surface (planar) area in relation to its thickness, and adequate mechanical strength to give it a cohesive structure. In other words, the wet process is done on manufactured fiber, yarn and fabric. All of these stages require an aqueous medium which is created by water. A massive amount of water is required in these processes per day. It is estimated that, on an average, almost 50–100 liters of water is used to process only 1 kilogram of textile goods, depending on the process engineering and applications. Water can be of various qualities and attributes. Not all water can be used in the textile processes; it must have some certain properties, quality, color and attributes of being used. This is the reason why water is a prime concern in wet processing engineering. Water Water consumption and discharge of wastewater are the two major concerns. The textile industry uses a large amount of water in its varied processes especially in wet operations such as pre-treatment, dyeing, and printing. Water is required as a solvent of various dyes and chemicals and it is used in washing or rinsing baths in different steps. Water consumption depends upon the application methods, processes, dyestuffs, equipment/machines and technology which may vary mill to mill and material composition. Longer processing sequences, processing of extra dark colors and reprocessing lead Document 3::: Drinking is the act of ingesting water or other liquids into the body through the mouth, proboscis, or elsewhere. Humans drink by swallowing, completed by peristalsis in the esophagus. The physiological processes of drinking vary widely among other animals. Most animals drink water to maintain bodily hydration, although many can survive on the water gained from their food. Water is required for many physiological processes. Both inadequate and (less commonly) excessive water intake are associated with health problems. Methods of drinking In humans When a liquid enters a human mouth, the swallowing process is completed by peristalsis which delivers the liquid through the esophagus to the stomach; much of the activity is abetted by gravity. The liquid may be poured from the hands or drinkware may be used as vessels. Drinking can also be performed by acts of inhalation, typically when imbibing hot liquids or drinking from a spoon. Infants employ a method of suction wherein the lips are pressed tight around a source, as in breastfeeding: a combination of breath and tongue movement creates a vacuum which draws in liquid. In other land mammals By necessity, terrestrial animals in captivity become accustomed to drinking water, but most free-roaming animals stay hydrated through the fluids and moisture in fresh food, and learn to actively seek foods with high fluid content. When conditions impel them to drink from bodies of water, the methods and motions differ greatly among species. Cats, canines, and ruminants all lower the neck and lap in water with their powerful tongues. Cats and canines lap up water with the tongue in a spoon-like shape. Canines lap water by scooping it into their mouth with a tongue which has taken the shape of a ladle. However, with cats, only the tip of their tongue (which is smooth) touches the water, and then the cat quickly pulls its tongue back into its mouth which soon closes; this results in a column of liquid being pulled into the ca Document 4::: Purified water is water that has been mechanically filtered or processed to remove impurities and make it suitable for use. Distilled water was, formerly, the most common form of purified water, but, in recent years, water is more frequently purified by other processes including capacitive deionization, reverse osmosis, carbon filtering, microfiltration, ultrafiltration, ultraviolet oxidation, or electrodeionization. Combinations of a number of these processes have come into use to produce ultrapure water of such high purity that its trace contaminants are measured in parts per billion (ppb) or parts per trillion (ppt). Purified water has many uses, largely in the production of medications, in science and engineering laboratories and industries, and is produced in a range of purities. It is also used in the commercial beverage industry as the primary ingredient of any given trademarked bottling formula, in order to maintain product consistency. It can be produced on-site for immediate use or purchased in containers. Purified water in colloquial English can also refer to water that has been treated ("rendered potable") to neutralize, but not necessarily remove contaminants considered harmful to humans or animals. Parameters of water purity Purified water is usually produced by the purification of drinking water or ground water. The impurities that may need to be removed are: inorganic ions (typically monitored as electrical conductivity or resistivity or specific tests) organic compounds (typically monitored as TOC or by specific tests) bacteria (monitored by total viable counts or epifluorescence) endotoxins and nucleases (monitored by LAL or specific enzyme tests) particulates (typically controlled by filtration) gases (typically managed by degassing when required) Purification methods Distillation Distilled water is produced by a process of distillation. Distillation involves boiling the water and then condensing the vapor into a clean container, leaving sol The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the process of removing wastes and excess water from the body? A. ingestion B. depletion C. diffusion D. excretion Answer:
sciq-7837
multiple_choice
Name the physicist who discovered radioactivity?
[ "louis pasteur", "neal degrasse tyson", "albert einstein", "antoine henri becquerel" ]
D
Relavent Documents: Document 0::: Susan Carol Hagness is an American electrical engineer and applied electromagnetics researcher. She is the Philip Dunham Reed Professor and Department Chair of Electrical and Computer Engineering at the University of Wisconsin–Madison. Early life and education Hagness was born and raised in Terre Haute, Indiana where she was encouraged by mathematics professor Herb Bailey to pursue a career in STEM. He persuaded Hagness to take computer programming during the summer and entered her in Rubik's Cube competitions. Growing up, she attended high school in the Vigo County School Corporation district where she was a finalist for their Most Valuable Student contest. Hagness completed her PhD at Northwestern University after being inspired by Allen Taflove, who later became her doctoral advisor. Career As a graduate student, Hagness was asked to assist with Northwestern's new first-year course called "Engineering First," inspiring her to pursue a career in teaching. After graduating, Hagness chose to accept a faculty position at the University of Wisconsin–Madison (UW–Madison). Upon joining the UW–Madison's Department of Electrical and Computer Engineering in 1998, she was one of only two women in the department out of approximately 40 faculty. As an assistant professor of electrical and computer engineering at the university, she began researching the use of microwave radar imaging for breast cancer detection. In recognition of her academic research, Hagness earned a 2000 Presidential Early Career Awards for Scientists and Engineer and was named one of the world's 100 Top Young Innovators by the Massachusetts Institute of Technology's magazine, Technology Review. In 2003, she was the recipient of the Emil H. Steiger Distinguished Teaching Award from the University of Wisconsin–Madison. In 2009, Hagness was elected a Fellow of the IEEE for her "contributions to time-domain computational electromagnetics and microwave medical imaging." She remained at UW–Madison where she Document 1::: The Röntgen Memorial Site in Würzburg, Germany, is dedicated to the work of the German physicist Wilhelm Conrad Röntgen (1845–1923) and his discovery of X-rays, for which he was granted the Nobel Prize in physics. It contains an exhibition of historical instruments, machines and documents. Location The Röntgen Memorial Site is in the foyer, corridors and two laboratory rooms of the former Physics Institute of the University of Würzburg in Röntgenring 8, a building that is now used by the University of Applied Sciences Würzburg-Schweinfurt. The road, where the building lies, was renamed in 1909 from Pleicherring to Röntgenring. History On the late Friday evening of 8. November 1895 Röntgen discovered for the first time the rays which penetrate through solid materials and gave them the name X-rays. He presented this in a lecture and publication On a new type of rays - Über eine neue Art von Strahlen on 23 January 1896 at the Physical Medical Society of Würzburg. During the discussion of this lecture, the anatomist Albert von Kölliker proposed to call these rays Röntgen radiation after their inventor, a term that is still being used in Germany. Exhibition The Röntgen Memorial Site gives an insight into the particle physics of the late 19th century. It shows an experimental set-up of cathodic rays beside the apparatus of the discovery. An experiment of penetrating solid materials by X-rays is shown in the historic laboratory of Röntgen. A separate room shows various X-ray tubes, a medical X-ray machine of Siemens & Halske from 1912 and several original documents. In the foyer a short German movie explains the purpose of the Memorial Site and the life of Röntgen. In the corridor some personal belongings of Röntgen are displayed to give some background information on his personal and historical circumstances. After remodeling in 2015 the tables and captures of the exhibition are now in English and German language. Society The site is managed by the non-profit Document 2::: Antoine Henri Becquerel (; ; 15 December 1852 – 25 August 1908) was a French engineer, physicist, Nobel laureate, and the first person to discover radioactivity. For work in this field he, along with Marie Skłodowska-Curie and Pierre Curie, received the 1903 Nobel Prize in Physics. The SI unit for radioactivity, the becquerel (Bq), is named after him. Biography Early life Becquerel was born in Paris, France, into a wealthy family which produced four generations of notable physicists, including Becquerel's grandfather (Antoine César Becquerel), father (Alexandre-Edmond Becquerel), and son (Jean Becquerel). Henri started off his education by attending the Lycée Louis-le-Grand school, a prep school in Paris. He studied engineering at the École Polytechnique and the École des Ponts et Chaussées. Career In Becquerel's early career, he became the third in his family to occupy the physics chair at the Muséum National d'Histoire Naturelle in 1892. Later on in 1894, Becquerel became chief engineer in the Department of Bridges and Highways before he started with his early experiments. Becquerel's earliest works centered on the subject of his doctoral thesis: the plane polarization of light, with the phenomenon of phosphorescence and absorption of light by crystals. Early in his career, Becquerel also studied the Earth's magnetic fields. In 1895, he was appointed as a professor at the École Polytechnique. Becquerel's discovery of spontaneous radioactivity is a famous example of serendipity, of how chance favors the prepared mind. Becquerel had long been interested in phosphorescence, the emission of light of one color following a body's exposure to light of another color. In early 1896, there was a wave of excitement following Wilhelm Conrad Röntgen's discovery of X-rays on 5 January. During the experiment, Röntgen "found that the Crookes tubes he had been using to study cathode rays emitted a new kind of invisible ray that was capable of penetrating through black paper". Document 3::: The names for the chemical elements 104 to 106 were the subject of a major controversy starting in the 1960s, described by some nuclear chemists as the Transfermium Wars because it concerned the elements following fermium (element 100) on the periodic table. This controversy arose from disputes between American scientists and Soviet scientists as to which had first isolated these elements. The final resolution of this controversy in 1997 also decided the names of elements 107 to 109. Controversy By convention, naming rights for newly discovered chemical elements go to their discoverers. For elements 104, 105, and 106, there was a controversy between Soviet researchers at the Joint Institute for Nuclear Research and American researchers at Lawrence Berkeley National Laboratory regarding which group had discovered them first. Both parties suggested their own names for elements 104 and 105, not recognizing the other's name. The American name of seaborgium for element 106 was also objectionable to some, because it referred to American chemist Glenn T. Seaborg who was still alive at the time this name was proposed. (Einsteinium and fermium had also been proposed as names of new elements while Einstein and Fermi were still living, but only made public after their deaths, due to Cold War secrecy.) Opponents The two principal groups which were involved in the conflict over element naming were: An American group at Lawrence Berkeley Laboratory. A Russian group at Joint Institute for Nuclear Research in Dubna. and, as a kind of arbiter, The IUPAC Commission on Nomenclature of Inorganic Chemistry, which introduced its own proposal to the IUPAC General Assembly. The German group at the Gesellschaft für Schwerionenforschung (GSI) in Darmstadt, who had (undisputedly) discovered elements 107 to 109, were dragged into the controversy when the Commission suggested that the name "hahnium", proposed for element 105 by the Americans, be used for GSI's element 108 instead. P Document 4::: Edward Raymond Andrew FRS FRSE (27 June 1921 – 27 May 2001) was a 20th-century British scientist who was a pioneer of nuclear magnetic resonance. He was a primary figure in the development and creation of the world's first MRI scanner. Life He was born in Boston, Lincolnshire on 27 June 1921 the only child of English parents of Scots descent. He was educated at Wellingborough School where he was head boy. He then won a place at Christ's College, Cambridge on a Natural Science Tripos from 1939 to 1942 under C. P. Snow, Lawrence Bragg, Norman Feather and David Shoenberg. From 1942 to 1945, during the Second World War he was Scientific Officer at the Air Defence Research and Development Establishment in Malvern studying the effects of gun flashes on radar. In 1945 he returned to Cambridge as a research student at Pembroke College and at the Cavendish Laboratory. Here he worked with David Shoenberg on superconductors, gaining a doctorate (PhD) in 1948. He then went to Harvard University for a year to work on nuclear magnetic resonance with Ed Purcell and Bersohn, also working on the Pake doublet. He returned to Britain in 1949 to work with Jack Allen FRS at the Cavendish. Colleagues on the NMR project included Bob Eades, Dan Hyndman and Alwyn Rushworth. His students here included Waldo Hinshaw. In March 1952 he was elected a Fellow of the Royal Society of Edinburgh. In 1954 he became professor of physics at the University of North Wales in Bangor. Here he founded the British Radio-Frequency Spectroscopy Group (BRSG). In 1964 he moved to a chair at the University of Nottingham in place of Prof L. F. Bates. His work here included the development of the MRI scanner from 1975 to 1977. In 1978 their success led to the development of the whole-body MRI scanner. After 19 years in Nottingham he moved to the University of Florida in Gainesville as Graduate Professor of Radiology, Physics and Nuclear Engineering. In 1984 he was elected a Fellow of the Royal Society of Lo The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Name the physicist who discovered radioactivity? A. louis pasteur B. neal degrasse tyson C. albert einstein D. antoine henri becquerel Answer:
sciq-5724
multiple_choice
In which process do two light nuclei combine to produce a heavier nucleus and great energy?
[ "nuclear fusion", "certain fusion", "light fusion", "general fusion" ]
A
Relavent Documents: Document 0::: Nuclear binding energy in experimental physics is the minimum energy that is required to disassemble the nucleus of an atom into its constituent protons and neutrons, known collectively as nucleons. The binding energy for stable nuclei is always a positive number, as the nucleus must gain energy for the nucleons to move apart from each other. Nucleons are attracted to each other by the strong nuclear force. In theoretical nuclear physics, the nuclear binding energy is considered a negative number. In this context it represents the energy of the nucleus relative to the energy of the constituent nucleons when they are infinitely far apart. Both the experimental and theoretical views are equivalent, with slightly different emphasis on what the binding energy means. The mass of an atomic nucleus is less than the sum of the individual masses of the free constituent protons and neutrons. The difference in mass can be calculated by the Einstein equation, , where E is the nuclear binding energy, c is the speed of light, and m is the difference in mass. This 'missing mass' is known as the mass defect, and represents the energy that was released when the nucleus was formed. The term "nuclear binding energy" may also refer to the energy balance in processes in which the nucleus splits into fragments composed of more than one nucleon. If new binding energy is available when light nuclei fuse (nuclear fusion), or when heavy nuclei split (nuclear fission), either process can result in release of this binding energy. This energy may be made available as nuclear energy and can be used to produce electricity, as in nuclear power, or in a nuclear weapon. When a large nucleus splits into pieces, excess energy is emitted as gamma rays and the kinetic energy of various ejected particles (nuclear fission products). These nuclear binding energies and forces are on the order of one million times greater than the electron binding energies of light atoms like hydrogen. Introduction Nucl Document 1::: Reaction products This sequence of reactions can be understood by thinking of the two interacting carbon nuclei as coming together to form an excited state of the 24Mg nucleus, which then decays in one of the five ways listed above. The first two reactions are strongly exothermic, as indicated by the large positive energies released, and ar Document 2::: Pair production is the creation of a subatomic particle and its antiparticle from a neutral boson. Examples include creating an electron and a positron, a muon and an antimuon, or a proton and an antiproton. Pair production often refers specifically to a photon creating an electron–positron pair near a nucleus. As energy must be conserved, for pair production to occur, the incoming energy of the photon must be above a threshold of at least the total rest mass energy of the two particles created. (As the electron is the lightest, hence, lowest mass/energy, elementary particle, it requires the least energetic photons of all possible pair-production processes.) Conservation of energy and momentum are the principal constraints on the process. All other conserved quantum numbers (angular momentum, electric charge, lepton number) of the produced particles must sum to zero thus the created particles shall have opposite values of each other. For instance, if one particle has electric charge of +1 the other must have electric charge of −1, or if one particle has strangeness of +1 then another one must have strangeness of −1. The probability of pair production in photon–matter interactions increases with photon energy and also increases approximately as the square of atomic number of (hence, number of protons in) the nearby atom. Photon to electron and positron For photons with high photon energy (MeV scale and higher), pair production is the dominant mode of photon interaction with matter. These interactions were first observed in Patrick Blackett's counter-controlled cloud chamber, leading to the 1948 Nobel Prize in Physics. If the photon is near an atomic nucleus, the energy of a photon can be converted into an electron–positron pair: (Z+) →  +  The photon's energy is converted to particle mass in accordance with Einstein's equation, ; where is energy, is mass and is the speed of light. The photon must have higher energy than the sum of the rest mass energies of Document 3::: In nuclear physics and nuclear chemistry, a nuclear reaction is a process in which two nuclei, or a nucleus and an external subatomic particle, collide to produce one or more new nuclides. Thus, a nuclear reaction must cause a transformation of at least one nuclide to another. If a nucleus interacts with another nucleus or particle and they then separate without changing the nature of any nuclide, the process is simply referred to as a type of nuclear scattering, rather than a nuclear reaction. In principle, a reaction can involve more than two particles colliding, but because the probability of three or more nuclei to meet at the same time at the same place is much less than for two nuclei, such an event is exceptionally rare (see triple alpha process for an example very close to a three-body nuclear reaction). The term "nuclear reaction" may refer either to a change in a nuclide induced by collision with another particle or to a spontaneous change of a nuclide without collision. Natural nuclear reactions occur in the interaction between cosmic rays and matter, and nuclear reactions can be employed artificially to obtain nuclear energy, at an adjustable rate, on-demand. Nuclear chain reactions in fissionable materials produce induced nuclear fission. Various nuclear fusion reactions of light elements power the energy production of the Sun and stars. History In 1919, Ernest Rutherford was able to accomplish transmutation of nitrogen into oxygen at the University of Manchester, using alpha particles directed at nitrogen 14N + α → 17O + p.  This was the first observation of an induced nuclear reaction, that is, a reaction in which particles from one decay are used to transform another atomic nucleus. Eventually, in 1932 at Cambridge University, a fully artificial nuclear reaction and nuclear transmutation was achieved by Rutherford's colleagues John Cockcroft and Ernest Walton, who used artificially accelerated protons against lithium-7, to split the nucleus into t Document 4::: The Gamow factor, Sommerfeld factor or Gamow–Sommerfeld factor, named after its discoverer George Gamow or after Arnold Sommerfeld, is a probability factor for two nuclear particles' chance of overcoming the Coulomb barrier in order to undergo nuclear reactions, for example in nuclear fusion. By classical physics, there is almost no possibility for protons to fuse by crossing each other's Coulomb barrier at temperatures commonly observed to cause fusion, such as those found in the sun. When George Gamow instead applied quantum mechanics to the problem, he found that there was a significant chance for the fusion due to tunneling. The probability of two nuclear particles overcoming their electrostatic barriers is given by the following equation: where is the Gamow energy, Here, is the reduced mass of the two particles. The constant is the fine structure constant, is the speed of light, and and are the respective atomic numbers of each particle. While the probability of overcoming the Coulomb barrier increases rapidly with increasing particle energy, for a given temperature, the probability of a particle having such an energy falls off very fast, as described by the Maxwell–Boltzmann distribution. Gamow found that, taken together, these effects mean that for any given temperature, the particles that fuse are mostly in a temperature-dependent narrow range of energies known as the Gamow window. Derivation Gamow first solved the one-dimensional case of quantum tunneling using the WKB approximation. Considering a wave function of a particle of mass m, we take area 1 to be where a wave is emitted, area 2 the potential barrier which has height V and width l (at ), and area 3 its other side, where the wave is arriving, partly transmitted and partly reflected. For a wave number k and energy E we get: where and . This is solved for given A and α by taking the boundary conditions at the both barrier edges, at and , where both and its derivative must be equal The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. In which process do two light nuclei combine to produce a heavier nucleus and great energy? A. nuclear fusion B. certain fusion C. light fusion D. general fusion Answer:
sciq-4832
multiple_choice
What hormone controls milk production in mammary glands?
[ "pepsin", "prolactin", "melanin", "dopamine" ]
B
Relavent Documents: Document 0::: Uterine glands or endometrial glands are tubular glands, lined by a simple columnar epithelium, found in the functional layer of the endometrium that lines the uterus. Their appearance varies during the menstrual cycle. During the proliferative phase, uterine glands appear long due to estrogen secretion by the ovaries. During the secretory phase, the uterine glands become very coiled with wide lumens and produce a glycogen-rich secretion known as histotroph or uterine milk. This change corresponds with an increase in blood flow to spiral arteries due to increased progesterone secretion from the corpus luteum. During the pre-menstrual phase, progesterone secretion decreases as the corpus luteum degenerates, which results in decreased blood flow to the spiral arteries. The functional layer of the uterus containing the glands becomes necrotic, and eventually sloughs off during the menstrual phase of the cycle. They are of small size in the unimpregnated uterus, but shortly after impregnation become enlarged and elongated, presenting a contorted or waved appearance. Function Hormones produced in early pregnancy stimulate the uterine glands to secrete a number of substances to give nutrition and protection to the embryo and fetus, and the fetal membranes. These secretions are known as histiotroph, alternatively histotroph, and also as uterine milk. Important uterine milk proteins are glycodelin-A, and osteopontin. Some secretory components from the uterine glands are taken up by the secondary yolk sac lining the exocoelomic cavity during pregnancy, and may thereby assist in providing fetal nutrition. Additional images Document 1::: The following is a list of hormones found in Homo sapiens. Spelling is not uniform for many hormones. For example, current North American and international usage uses estrogen and gonadotropin, while British usage retains the Greek digraph in oestrogen and favours the earlier spelling gonadotrophin. Hormone listing Steroid Document 2::: An anterior pituitary basophil is a type of cell in the anterior pituitary which manufactures hormones. It is called a basophil because it is basophilic (readily takes up bases), and typically stains a relatively deep blue or purple. These basophils are further classified by the hormones they produce. (It is usually not possible to distinguish between these cell types using standard staining techniques.) *Produced only in pregnancy by the developing embryo. See also Chromophobe cell Melanotroph Chromophil Acidophil cell Oxyphil cell Oxyphil cell (parathyroid) Pituitary gland Neuroendocrine cell Basophilic Document 3::: Pathophysiology of obesity is the study of disordered physiological processes that cause, result from, or are otherwise associated with obesity. A number of possible pathophysiological mechanisms have been identified which may contribute in the development and maintenance of obesity. Research This field of research had been almost unapproached until the leptin gene was discovered in 1994 by J. M. Friedman's laboratory. These investigators postulated that leptin was a satiety factor. In the ob/ob mouse, mutations in the leptin gene resulted in the obese phenotype opening the possibility of leptin therapy for human obesity. However, soon thereafter J. F. Caro's laboratory could not detect any mutations in the leptin gene in humans with obesity. On the contrary, leptin expression was increased, proposing the possibility of leptin-resistance in human obesity. Since this discovery, many other hormonal mechanisms have been elucidated that participate in the regulation of appetite and food intake, storage patterns of adipose tissue, and development of insulin resistance. Since leptin's discovery, ghrelin, insulin, orexin, PYY 3-36, cholecystokinin, adiponectin, as well as many other mediators have been studied. The adipokines are mediators produced by adipose tissue; their action is thought to modify many obesity-related diseases. Appetite Leptin and ghrelin are considered to be complementary in their influence on appetite, with ghrelin produced by the stomach modulating short-term appetitive control (i.e. to eat when the stomach is empty and to stop when the stomach is stretched). Leptin is produced by adipose tissue to signal fat storage reserves in the body, and mediates long-term appetitive controls (i.e. to eat more when fat storages are low and less when fat storages are high). Although administration of leptin may be effective in a small subset of obese individuals who are leptin-deficient, most obese individuals are thought to be leptin resistant and have been f Document 4::: Pulsatile secretion is a biochemical phenomenon observed in a wide variety of cell and tissue types, in which chemical products are secreted in a regular temporal pattern. The most common cellular products observed to be released in this manner are intercellular signaling molecules such as hormones or neurotransmitters. Examples of hormones that are secreted pulsatilely include insulin, thyrotropin, TRH, gonadotropin-releasing hormone (GnRH) and growth hormone (GH). In the nervous system, pulsatility is observed in oscillatory activity from central pattern generators. In the heart, pacemakers are able to work and secrete in a pulsatile manner. A pulsatile secretion pattern is critical to the function of many hormones in order to maintain the delicate homeostatic balance necessary for essential life processes, such as development and reproduction. Variations of the concentration in a certain frequency can be critical to hormone function, as evidenced by the case of GnRH agonists, which cause functional inhibition of the receptor for GnRH due to profound downregulation in response to constant (tonic) stimulation. Pulsatility may function to sensitize target tissues to the hormone of interest and upregulate receptors, leading to improved responses. This heightened response may have served to improve the animal's fitness in its environment and promote its evolutionary retention. Pulsatile secretion in its various forms is observed in: Hypothalamic-pituitary-gonadal axis (HPG) related hormones Glucocorticoids Insulin Growth hormone Parathyroid hormone Neuroendocrine Pulsatility Nervous system control over hormone release is based in the hypothalamus, from which the neurons that populate the pariventricular and arcuate nuclei originate. These neurons project to the median eminence, where they secrete releasing hormones into the hypophysial portal system connecting the hypothalamus with the pituitary gland. There, they dictate endocrine function via the four Hyp The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What hormone controls milk production in mammary glands? A. pepsin B. prolactin C. melanin D. dopamine Answer:
sciq-2646
multiple_choice
What type of molecules sit within a membrane and contain an aqueous channel that spans the membrane’s hydrophobic region?
[ "microorganisms", "mole", "channel", "osmotic fluid" ]
C
Relavent Documents: Document 0::: Semipermeable membrane is a type of biological or synthetic, polymeric membrane that will allow certain molecules or ions to pass through it by osmosis. The rate of passage depends on the pressure, concentration, and temperature of the molecules or solutes on either side, as well as the permeability of the membrane to each solute. Depending on the membrane and the solute, permeability may depend on solute size, solubility, properties, or chemistry. How the membrane is constructed to be selective in its permeability will determine the rate and the permeability. Many natural and synthetic materials which are rather thick are also semipermeable. One example of this is the thin film on the inside of the egg. Biological membranes are selectively permeable, with the passage of molecules controlled by facilitated diffusion, passive transport or active transport regulated by proteins embedded in the membrane. Biological membranes An example of a biological semi-permeable membrane is the lipid bilayer, on which is based the plasma membrane that surrounds all biological cells. A group of phospholipids (consisting of a phosphate head and two fatty acid tails) arranged into a double layer, the phospholipid bilayer is a semipermeable membrane that is very specific in its permeability. The hydrophilic phosphate heads are in the outside layer and exposed to the water content outside and within the cell. The hydrophobic tails are the layer hidden in the inside of the membrane. Cholesterol molecules are also found throughout the plasma membrane and act as a buffer of membrane fluidity. The phospholipid bilayer is most permeable to small, uncharged solutes. Protein channels are embedded in or through phospholipids, and, collectively, this model is known as the fluid mosaic model. Aquaporins are protein channel pores permeable to water. Cellular communication Information can also pass through the plasma membrane when signaling molecules bind to receptors in the cell membrane. Th Document 1::: Tight junctions, also known as occluding junctions or zonulae occludentes (singular, zonula occludens), are multiprotein junctional complexes whose canonical function is to prevent leakage of solutes and water and seals between the epithelial cells. They also play a critical role maintaining the structure and permeability of endothelial cells. Tight junctions may also serve as leaky pathways by forming selective channels for small cations, anions, or water. The corresponding junctions that occur in invertebrates are septate junctions. Structure Tight junctions are composed of a branching network of sealing strands, each strand acting independently from the others. Therefore, the efficiency of the junction in preventing ion passage increases exponentially with the number of strands. Each strand is formed from a row of transmembrane proteins embedded in both plasma membranes, with extracellular domains joining one another directly. There are at least 40 different proteins composing the tight junctions. These proteins consist of both transmembrane and cytoplasmic proteins. The three major transmembrane proteins are occludin, claudins, and junction adhesion molecule (JAM) proteins. These associate with different peripheral membrane proteins such as ZO-1 located on the intracellular side of plasma membrane, which anchor the strands to the actin component of the cytoskeleton. Thus, tight junctions join together the cytoskeletons of adjacent cells. Transmembrane proteins: Occludin was the first integral membrane protein to be identified. It has a molecular weight of ~60kDa. It consists of four transmembrane domains and both the N-terminus and the C-terminus of the protein are intracellular. It forms two extracellular loops and one intracellular loop. These loops help regulate paracellular permeability. Occludin also plays a key role in cellular structure and barrier function. Claudins were discovered after occludin and are a family of over 27 different members in Document 2::: Membrane proteins are common proteins that are part of, or interact with, biological membranes. Membrane proteins fall into several broad categories depending on their location. Integral membrane proteins are a permanent part of a cell membrane and can either penetrate the membrane (transmembrane) or associate with one or the other side of a membrane (integral monotopic). Peripheral membrane proteins are transiently associated with the cell membrane. Membrane proteins are common, and medically important—about a third of all human proteins are membrane proteins, and these are targets for more than half of all drugs. Nonetheless, compared to other classes of proteins, determining membrane protein structures remains a challenge in large part due to the difficulty in establishing experimental conditions that can preserve the correct conformation of the protein in isolation from its native environment. Function Membrane proteins perform a variety of functions vital to the survival of organisms: Membrane receptor proteins relay signals between the cell's internal and external environments. Transport proteins move molecules and ions across the membrane. They can be categorized according to the Transporter Classification database. Membrane enzymes may have many activities, such as oxidoreductase, transferase or hydrolase. Cell adhesion molecules allow cells to identify each other and interact. For example, proteins involved in immune response The localization of proteins in membranes can be predicted reliably using hydrophobicity analyses of protein sequences, i.e. the localization of hydrophobic amino acid sequences. Integral membrane proteins Integral membrane proteins are permanently attached to the membrane. Such proteins can be separated from the biological membranes only using detergents, nonpolar solvents, or sometimes denaturing agents. They can be classified according to their relationship with the bilayer: Integral polytopic proteins are transmembran Document 3::: Paracellular transport refers to the transfer of substances across an epithelium by passing through the intercellular space between the cells. It is in contrast to transcellular transport, where the substances travel through the cell, passing through both the apical membrane and basolateral membrane. The distinction has particular significance in renal physiology and intestinal physiology. Transcellular transport often involves energy expenditure whereas paracellular transport is unmediated and passive down a concentration gradient, or by osmosis (for water) and solvent drag for solutes. Paracellular transport also has the benefit that absorption rate is matched to load because it has no transporters that can be saturated. In most mammals, intestinal absorption of nutrients is thought to be dominated by transcellular transport, e.g., glucose is primarily absorbed via the SGLT1 transporter and other glucose transporters. Paracellular absorption therefore plays only a minor role in glucose absorption, although there is evidence that paracellular pathways become more available when nutrients are present in the intestinal lumen. In contrast, small flying vertebrates (small birds and bats) rely on the paracellular pathway for the majority of glucose absorption in the intestine. This has been hypothesized to compensate for an evolutionary pressure to reduce mass in flying animals, which resulted in a reduction in intestine size and faster transit time of food through the gut. Capillaries of the blood–brain barrier have only transcellular transport, in contrast with normal capillaries which have both transcellular and paracellular transport. The paracellular pathway of transport is also important for the absorption of drugs in the gastrointestinal tract. The paracellular pathway allows the permeation of hydrophilic molecules that are not able to permeate through the lipid membrane by the transcellular pathway of absorption. This is particularly important for hydrophi Document 4::: Polymorphism in biophysics is the ability of lipids to aggregate in a variety of ways, giving rise to structures of different shapes, known as "phases". This can be in the form of spheres of lipid molecules (micelles), pairs of layers that face one another (lamellar phase, observed in biological systems as a lipid bilayer), a tubular arrangement (hexagonal), or various cubic phases (Fdm, Imm, Iam, Pnm, and Pmm being those discovered so far). More complicated aggregations have also been observed, such as rhombohedral, tetragonal and orthorhombic phases. It forms an important part of current academic research in the fields of membrane biophysics (polymorphism), biochemistry (biological impact) and organic chemistry (synthesis). Determination of the topology of a lipid system is possible by a number of methods, the most reliable of which is x-ray diffraction. This uses a beam of x-rays that are scattered by the sample, giving a diffraction pattern as a set of rings. The ratio of the distances of these rings from the central point indicates which phase(s) are present. The structural phase of the aggregation is influenced by the ratio of lipids present, temperature, hydration, pressure and ionic strength (and type). Hexagonal phases In lipid polymorphism, if the packing ratio of lipids is greater or less than one, lipid membranes can form two separate hexagonal phases, or nonlamellar phases, in which long, tubular aggregates form according to the environment in which the lipid is introduced. Hexagonal I phase (HI) This phase is favored in detergent-in-water solutions and has a packing ratio of less than one. The micellar population in a detergent/water mixture cannot increase without limit as the detergent to water ratio increases. In the presence of low amounts of water, lipids that would normally form micelles will form larger aggregates in the form of micellar tubules in order to satisfy the requirements of the hydrophobic effect. These aggregates can be t The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of molecules sit within a membrane and contain an aqueous channel that spans the membrane’s hydrophobic region? A. microorganisms B. mole C. channel D. osmotic fluid Answer:
sciq-1522
multiple_choice
In what two ways are volcanic eruptions characterized?
[ "localized and general", "explosive and non-explosive", "minor and explosive", "non-explosive and serious" ]
B
Relavent Documents: Document 0::: The mid-24th century BCE climate anomaly is the period, between 2354–2345 BCE, of consistently, reduced annual temperatures that are reconstructed from consecutive abnormally narrow, Irish oak tree rings. These tree rings are indicative of a period of catastrophically reduced growth in Irish trees during that period. This range of dates also matches the transition from the Neolithic to the Bronze Age in the British Isles and a period of widespread societal collapse in the Near East. It has been proposed that this anomalous downturn in the climate might have been the result of comet debris suspended in the atmosphere. In 1997, Marie-Agnès Courty proposed that a natural disaster involving wildfires, floods, and an air blast of over 100 megatons power occurred about 2350 BCE. This proposal is based on unusual "dust" deposits which have been reported from archaeological sites in Mesopotamia that are a few hundred kilometres from each other. In later papers, Courty subsequently revised the date of this event from 2350 BCE to 2000 BCE. Based only upon the analysis of satellite imagery, Umm al Binni lake in southern Iraq has been suggested as a possible extraterrestrial impact crater and possible cause of this natural disaster. More recent sources have argued for a formation of the lake through the subsidence of the underlying basement fault blocks. Baillie and McAneney's 2015 discussion of this climate anomaly discusses its abnormally narrow Irish tree rings and the anomalous dust deposits of Courty. However, this paper lacks any mention of Umm al Binni lake. See also 4.2-kiloyear event, c. 2200 BCE Great Flood (China), c. 2300 BCE Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: Explosive volcanic eruptions affect the global climate in several ways. Lowering sea surface temperature One main impact of volcanoes is the injection of sulfur-bearing gases into the stratosphere, which oxidize to form sulfate aerosols. Stratospheric sulfur aerosols spread around the globe by the atmospheric circulation, producing surface cooling by scattering solar radiation back to space. This cooling effect on the ocean surface usually lasts for several years as the lifetime of sulfate aerosols is about 2–3 years. However, in the subsurface ocean the cooling signal may persist for a longer time and may have impacts on some decadal variabilities, such as the Atlantic meridional overturning circulation (AMOC). Volcanic aerosols from huge volcanoes (VEI>=5) directly reduce global mean sea surface temperature (SST) by approximately 0.2-0.3 °C, milder than global total surface temperature drop, which is ~0.3 to 0.5 °C, according to both global temperature records and model simulations. It usually takes several years to be back to normal. Decreasing ocean heat content The volcanic cooling signals in ocean heat content can persist for much longer time (decadal or mutil-decadal time scale), far beyond the duration of volcanic forcing. Several studies have revealed that Krakatau’s effect in the heat content can be as long as one-century. Relaxation time of the effects of recent volcanoes is generally shorter than those before the 1950s. For example, the recovery time of ocean heat content of Pinatubo, which caused comparable radiative forcing to Krakatau, seems to be much shorter. This is because Pinatubo happened under a warm and non-stationary background with increasing greenhouse gas forcing. However, its signal still could penetrate down to ~1000 m deep. A 2022 study on environmental impacts of volcanic eruptions showed that in the eastern equatorial of the pacific, after the volcano erupts, some low-latitude volcano trends to warmer. But some highlatitude vol Document 3::: Martian geysers (or jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Martian geysers are distinct from geysers on Earth, which are typically associated with hydrothermal activity. These are unlike any terrestrial geological phenomenon. The reflectance (albedo), shapes and unusual spider appearance of these features have stimulated a variety of hypotheses about their origin, ranging from differences in frosting reflectance, to explanations involving biological processes. However, all current geophysical models assume some sort of jet or geyser-like activity on Mars. Their characteristics, and the process of their formation, are still a matter of debate. These features are unique to the south polar region of Mars in an area informally called the 'cryptic region', at latitudes 60° to 80° south and longitudes 150°W to 310°W; this 1 meter deep carbon dioxide (CO2) ice transition area—between the scarps of the thick polar ice layer and the permafrost—is where clusters of the apparent geyser systems are located. The seasonal frosting and defrosting of carbon dioxide ice results in the appearance of a number of features, such dark dune spots with spider-like rilles or channels below the ice, where spider-like radial channels are carved between the ground and the carbon dioxide ice, giving it an appearance of spider webs, then, pressure accumulating in their interior ejects gas and dark basaltic sand or dust, which is deposited on the ice surface and thus, forming dark dune spots. This process is rapid, observed happening in the space of a few days, weeks or months, a growth rate rather unusual in geology – especially for Mars. However, it would seem that multiple years would be required to carve the larger spider-like channels. There is no direct data on these features othe Document 4::: Sand boils or sand volcanoes occur when water under pressure wells up through a bed of sand. The water looks like it is boiling up from the bed of sand, hence the name. Sand volcano A sand volcano or sand blow is a cone of sand formed by the ejection of sand onto a surface from a central point. The sand builds up as a cone with slopes at the sand's angle of repose. A crater is commonly seen at the summit. The cone looks like a small volcanic cone and can range in size from millimetres to metres in diameter. The process is often associated with soil liquefaction and the ejection of fluidized sand that can occur in water-saturated sediments during an earthquake. The New Madrid Seismic Zone exhibited many such features during the 1811–12 New Madrid earthquakes. Adjacent sand blows aligned in a row along a linear fracture within fine-grained surface sediments are just as common, and can still be seen in the New Madrid area. In the past few years, much effort has gone into the mapping of liquefaction features to study ancient earthquakes. The basic idea is to map zones that are susceptible to the process and then go in for a closer look. The presence or absence of soil liquefaction features is strong evidence of past earthquake activity, or lack thereof. These are to be contrasted with mud volcanoes, which occur in areas of geyser or subsurface gas venting. Flood protection structures Sand boils can be a mechanism contributing to liquefaction and levee failure during floods. This effect is caused by a difference in pressure on two sides of a levee or dike, most likely during a flood. This process can result in internal erosion, whereby the removal of soil particles results in a pipe through the embankment. The creation of the pipe will quickly pick up pace and will eventually result in failure of the embankment. A sand boil is difficult to stop. The most effective method is by creating a body of water above the boil to create enough pressure to slow the flow of The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. In what two ways are volcanic eruptions characterized? A. localized and general B. explosive and non-explosive C. minor and explosive D. non-explosive and serious Answer:
sciq-2178
multiple_choice
What kind of conditions are often inherited as simple recessive traits?
[ "genetic neurotransmitters", "genetic diversivers", "genetic ratios", "genetic disorders" ]
D
Relavent Documents: Document 0::: Mendelian traits in humans are human traits that are substantially influenced by Mendelian inheritance. Most – if not all – Mendelian traits are also influenced by other genes, the environment, immune responses, and chance. Therefore no trait is purely Mendelian, but many traits are almost entirely Mendelian, including canonical examples, such as those listed below. Purely Mendelian traits are a minority of all traits, since most phenotypic traits exhibit incomplete dominance, codominance, and contributions from many genes. If a trait is genetically influenced, but not well characterized by Mendelian inheritance, it is non-Mendelian. Examples Albinism (recessive) Achondroplasia Alkaptonuria Ataxia telangiectasia Brachydactyly (shortness of fingers and toes) Colour blindness (monochromatism, dichromatism, anomalous trichromatism, tritanopia, deuteranopia, protanopia) Duchenne muscular dystrophy Ectrodactyly Ehlers–Danlos syndrome Fabry disease Galactosemia Gaucher's disease Some forms of Haemophilia Hereditary breast–ovarian cancer syndrome Hereditary nonpolyposis colorectal cancer HFE hereditary haemochromatosis Huntington's disease Hypercholesterolemia Krabbe disease Lactase persistence (dominant) Leber's hereditary optic neuropathy Lesch–Nyhan syndrome Marfan syndrome Niemann–Pick disease Phenylketonuria Porphyria Retinoblastoma Sickle-cell disease Sanfilippo syndrome Tay–Sachs disease Wet (dominant) or dry (recessive) earwax Non-Mendelian traits Most traits (including all complex traits) are non-mendelian. Some traits commonly thought of as Mendelian are not, including: Eye Color Psychiatric diseases Hair color Height Document 1::: The Generalist Genes hypothesis of learning abilities and disabilities was originally coined in an article by Plomin & Kovas (2005). The Generalist Genes hypothesis suggests that most genes associated with common learning disabilities and abilities are generalist in three ways. Firstly, the same genes that influence common learning abilities (e.g., high reading aptitude) are also responsible for common learning disabilities (e.g., reading disability): they are strongly genetically correlated. Secondly, many of the genes associated with one aspect of a learning disability (e.g., vocabulary problems) also influence other aspects of this learning disability (e.g., grammar problems). Thirdly, genes that influence one learning disability (e.g., reading disability) are largely the same as those that influence other learning disabilities (e.g., mathematics disability). The Generalist Genes hypothesis has important implications for education, cognitive sciences and molecular genetics. Document 2::: Human genetics is the study of inheritance as it occurs in human beings. Human genetics encompasses a variety of overlapping fields including: classical genetics, cytogenetics, molecular genetics, biochemical genetics, genomics, population genetics, developmental genetics, clinical genetics, and genetic counseling. Genes are the common factor of the qualities of most human-inherited traits. Study of human genetics can answer questions about human nature, can help understand diseases and the development of effective treatment and help us to understand the genetics of human life. This article describes only basic features of human genetics; for the genetics of disorders please see: medical genetics. Genetic differences and inheritance patterns Inheritance of traits for humans are based upon Gregor Mendel's model of inheritance. Mendel deduced that inheritance depends upon discrete units of inheritance, called factors or genes. Autosomal dominant inheritance Autosomal traits are associated with a single gene on an autosome (non-sex chromosome)—they are called "dominant" because a single copy—inherited from either parent—is enough to cause this trait to appear. This often means that one of the parents must also have the same trait, unless it has arisen due to an unlikely new mutation. Examples of autosomal dominant traits and disorders are Huntington's disease and achondroplasia. Autosomal recessive inheritance Autosomal recessive traits is one pattern of inheritance for a trait, disease, or disorder to be passed on through families. For a recessive trait or disease to be displayed two copies of the trait or disorder needs to be presented. The trait or gene will be located on a non-sex chromosome. Because it takes two copies of a trait to display a trait, many people can unknowingly be carriers of a disease. From an evolutionary perspective, a recessive disease or trait can remain hidden for several generations before displaying the phenotype. Examples of auto Document 3::: The Encyclopedia of Genetics () is a print encyclopedia of genetics edited by Sydney Brenner and Jeffrey H. Miller. It has four volumes and 1,700 entries. It is available online at http://www.sciencedirect.com/science/referenceworks/9780122270802. Genetics Genetics literature Document 4::: Mendelian traits behave according to the model of monogenic or simple gene inheritance in which one gene corresponds to one trait. Discrete traits (as opposed to continuously varying traits such as height) with simple Mendelian inheritance patterns are relatively rare in nature, and many of the clearest examples in humans cause disorders. Discrete traits found in humans are common examples for teaching genetics. Mendelian model According to the model of Mendelian inheritance, alleles may be dominant or recessive, one allele is inherited from each parent, and only those who inherit a recessive allele from each parent exhibit the recessive phenotype. Offspring with either one or two copies of the dominant allele will display the dominant phenotype. Very few phenotypes are purely Mendelian traits. Common violations of the Mendelian model include incomplete dominance, codominance, genetic linkage, environmental effects, and quantitative contributions from a number of genes (see: gene interactions, polygenic inheritance, oligogenic inheritance). OMIM (Online Mendelian Inheritance in Man) is a comprehensive database of human genotype–phenotype links. Many visible human traits that exhibit high heritability were included in the older McKusick's Mendelian Inheritance in Man. Before the discovery of genotyping, they were used as genetic markers in medicolegal practice, including in cases of disputed paternity. Human traits with probable or uncertain simple inheritance patterns See also Polygenic inheritance Trait Gene interaction Dominance Homozygote Heterozygote The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What kind of conditions are often inherited as simple recessive traits? A. genetic neurotransmitters B. genetic diversivers C. genetic ratios D. genetic disorders Answer:
scienceQA-6691
multiple_choice
Which of the following organisms is the producer in this food web?
[ "bat star", "sea cucumber", "black rockfish", "phytoplankton" ]
D
Producers do not eat other organisms. So, in a food web, producers do not have arrows pointing to them from other organisms. The kelp does not have any arrows pointing to it. So, the kelp is a producer. The black rockfish has an arrow pointing to it, so it is not a producer. The phytoplankton does not have any arrows pointing to it. So, the phytoplankton is a producer. The bat star has an arrow pointing to it, so it is not a producer. The sea cucumber has arrows pointing to it, so it is not a producer.
Relavent Documents: Document 0::: The trophic level of an organism is the position it occupies in a food web. A food chain is a succession of organisms that eat other organisms and may, in turn, be eaten themselves. The trophic level of an organism is the number of steps it is from the start of the chain. A food web starts at trophic level 1 with primary producers such as plants, can move to herbivores at level 2, carnivores at level 3 or higher, and typically finish with apex predators at level 4 or 5. The path along the chain can form either a one-way flow or a food "web". Ecological communities with higher biodiversity form more complex trophic paths. The word trophic derives from the Greek τροφή (trophē) referring to food or nourishment. History The concept of trophic level was developed by Raymond Lindeman (1942), based on the terminology of August Thienemann (1926): "producers", "consumers", and "reducers" (modified to "decomposers" by Lindeman). Overview The three basic ways in which organisms get food are as producers, consumers, and decomposers. Producers (autotrophs) are typically plants or algae. Plants and algae do not usually eat other organisms, but pull nutrients from the soil or the ocean and manufacture their own food using photosynthesis. For this reason, they are called primary producers. In this way, it is energy from the sun that usually powers the base of the food chain. An exception occurs in deep-sea hydrothermal ecosystems, where there is no sunlight. Here primary producers manufacture food through a process called chemosynthesis. Consumers (heterotrophs) are species that cannot manufacture their own food and need to consume other organisms. Animals that eat primary producers (like plants) are called herbivores. Animals that eat other animals are called carnivores, and animals that eat both plants and other animals are called omnivores. Decomposers (detritivores) break down dead plant and animal material and wastes and release it again as energy and nutrients into Document 1::: The soil food web is the community of organisms living all or part of their lives in the soil. It describes a complex living system in the soil and how it interacts with the environment, plants, and animals. Food webs describe the transfer of energy between species in an ecosystem. While a food chain examines one, linear, energy pathway through an ecosystem, a food web is more complex and illustrates all of the potential pathways. Much of this transferred energy comes from the sun. Plants use the sun’s energy to convert inorganic compounds into energy-rich, organic compounds, turning carbon dioxide and minerals into plant material by photosynthesis. Plant flowers exude energy-rich nectar above ground and plant roots exude acids, sugars, and ectoenzymes into the rhizosphere, adjusting the pH and feeding the food web underground. Plants are called autotrophs because they make their own energy; they are also called producers because they produce energy available for other organisms to eat. Heterotrophs are consumers that cannot make their own food. In order to obtain energy they eat plants or other heterotrophs. Above ground food webs In above ground food webs, energy moves from producers (plants) to primary consumers (herbivores) and then to secondary consumers (predators). The phrase, trophic level, refers to the different levels or steps in the energy pathway. In other words, the producers, consumers, and decomposers are the main trophic levels. This chain of energy transferring from one species to another can continue several more times, but eventually ends. At the end of the food chain, decomposers such as bacteria and fungi break down dead plant and animal material into simple nutrients. Methodology The nature of soil makes direct observation of food webs difficult. Since soil organisms range in size from less than 0.1 mm (nematodes) to greater than 2 mm (earthworms) there are many different ways to extract them. Soil samples are often taken using a metal Document 2::: Consumer–resource interactions are the core motif of ecological food chains or food webs, and are an umbrella term for a variety of more specialized types of biological species interactions including prey-predator (see predation), host-parasite (see parasitism), plant-herbivore and victim-exploiter systems. These kinds of interactions have been studied and modeled by population ecologists for nearly a century. Species at the bottom of the food chain, such as algae and other autotrophs, consume non-biological resources, such as minerals and nutrients of various kinds, and they derive their energy from light (photons) or chemical sources. Species higher up in the food chain survive by consuming other species and can be classified by what they eat and how they obtain or find their food. Classification of consumer types The standard categorization Various terms have arisen to define consumers by what they eat, such as meat-eating carnivores, fish-eating piscivores, insect-eating insectivores, plant-eating herbivores, seed-eating granivores, and fruit-eating frugivores and omnivores are meat eaters and plant eaters. An extensive classification of consumer categories based on a list of feeding behaviors exists. The Getz categorization Another way of categorizing consumers, proposed by South African American ecologist Wayne Getz, is based on a biomass transformation web (BTW) formulation that organizes resources into five components: live and dead animal, live and dead plant, and particulate (i.e. broken down plant and animal) matter. It also distinguishes between consumers that gather their resources by moving across landscapes from those that mine their resources by becoming sessile once they have located a stock of resources large enough for them to feed on during completion of a full life history stage. In Getz's scheme, words for miners are of Greek etymology and words for gatherers are of Latin etymology. Thus a bestivore, such as a cat, preys on live animal Document 3::: Agroecology and Sustainable Food Systems is a peer-reviewed scientific journal covering sustainable agriculture. It was established in 1990 as the Journal of Sustainable Agriculture, obtaining its current title in 2013. It is published by Taylor & Francis and the editor-in-chief is Stephen R. Gliessman (University of California, Santa Cruz). Abstracting and indexing The journal is abstracted and indexed in the Science Citation Index Expanded and Scopus. Document 4::: The University of Florida Institute of Food and Agricultural Sciences (UF/IFAS) is a teaching, research and Extension scientific organization focused on agriculture and natural resources. It is a partnership of federal, state, and county governments that includes an Extension office in each of Florida's 67 counties, 12 off-campus research and education centers, five demonstration units, the University of Florida College of Agricultural and Life Sciences (including the School of Forest, Fisheries and Geomatics Sciences and the School of Natural Resources and Environment), three 4-H camps, portions of the UF College of Veterinary Medicine, the Florida Sea Grant program, the Emerging Pathogens Institute, the UF Water Institute and the UF Genetics Institute. UF/IFAS research and development covers natural resource industries that have a $101 billion annual impact. The program is ranked #1 in the nation in federally financed higher education R&D expenditures in agricultural sciences and natural resources conservation by the National Science Foundation for FY 2019. Because of this mission and the diversity of Florida's climate and agricultural commodities, IFAS has facilities located throughout Florida. On July 13, 2020, Dr. J. Scott Angle became leader of UF/IFAS and UF's vice president for agriculture and natural resources. History Research The mission of UF/IFAS is to develop knowledge in agricultural, human, and natural resources, and to make that knowledge accessible to sustain and enhance the quality of human life. Faculty members pursue fundamental and applied research that furthers understanding of natural and human systems. Research is supported by state and federally appropriated funds and supplemented by grants and contracts. UF/IFAS received $155.6 million in annual research expenditures in sponsored research for FY 2021. The Florida Agricultural Experiment Station administers and supports research programs in UF/IFAS. The research program was created in The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which of the following organisms is the producer in this food web? A. bat star B. sea cucumber C. black rockfish D. phytoplankton Answer:
sciq-8728
multiple_choice
What do you call the fast-moving air currents high in the troposphere?
[ "stationary fronts", "jet streams", "wind streams", "cyclones" ]
B
Relavent Documents: Document 0::: Atmospheric circulation of a planet is largely specific to the planet in question and the study of atmospheric circulation of exoplanets is a nascent field as direct observations of exoplanet atmospheres are still quite sparse. However, by considering the fundamental principles of fluid dynamics and imposing various limiting assumptions, a theoretical understanding of atmospheric motions can be developed. This theoretical framework can also be applied to planets within the Solar System and compared against direct observations of these planets, which have been studied more extensively than exoplanets, to validate the theory and understand its limitations as well. The theoretical framework first considers the Navier–Stokes equations, the governing equations of fluid motion. Then, limiting assumptions are imposed to produce simplified models of fluid motion specific to large scale motion atmospheric dynamics. These equations can then be studied for various conditions (i.e. fast vs. slow planetary rotation rate, stably stratified vs. unstably stratified atmosphere) to see how a planet's characteristics would impact its atmospheric circulation. For example, a planet may fall into one of two regimes based on its rotation rate: geostrophic balance or cyclostrophic balance. Atmospheric motions Coriolis force When considering atmospheric circulation we tend to take the planetary body as the frame of reference. In fact, this is a non-inertial frame of reference which has acceleration due to the planet's rotation about its axis. Coriolis force is the force that acts on objects moving within the planetary frame of reference, as a result of the planet's rotation. Mathematically, the acceleration due to Coriolis force can be written as: where is the flow velocity is the planet's angular velocity vector This force acts perpendicular to the flow and velocity and the planet's angular velocity vector, and comes into play when considering the atmospheric motion of a rotat Document 1::: This is a list of meteorology topics. The terms relate to meteorology, the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting. (see also: List of meteorological phenomena) A advection aeroacoustics aerobiology aerography (meteorology) aerology air parcel (in meteorology) air quality index (AQI) airshed (in meteorology) American Geophysical Union (AGU) American Meteorological Society (AMS) anabatic wind anemometer annular hurricane anticyclone (in meteorology) apparent wind Atlantic Oceanographic and Meteorological Laboratory (AOML) Atlantic hurricane season atmometer atmosphere Atmospheric Model Intercomparison Project (AMIP) Atmospheric Radiation Measurement (ARM) (atmospheric boundary layer [ABL]) planetary boundary layer (PBL) atmospheric chemistry atmospheric circulation atmospheric convection atmospheric dispersion modeling atmospheric electricity atmospheric icing atmospheric physics atmospheric pressure atmospheric sciences atmospheric stratification atmospheric thermodynamics atmospheric window (see under Threats) B ball lightning balloon (aircraft) baroclinity barotropity barometer ("to measure atmospheric pressure") berg wind biometeorology blizzard bomb (meteorology) buoyancy Bureau of Meteorology (in Australia) C Canada Weather Extremes Canadian Hurricane Centre (CHC) Cape Verde-type hurricane capping inversion (in meteorology) (see "severe thunderstorms" in paragraph 5) carbon cycle carbon fixation carbon flux carbon monoxide (see under Atmospheric presence) ceiling balloon ("to determine the height of the base of clouds above ground level") ceilometer ("to determine the height of a cloud base") celestial coordinate system celestial equator celestial horizon (rational horizon) celestial navigation (astronavigation) celestial pole Celsius Center for Analysis and Prediction of Storms (CAPS) (in Oklahoma in the US) Center for the Study o Document 2::: In fluid dynamics, a secondary circulation or secondary flow is a weak circulation that plays a key maintenance role in sustaining a stronger primary circulation that contains most of the kinetic energy and momentum of a flow. For example, a tropical cyclone's primary winds are tangential (horizontally swirling), but its evolution and maintenance against friction involves an in-up-out secondary circulation flow that is also important to its clouds and rain. On a planetary scale, Earth's winds are mostly east–west or zonal, but that flow is maintained against friction by the Coriolis force acting on a small north–south or meridional secondary circulation. See also Hough function Primitive equations Secondary flow Document 3::: In meteorology, wind speed, or wind flow speed, is a fundamental atmospheric quantity caused by air moving from high to low pressure, usually due to changes in temperature. Wind speed is now commonly measured with an anemometer. Wind speed affects weather forecasting, aviation and maritime operations, construction projects, growth and metabolism rate of many plant species, and has countless other implications. Wind direction is usually almost parallel to isobars (and not perpendicular, as one might expect), due to Earth's rotation. Units The metre per second (m/s) is the SI unit for velocity and the unit recommended by the World Meteorological Organization for reporting wind speeds, and is amongst others used in weather forecasts in the Nordic countries. Since 2010 the International Civil Aviation Organization (ICAO) also recommends meters per second for reporting wind speed when approaching runways, replacing their former recommendation of using kilometres per hour (km/h). For historical reasons, other units such as miles per hour (mph), knots (kn) or feet per second (ft/s) are also sometimes used to measure wind speeds. Historically, wind speeds have also been classified using the Beaufort scale, which is based on visual observations of specifically defined wind effects at sea or on land. Factors affecting wind speed Wind speed is affected by a number of factors and situations, operating on varying scales (from micro to macro scales). These include the pressure gradient, Rossby waves and jet streams, and local weather conditions. There are also links to be found between wind speed and wind direction, notably with the pressure gradient and terrain conditions. Pressure gradient is a term to describe the difference in air pressure between two points in the atmosphere or on the surface of the Earth. It is vital to wind speed, because the greater the difference in pressure, the faster the wind flows (from the high to low pressure) to balance out the variation. Th Document 4::: Zonal and meridional flow are directions and regions of fluid flow on a globe. Zonal flow follows a pattern along latitudinal lines, latitudinal circles or in the west–east direction. Meridional flow follows a pattern from north to south, or from south to north, along the Earth's longitude lines, longitudinal circles (meridian) or in the north–south direction. These terms are often used in the atmospheric and earth sciences to describe global phenomena, such as "meridional wind", or "zonal average temperature". In the context of physics, zonal flow connotes a tendency of flux to conform to a pattern parallel to the equator of a sphere. In meteorological term regarding atmospheric circulation, zonal flow brings a temperature contrast along the Earth's longitude. Extratropical cyclones in zonal flows tend to be weaker, moving faster and producing relatively little impact on local weather. Extratropical cyclones in meridional flows tend to be stronger and move slower. This pattern is responsible for most instances of extreme weather, as not only are storms stronger in this type of flow regime, but temperatures can reach extremes as well, producing heat waves and cold waves depending on the equator-ward or poleward direction of the flow. For vector fields (such as wind velocity), the zonal component (or x-coordinate) is denoted as u, while the meridional component (or y-coordinate) is denoted as v. In plasma physics Zonal flow (plasma) means poloidal, which is the opposite from the meaning in planetary atmospheres and weather/climate studies. See also Zonal and poloidal Zonal flow (plasma) Meridione Notes Orientation (geometry) The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do you call the fast-moving air currents high in the troposphere? A. stationary fronts B. jet streams C. wind streams D. cyclones Answer:
sciq-5361
multiple_choice
The electron transport chains are located on the inner membrane of which organelle?
[ "chloroplast", "lysosome", "mitochondrion", "axon" ]
C
Relavent Documents: Document 0::: Chloroplasts contain several important membranes, vital for their function. Like mitochondria, chloroplasts have a double-membrane envelope, called the chloroplast envelope, but unlike mitochondria, chloroplasts also have internal membrane structures called thylakoids. Furthermore, one or two additional membranes may enclose chloroplasts in organisms that underwent secondary endosymbiosis, such as the euglenids and chlorarachniophytes. The chloroplasts come via endosymbiosis by engulfment of a photosynthetic cyanobacterium by the eukaryotic, already mitochondriate cell. Over millions of years the endosymbiotic cyanobacterium evolved structurally and functionally, retaining its own DNA and the ability to divide by binary fission (not mitotically) but giving up its autonomy by the transfer of some of its genes to the nuclear genome. Envelope membranes Each of the envelope membranes is a lipid bilayer that is between 6 and 8 nm thick. The lipid composition of the outer membrane has been found to be 48% phospholipids, 46% galactolipids and 7% sulfolipids, while the inner membrane has been found to contain 16% phospholipids, 79% galactolipids and 5% sulfolipids in spinach chloroplasts. The outer membrane is permeable to most ions and metabolites, but the inner membrane of the chloroplast is highly specialised with transport proteins. For example, carbohydrates are transported across the inner envelope membrane by a triose phosphate translocator. The two envelope membranes are separated by a gap of 10–20 nm, called the intermembrane space. Thylakoid membrane Within the envelope membranes, in the region called the stroma, there is a system of interconnecting flattened membrane compartments, called the thylakoids. The thylakoid membrane is quite similar in lipid composition to the inner envelope membrane, containing 78% galactolipids, 15.5% phospholipids and 6.5% sulfolipids in spinach chloroplasts. The thylakoid membrane encloses a single, continuous aqueous compartme Document 1::: The intermembrane space (IMS) is the space occurring between or involving two or more membranes. In cell biology, it is most commonly described as the region between the inner membrane and the outer membrane of a mitochondrion or a chloroplast. It also refers to the space between the inner and outer nuclear membranes of the nuclear envelope, but is often called the perinuclear space. The IMS of mitochondria plays a crucial role in coordinating a variety of cellular activities, such as regulation of respiration and metabolic functions. Unlike the IMS of the mitochondria, the IMS of the chloroplast does not seem to have any obvious function. Intermembrane space of mitochondria Mitochondria are surrounded by two membranes; the inner and outer mitochondrial membranes. These two membranes allow the formation of two aqueous compartments, which are the intermembrane space (IMS) and the matrix. Channel proteins called porins in the outer membrane allow free diffusion of ions and small proteins about 5000 daltons or less into the IMS. This makes the IMS chemically equivalent to the cytosol regarding the small molecules it contains. By contrast, specific transport proteins are required to transport ions and other small molecules across the inner mitochondrial membrane into the matrix due to its impermeability. The IMS also contains many enzymes that use the ATP moving out of the matrix to phosphorylate other nucleotides and proteins that initiate apoptosis. Translocation Most of proteins destined for the mitochondrial matrix are synthesized as precursors in the cytosol and are imported into the mitochondria by the translocase of the outer membrane (TOM) and the translocase of the inner membrane (TIM). The IMS is involved in the mitochondrial protein translocation. The precursor proteins called small TIM chaperones which are hexameric complexes are located in the IMS and they bind hydrophobic precursor proteins and delivery the precursors to the TIM. Oxidative phosphoryl Document 2::: Paracellular transport refers to the transfer of substances across an epithelium by passing through the intercellular space between the cells. It is in contrast to transcellular transport, where the substances travel through the cell, passing through both the apical membrane and basolateral membrane. The distinction has particular significance in renal physiology and intestinal physiology. Transcellular transport often involves energy expenditure whereas paracellular transport is unmediated and passive down a concentration gradient, or by osmosis (for water) and solvent drag for solutes. Paracellular transport also has the benefit that absorption rate is matched to load because it has no transporters that can be saturated. In most mammals, intestinal absorption of nutrients is thought to be dominated by transcellular transport, e.g., glucose is primarily absorbed via the SGLT1 transporter and other glucose transporters. Paracellular absorption therefore plays only a minor role in glucose absorption, although there is evidence that paracellular pathways become more available when nutrients are present in the intestinal lumen. In contrast, small flying vertebrates (small birds and bats) rely on the paracellular pathway for the majority of glucose absorption in the intestine. This has been hypothesized to compensate for an evolutionary pressure to reduce mass in flying animals, which resulted in a reduction in intestine size and faster transit time of food through the gut. Capillaries of the blood–brain barrier have only transcellular transport, in contrast with normal capillaries which have both transcellular and paracellular transport. The paracellular pathway of transport is also important for the absorption of drugs in the gastrointestinal tract. The paracellular pathway allows the permeation of hydrophilic molecules that are not able to permeate through the lipid membrane by the transcellular pathway of absorption. This is particularly important for hydrophi Document 3::: Intraflagellar transport (IFT) is a bidirectional motility along axoneme microtubules that is essential for the formation (ciliogenesis) and maintenance of most eukaryotic cilia and flagella. It is thought to be required to build all cilia that assemble within a membrane projection from the cell surface. Plasmodium falciparum cilia and the sperm flagella of Drosophila are examples of cilia that assemble in the cytoplasm and do not require IFT. The process of IFT involves movement of large protein complexes called IFT particles or trains from the cell body to the ciliary tip and followed by their return to the cell body. The outward or anterograde movement is powered by kinesin-2 while the inward or retrograde movement is powered by cytoplasmic dynein 2/1b. The IFT particles are composed of about 20 proteins organized in two subcomplexes called complex A and B. IFT was first reported in 1993 by graduate student Keith Kozminski while working in the lab of Dr. Joel Rosenbaum at Yale University. The process of IFT has been best characterized in the biflagellate alga Chlamydomonas reinhardtii as well as the sensory cilia of the nematode Caenorhabditis elegans. It has been suggested based on localization studies that IFT proteins also function outside of cilia. Biochemistry Intraflagellar transport (IFT) describes the bi-directional movement of non-membrane-bound particles along the doublet microtubules of the flagellar, and motile cilia axoneme, between the axoneme and the plasma membrane. Studies have shown that the movement of IFT particles along the microtubule is carried out by two different microtubule motors; the anterograde (towards the flagellar tip) motor is heterotrimeric kinesin-2, and the retrograde (towards the cell body) motor is cytoplasmic dynein 1b. IFT particles carry axonemal subunits to the site of assembly at the tip of the axoneme; thus, IFT is necessary for axonemal growth. Therefore, since the axoneme needs a continually fresh supply of prote Document 4::: Passive transport is a type of membrane transport that does not require energy to move substances across cell membranes. Instead of using cellular energy, like active transport, passive transport relies on the second law of thermodynamics to drive the movement of substances across cell membranes. Fundamentally, substances follow Fick's first law, and move from an area of high concentration to an area of low concentration because this movement increases the entropy of the overall system. The rate of passive transport depends on the permeability of the cell membrane, which, in turn, depends on the organization and characteristics of the membrane lipids and proteins. The four main kinds of passive transport are simple diffusion, facilitated diffusion, filtration, and/or osmosis. Passive transport follows Fick's first law. Diffusion Diffusion is the net movement of material from an area of high concentration to an area with lower concentration. The difference of concentration between the two areas is often termed as the concentration gradient, and diffusion will continue until this gradient has been eliminated. Since diffusion moves materials from an area of higher concentration to an area of lower concentration, it is described as moving solutes "down the concentration gradient" (compared with active transport, which often moves material from area of low concentration to area of higher concentration, and therefore referred to as moving the material "against the concentration gradient"). However, in many cases (e.g. passive drug transport) the driving force of passive transport can not be simplified to the concentration gradient. If there are different solutions at the two sides of the membrane with different equilibrium solubility of the drug, the difference in the degree of saturation is the driving force of passive membrane transport. It is also true for supersaturated solutions which are more and more important owing to the spreading of the application of amorph The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The electron transport chains are located on the inner membrane of which organelle? A. chloroplast B. lysosome C. mitochondrion D. axon Answer:
sciq-10708
multiple_choice
Developing cars that run on hydrogen gas can help solve our dependence on what?
[ "nonrenewable fossil fuels", "oxygen", "water", "food" ]
A
Relavent Documents: Document 0::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 1::: The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields. Description The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions. The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.” Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers. Current efforts The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 3::: The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work. History It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council. Function Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres. STEM ambassadors To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. these come from a wide selection of the STEM industries and include TV personalities like Rob Bell. Funding STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments. See also The WISE Campaign Engineering and Physical Sciences Research Council National Centre for Excellence in Teaching Mathematics Association for Science Education Glossary of areas of mathematics Glossary of astronomy Glossary of biology Glossary of chemistry Glossary of engineering Glossary of physics Document 4::: Electrochemical energy conversion is a field of energy technology concerned with electrochemical methods of energy conversion including fuel cells and photoelectrochemical. This field of technology also includes electrical storage devices like batteries and supercapacitors. It is increasingly important in context of automotive propulsion systems. There has been the creation of more powerful, longer running batteries allowing longer run times for electric vehicles. These systems would include the energy conversion fuel cells and photoelectrochemical mentioned above. See also Bioelectrochemical reactor Chemotronics Electrochemical cell Electrochemical engineering Electrochemical reduction of carbon dioxide Electrofuels Electrohydrogenesis Electromethanogenesis Enzymatic biofuel cell Photoelectrochemical cell Photoelectrochemical reduction of CO2 Notes External links International Journal of Energy Research MSAL NIST scientific journal article Georgia tech Electrochemistry Electrochemical engineering Energy engineering Energy conversion Biochemical engineering The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Developing cars that run on hydrogen gas can help solve our dependence on what? A. nonrenewable fossil fuels B. oxygen C. water D. food Answer:
sciq-3312
multiple_choice
What is the ph of pure water?
[ "5", "4", "2", "7" ]
D
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices". This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions. Topic outline The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area: The course is based on and tests six skills, called scientific practices which include: In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions. Exam Students are allowed to use a four-function, scientific, or graphing calculator. The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score. Score distribution Commonly used textbooks Biology, AP Edition by Sylvia Mader (2012, hardcover ) Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, ) Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson ) See also Glossary of biology A.P Bio (TV Show) Document 2::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 3::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a Document 4::: The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered. The 1995 version has 30 five-way multiple choice questions. Example question (question 4): Gender differences The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the ph of pure water? A. 5 B. 4 C. 2 D. 7 Answer:
sciq-9237
multiple_choice
The hamstrings flex the leg, whereas the quadriceps femoris have what effect?
[ "contract", "stabilize", "no effect", "extend" ]
D
Relavent Documents: Document 0::: Reciprocal inhibition describes the relaxation of muscles on one side of a joint to accommodate contraction on the other side. In some allied health disciplines, this is known as reflexive antagonism. The central nervous system sends a message to the agonist muscle to contract. The tension in the antagonist muscle is activated by impulses from motor neurons, causing it to relax. Mechanics Joints are controlled by two opposing sets of muscles called extensors and flexors, that work in synchrony for smooth movement. When a muscle spindle is stretched, the stretch reflex is activated, and the opposing muscle group must be inhibited to prevent it from working against the contraction of the homonymous muscle. This inhibition is accomplished by the actions of an inhibitor interneuron in the spinal cord. The afferent of the muscle spindle bifurcates in the spinal cord. One branch innervates the alpha motor neuron that causes the homonymous muscle to contract, producing the reflex. The other branch innervates the inhibitory interneuron, which then innervates the alpha motor neuron that synapses onto the opposing muscle. Because the interneuron is inhibitory, it prevents the opposing alpha motor neuron from firing, thereby reducing the contraction of the opposing muscle. Without this reciprocal inhibition, both groups of muscles might contract simultaneously and work against each other. If opposing muscles were to contract at the same time, a muscle tear can occur. This may occur during physical activities such as running, during which opposing muscles engage and disengage sequentially to produce coordinated movement. Reciprocal inhibition facilitates ease of movement and is a safeguard against injury. However, if a "misfiring" of motor neurons occurs, causing simultaneous contraction of opposing muscles, a tear can occur. For example, if the quadriceps femoris and hamstring contract simultaneously at a high intensity, the stronger muscle (traditionally the quadriceps) Document 1::: In an isotonic contraction, tension remains the same, whilst the muscle's length changes. Isotonic contractions differ from isokinetic contractions in that in isokinetic contractions the muscle speed remains constant. While superficially identical, as the muscle's force changes via the length-tension relationship during a contraction, an isotonic contraction will keep force constant while velocity changes, but an isokinetic contraction will keep velocity constant while force changes. A near isotonic contraction is known as Auxotonic contraction. There are two types of isotonic contractions: (1) concentric and (2) eccentric. In a concentric contraction, the muscle tension rises to meet the resistance, then remains the same as the muscle shortens. In eccentric, the muscle lengthens due to the resistance being greater than the force the muscle is producing. Concentric This type is typical of most exercise. The external force on the muscle is less than the force the muscle is generating - a shortening contraction. The effect is not visible during the classic biceps curl, which is in fact auxotonic because the resistance (torque due to the weight being lifted) does not remain the same through the exercise. Tension is highest at a parallel to the floor level, and eases off above and below this point. Therefore, tension changes as well as muscle length. Eccentric There are two main features to note regarding eccentric contractions. First, the absolute tensions achieved can be very high relative to the muscle's maximum tetanic tension generating capacity (you can set down a much heavier object than you can lift). Second, the absolute tension is relatively independent of lengthening velocity. Muscle injury and soreness are selectively associated with eccentric contraction. Muscle strengthening using exercises that involve eccentric contractions is lower than using concentric exercises. However because higher levels of tension are easier to attain during exercises th Document 2::: Normal aging movement control in humans is about the changes in the muscles, motor neurons, nerves, sensory functions, gait, fatigue, visual and manual responses, in men and women as they get older but who do not have neurological, muscular (atrophy, dystrophy...) or neuromuscular disorder. With aging, neuromuscular movements are impaired, though with training or practice, some aspects may be prevented. Force production For voluntary force production, action potentials occur in the cortex. They propagate in the spinal cord, the motor neurons and the set of muscle fibers they innervate. This results in a twitch which properties are driven by two mechanisms: motor unit recruitment and rate coding. Both mechanisms are affected with aging. For instance, the number of motor units may decrease, the size of the motor units, i.e. the number of muscle fibers they innervate may increase, the frequency at which the action potentials are triggered may be reduced. Consequently, force production is generally impaired in old adults. Aging is associated with decreases in muscle mass and strength. These decreases may be partially due to losses of alpha motor neurons. By the age of 70, these losses occur in both proximal and distal muscles. In biceps brachii and brachialis, old adults show decreased strength (by 1/3) correlated with a reduction in the number of motor units (by 1/2). Old adults show evidence that remaining motor units may become larger as motor units innervate collateral muscle fibers. In first dorsal interosseus, almost all motor units are recruited at moderate rate coding, leading to 30-40% of maximal voluntary contraction (MVC). Motor unit discharge rates measured at 50% MVC are not significantly different in the young subjects from those observed in the old adults. However, for the maximal effort contractions, there is an appreciable difference in discharge rates between the two age groups. Discharge rates obtained at 100% of MVC are 64% smaller in the old adul Document 3::: A stretch-shortening cycle (SSC) is an active stretch (eccentric contraction) of a muscle followed by an immediate shortening (concentric contraction) of that same muscle. Research studies The increased performance benefit associated with muscle contractions that take place during SSCs has been the focus of much research in order to determine the true nature of this enhancement. At present, there is some debate as to where and how this performance enhancement takes place. It has been postulated that elastic structures in series with the contractile component can store energy like a spring after being forcibly stretched. Since the length of the tendon increases due to the active stretch phase, if the series elastic component acts as a spring, it would therefore be storing more potential energy. This energy would be released as the tendon shortened. Thus, the recoil of the tendon during the shortening phase of the movement would result in a more efficient movement than one in which no energy had been stored. This research is further supported by Roberts et al. However, other studies have found that removing portions of these series-elastic components (by way of tendon length reduction) had little effect on muscle performance. Studies on turkeys have, nevertheless, shown that during SSC, a performance enhancement associated with elastic energy storage still takes place but it is thought that the aponeurosis could be a major source of energy storage (Roleveld et al., 1994). The contractile component itself has also been associated with the ability to increase contractile performance through muscle potentiation while other studies have found that this ability is quite limited and unable to account for such enhancements (Lensel and Goubel, 1987, Lensel-Corbeil and Goubel, 1990; Ettema and Huijing, 1989). Community agreement The results of these often contradictory studies have been associated with improved efficiencies for human or animal movements such as counter Document 4::: In biomechanics, Hill's muscle model refers to the 3-element model consisting of a contractile element (CE) in series with a lightly-damped elastic spring element (SE) and in parallel with lightly-damped elastic parallel element (PE). Within this model, the estimated force-velocity relation for the CE element is usually modeled by what is commonly called Hill's equation, which was based on careful experiments involving tetanized muscle contraction where various muscle loads and associated velocities were measured. They were derived by the famous physiologist Archibald Vivian Hill, who by 1938 when he introduced this model and equation had already won the Nobel Prize for Physiology. He continued to publish in this area through 1970. There are many forms of the basic "Hill-based" or "Hill-type" models, with hundreds of publications having used this model structure for experimental and simulation studies. Most major musculoskeletal simulation packages make use of this model. AV Hill's force-velocity equation for tetanized muscle This is a popular state equation applicable to skeletal muscle that has been stimulated to show Tetanic contraction. It relates tension to velocity with regard to the internal thermodynamics. The equation is where is the tension (or load) in the muscle is the velocity of contraction is the maximum isometric tension (or load) generated in the muscle coefficient of shortening heat is the maximum velocity, when Although Hill's equation looks very much like the van der Waals equation, the former has units of energy dissipation, while the latter has units of energy. Hill's equation demonstrates that the relationship between F and v is hyperbolic. Therefore, the higher the load applied to the muscle, the lower the contraction velocity. Similarly, the higher the contraction velocity, the lower the tension in the muscle. This hyperbolic form has been found to fit the empirical constant only during isotonic contractions near resting The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The hamstrings flex the leg, whereas the quadriceps femoris have what effect? A. contract B. stabilize C. no effect D. extend Answer:
sciq-5869
multiple_choice
What's the name for a cluster of genes, where one promoter serves adjacent genes?
[ "opteron", "nucleus", "plasma", "proton" ]
A
Relavent Documents: Document 0::: Distal promoter elements are regulatory DNA sequences that can be many kilobases distant from the gene that they regulate. They can either be enhancers (increasing expression) or silencers (decreasing expression). They act by binding activator or repressor proteins (transcription factors) and the intervening DNA bends such that the bound proteins contact the core promoter and RNA polymerase. Document 1::: Genetics (from Ancient Greek , “genite” and that from , “origin”), a discipline of biology, is the science of heredity and variation in living organisms. Articles (arranged alphabetically) related to genetics include: # A B C D E F G H I J K L M N O P Q R S T U V W X Y Z Document 2::: Major gene is a gene with pronounced phenotype expression, in contrast to a modifier gene. Major gene characterizes common expression of oligogenic series, i.e. a small number of genes that determine the same trait. Major genes control the discontinuous or qualitative characters in contrast of minor genes or polygenes with individually small effects. Major genes segregate and may be easily subject to mendelian analysis. The gene categorization into major and minor determinants is more or less arbitrary. Both of the two types are in all probability only end points in a more or less continuous series of gene action and gene interactions. The term major gene was introduced into the science of inheritance by Keneth Mather (1941). See also Gene interaction Minor gene Gene Document 3::: In molecular biology and genetics, transcriptional regulation is the means by which a cell regulates the conversion of DNA to RNA (transcription), thereby orchestrating gene activity. A single gene can be regulated in a range of ways, from altering the number of copies of RNA that are transcribed, to the temporal control of when the gene is transcribed. This control allows the cell or organism to respond to a variety of intra- and extracellular signals and thus mount a response. Some examples of this include producing the mRNA that encode enzymes to adapt to a change in a food source, producing the gene products involved in cell cycle specific activities, and producing the gene products responsible for cellular differentiation in multicellular eukaryotes, as studied in evolutionary developmental biology. The regulation of transcription is a vital process in all living organisms. It is orchestrated by transcription factors and other proteins working in concert to finely tune the amount of RNA being produced through a variety of mechanisms. Bacteria and eukaryotes have very different strategies of accomplishing control over transcription, but some important features remain conserved between the two. Most importantly is the idea of combinatorial control, which is that any given gene is likely controlled by a specific combination of factors to control transcription. In a hypothetical example, the factors A and B might regulate a distinct set of genes from the combination of factors A and C. This combinatorial nature extends to complexes of far more than two proteins, and allows a very small subset (less than 10%) of the genome to control the transcriptional program of the entire cell. In bacteria Much of the early understanding of transcription came from bacteria, although the extent and complexity of transcriptional regulation is greater in eukaryotes. Bacterial transcription is governed by three main sequence elements: Promoters are elements of DNA that may bind Document 4::: The Oxford Centre for Gene Function is a multidisciplinary research institute in the University of Oxford, England. It is directed by Frances Ashcroft, Kay Davies and Peter Donnelly. It involves the departments of Human anatomy and genetics, Physiology, and Statistics. External links Oxford Centre for Gene Function website Wellcome Trust Centre for Human Genetics Departments of the University of Oxford Genetics in the United Kingdom Human genetics Research institutes in Oxford The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What's the name for a cluster of genes, where one promoter serves adjacent genes? A. opteron B. nucleus C. plasma D. proton Answer: