| id (string, 6–15 chars) | question_type (string, 1 distinct value) | question (string, 15–683 chars) | choices (list of 4) | answer (string, 5 distinct values) | explanation (string, 481 distinct values) | prompt (string, 1.75k–10.9k chars) |
|---|---|---|---|---|---|---|
sciq-6593
|
multiple_choice
|
Yogurt is made with milk fermented with what?
|
[
"bacteria",
"viruses",
"disease",
"pathogens"
] |
A
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
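A brief justification of the intended answer ("decreases"), added for reference under the assumption of a reversible adiabatic expansion of an ideal gas:

$$TV^{\gamma-1} = \text{constant}, \qquad \gamma > 1,$$

so as the volume $V$ increases during the expansion, the temperature $T$ must decrease.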
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Milk fat globule membrane (MFGM) is a complex and unique structure composed primarily of lipids and proteins that surrounds milk fat globule secreted from the milk producing cells of humans and other mammals. It is a source of multiple bioactive compounds, including phospholipids, glycolipids, glycoproteins, and carbohydrates that have important functional roles within the brain and gut.
Preclinical studies have demonstrated effects of MFGM-derived bioactive components on brain structure and function, intestinal development, and immune defense. Similarly, pediatric clinical trials have reported beneficial effects on cognitive and immune outcomes. In populations ranging from premature infants to preschool-age children, dietary supplementation with MFGM or its components has been associated with improvements in cognition and behavior, gut and oral bacterial composition, fever incidence, and infectious outcomes including diarrhea and otitis media.
MFGM may also play a role in supporting cardiovascular health by modulating cholesterol and fat uptake. Clinical trials in adult populations have shown that MFGM can positively affect markers associated with cardiovascular disease, lowering serum cholesterol and triacylglycerol levels as well as blood pressure.
Origin
MFGM secretion process in milk
Milk lipids are secreted in a unique manner by lactocytes, which are specialized epithelial cells within the alveoli of the lactating mammary gland.
The process takes place in multiple stages. First, fat synthesized within the endoplasmic reticulum accumulates in droplets between the inner and outer phospholipid monolayers of the endoplasmic reticulum membrane. As these droplets increase in size, the two monolayers separate further and eventually pinch off. This leads to the surrounding of the droplet in a phospholipid monolayer that allows it to disperse within the aqueous cytoplasm. In the next stage, lipid droplets then migrate to the apical surface of the cell,
Document 2:::
MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States.
Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students; the remainder require a subscription. The service is currently suspended with the message:
"Please check back with us in 2017".
External links
MicrobeLibrary
Microbiology
Document 3:::
Wild Fermentation: The Flavor, Nutrition, and Craft of Live-Culture Foods is a 2003 book by Sandor Katz that discusses the ancient practice of fermentation. While most of the conventional literature assumes the use of modern technology, Wild Fermentation focuses more on the practice and culture of fermenting food.
The term "wild fermentation" refers to the reliance on naturally occurring bacteria and yeast to ferment food. For example, conventional bread making requires the use of a commercial, highly specialized yeast, while wild-fermented bread relies on naturally occurring cultures that are found on the flour, in the air, and so on. Similarly, the book's instructions on sauerkraut require only cabbage and salt, relying on the cultures that naturally exist on the vegetable to perform the fermentation.
The book also discusses some foods that are not, strictly speaking, wild ferments such as miso, yogurt, kefir, and nattō.
Beyond food, the book includes some discussion of social, personal, and political issues, such as the legality of raw milk cheeses in the United States.
Newsweek has referred to Wild Fermentation as the "fermentation bible".
Document 4:::
The Institut de technologie agroalimentaire (ITA) is a collegial institute specialized in agricultural technology and food production in Quebec, Canada. The institution is composed of two campuses, one in Saint-Hyacinthe and the other in La Pocatière. The institution is managed by the Ministère de l'Agriculture, des Pêcheries et de l'Alimentation du Québec (MAPAQ).
History
The origins of the ITA date back to the 19th century. The first francophone school of agriculture was founded in 1859 in Sainte-Anne-de-la-Pocatière, while the dairy school in Saint-Hyacinthe was created in 1892, the first such institution in North America.
In 1962, the Ministry of Agriculture, Fisheries and Food of Quebec (known today in French as the Ministère de l'Agriculture, des Pêcheries et de l'Alimentation, and in 1962 as the Ministère de l'Agriculture et de la Colonisation) formed the Instituts de technologie agroalimentaire. While the La Pocatière campus was an extension of the Faculty of Agronomy of Université Laval, the Saint-Hyacinthe campus was originally a dairy school founded in 1892.
Training programs
The ITA offers a total of eight CEGEP-level training programs, which lead to a Quebec Diploma of College Studies. Most programs are offered at both campuses. They include:
Gestion et technologies d'entreprise agricole
Gestion et technologies d'entreprise agricole : Profils en production animale biologique
Technologie des productions animales
Paysage et commercialisation en horticulture ornementale
Technologie de la production horticole agroenvironnementale
Technologie du génie agromécanique
Technologie des procédés et de la qualité des aliments
Techniques équines
The ITA's programs listed above allow graduates to pursue university-level studies in related fields such as agronomy, agricultural economics, agricultural engineering, food engineering, biology, food science, and landscape architecture, amongst others.
The ITA also offers one training program in equine massage therapy,
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Yogurt is made with milk fermented with what?
A. bacteria
B. viruses
C. disease
D. pathogens
Answer:
|
|
sciq-4194
|
multiple_choice
|
What term is used to describe organelles that are found only in animal cells?
|
[
"centrioles",
"fibrils",
"anticlines",
"acids"
] |
A
|
Relevant Documents:
Document 0:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, a double-stranded macromolecule that carries the hereditary information of the cell, is found in all living cells; each cell carries chromosomes with a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 1:::
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'.
Cells can acquire specialized functions and carry out various tasks such as replication, DNA repair, protein synthesis, and motility. Cells are capable of specialization and movement.
Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms.
The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology.
Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago.
Discovery
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i
Document 2:::
Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure.
General characteristics
There are two types of cells: prokaryotes and eukaryotes.
Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles.
Prokaryotes
Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement).
Eukaryotes
Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str
Document 3:::
In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from cells of the same type that act together in a function. Tissues of different types combine to form an organ, which has a specific function. The intestinal wall, for example, is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system.
An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs.
The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body.
Animals
Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. The sam
Document 4:::
Like the nucleus, whether to include the vacuole in the protoplasm concept is controversial.
Terminology
Besides "protoplasm", many other related terms and distinctions were used for the cell contents over time. These were as follows:
Urschleim (Oken, 1802, 1809),
Protoplasma (Purkinje, 1840, von Mohl, 1846),
Primordialschlauch (primordial utricle, von Mohl, 1846),
sarcode (Dujardin, 1835, 1841),
Cytoplasma (Kölliker, 1863),
Hautschicht/Körnerschicht (ectoplasm/endoplasm, Pringsheim, 1854; Hofmeister, 1867),
Grundsubstanz (ground substance, Cienkowski, 1863),
metaplasm/protoplasm (Hanstein, 1868),
deutoplasm/protoplasm (van Beneden, 1870),
bioplasm (Beale, 1872),
paraplasm/protoplasm (Kupffer, 1875),
inter-filar substance theory (Velten, 1876)
Hyaloplasma (Pfeffer, 1877),
Protoplast (Hanstein, 1880),
Enchylema/Hyaloplasma (Hanstein, 1880),
Kleinkörperchen or Mikrosomen (small bodies or microsomes, Hanstein, 1882),
paramitome (Flemming, 1882),
Idioplasma (Nageli, 1884),
Zwischensu
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What term is used to describe organelles that are found only in animal cells?
A. centrioles
B. fibrils
C. anticlines
D. acids
Answer:
|
|
scienceQA-1704
|
multiple_choice
|
Select the solid.
|
[
"fruit punch",
"garbage can",
"wet paint",
"vinegar"
] |
B
|
Wet paint is a liquid. A liquid takes the shape of any container it is in. If you pour wet paint out of a can, the paint will change shape. But the wet paint will still take up the same amount of space.
Fruit punch is a liquid. A liquid takes the shape of any container it is in. If you pour fruit punch into a cup, the punch will take the shape of the cup. But the punch will still take up the same amount of space.
A garbage can is a solid. A solid has a size and shape of its own. You can open or close a garbage can. But it will still have a size and shape of its own.
Vinegar is a liquid. A liquid takes the shape of any container it is in. If you pour vinegar into a different container, the vinegar will take the shape of that container. But the vinegar will still take up the same amount of space.
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
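To make the feasibility structure concrete, here is a minimal Python sketch (the skill names and states are hypothetical, not taken from the article) that checks whether a family of knowledge states contains the empty state and the full domain and is closed under union:

```python
from itertools import combinations

def is_knowledge_space(states, domain):
    """Check the knowledge-space axioms for a family of states over `domain`:
    it must contain the empty state and the full domain, and the union of
    any two feasible states must itself be feasible."""
    states = set(states)
    if frozenset() not in states or frozenset(domain) not in states:
        return False
    return all(a | b in states for a, b in combinations(states, 2))

# Hypothetical example: a simple chain of prerequisite skills.
Q = frozenset({"counting", "addition", "multiplication"})
states = [frozenset(),
          frozenset({"counting"}),
          frozenset({"counting", "addition"}),
          Q]
print(is_knowledge_space(states, Q))  # True
```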
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 3:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 4:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Select the solid.
A. fruit punch
B. garbage can
C. wet paint
D. vinegar
Answer:
|
sciq-6559
|
multiple_choice
|
What type of pollutants enter the air directly?
|
[
"secondary",
"liquid",
"primary",
"carbon"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 2:::
Activated carbon, also called activated charcoal, is a form of carbon commonly used to filter contaminants from water and air, among many other uses. It is processed (activated) to have small, low-volume pores that increase the surface area available for adsorption (which is not the same as absorption) or chemical reactions. Activation is analogous to making popcorn from dried corn kernels: popcorn is light, fluffy, and its kernels have a high surface-area-to-volume ratio. Activated is sometimes replaced by active.
Due to its high degree of microporosity, one gram of activated carbon has a surface area of well over a thousand square metres, as determined by gas adsorption; charcoal, before activation, has a specific surface area of only a few square metres per gram. An activation level sufficient for useful application may be obtained solely from high surface area. Further chemical treatment often enhances adsorption properties.
Activated carbon is usually derived from waste products such as coconut husks; waste from paper mills has been studied as a source. These bulk sources are converted into charcoal before being 'activated'. When derived from coal it is referred to as activated coal. Activated coke is derived from coke.
Uses
Activated carbon is used in methane and hydrogen storage, air purification, capacitive deionization, supercapacitive swing adsorption, solvent recovery, decaffeination, gold purification, metal extraction, water purification, medicine, sewage treatment, air filters in respirators, filters in compressed air, teeth whitening, production of hydrogen chloride, edible electronics, and many other applications.
Industrial
One major industrial application involves use of activated carbon in metal finishing for purification of electroplating solutions. For example, it is the main purification technique for removing organic impurities from bright nickel plating solutions. A variety of organic chemicals are added to plating solutions for improving their deposit qualities and for enhancing
Document 3:::
Biofiltration is a pollution control technique using a bioreactor containing living material to capture and biologically degrade pollutants. Common uses include processing waste water, capturing harmful chemicals or silt from surface runoff, and microbiotic oxidation of contaminants in air. Industrial biofiltration can be classified as the process of utilizing biological oxidation to remove volatile organic compounds, odors, and hydrocarbons.
Examples of biofiltration
Examples of biofiltration include:
Bioswales, biostrips, biobags, bioscrubbers, Vermifilters and trickling filters
Constructed wetlands and natural wetlands
Slow sand filters
Treatment ponds
Green belts
Green walls
Riparian zones, riparian forests, bosques
Bivalve bioaccumulation
Control of air pollution
When applied to air filtration and purification, biofilters use microorganisms to remove air pollution.
The air flows through a packed bed and the pollutant transfers into a thin biofilm on the surface of the packing material. Microorganisms, including bacteria and fungi are immobilized in the biofilm and degrade the pollutant. Trickling filters and bioscrubbers rely on a biofilm and the bacterial action in their recirculating waters.
The technology finds the greatest application in treating malodorous compounds and volatile organic compounds (VOCs). Industries employing the technology include food and animal products, off-gas from wastewater treatment facilities, pharmaceuticals, wood products manufacturing, paint and coatings application and manufacturing and resin manufacturing and application, etc. Compounds treated are typically mixed VOCs and various sulfur compounds, including hydrogen sulfide. Very large airflows may be treated and although a large area (footprint) has typically been required—a large biofilter (>200,000 acfm) may occupy as much or more land than a football field—this has been one of the principal drawbacks of the technology. Since the early 1990s, engineered biofil
Document 4:::
Bioremediation broadly refers to any process wherein a biological system (typically bacteria, microalgae, fungi in mycoremediation, and plants in phytoremediation), living or dead, is employed for removing environmental pollutants from air, water, soil, flue gasses, industrial effluents etc., in natural or artificial settings. The natural ability of organisms to adsorb, accumulate, and degrade common and emerging pollutants has attracted the use of biological resources in treatment of contaminated environment. In comparison to conventional physicochemical treatment methods bioremediation may offer considerable advantages as it aims to be sustainable, eco-friendly, cheap, and scalable.
Most bioremediation is inadvertent, involving native organisms. Research on bioremediation is heavily focused on stimulating the process by inoculation of a polluted site with organisms or supplying nutrients to promote the growth. In principle, bioremediation could be used to reduce the impact of byproducts created from anthropogenic activities, such as industrialization and agricultural processes. Bioremediation could prove less expensive and more sustainable than other remediation alternatives.
UNICEF, power producers, bulk water suppliers and local governments are early adopters of low cost bioremediation, such as aerobic bacteria tablets which are simply dropped into water.
While organic pollutants are susceptible to biodegradation, heavy metals are not degraded, but rather oxidized or reduced. Typical bioremediation involves oxidation. Oxidation enhances the water-solubility of organic compounds and their susceptibility to further degradation by further oxidation and hydrolysis. Ultimately biodegradation converts hydrocarbons to carbon dioxide and water. For heavy metals, bioremediation offers few solutions. Metal-containing pollutants can be removed or reduced with varying bioremediation techniques. The main challenge to bioremediation is rate: the processes are slow.
B
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of pollutants enter the air directly?
A. secondary
B. liquid
C. primary
D. carbon
Answer:
|
|
sciq-8435
|
multiple_choice
|
What does fog consist of?
|
[
"steam",
"helium",
"carbon monoxide",
"droplets of water"
] |
D
|
Relevant Documents:
Document 0:::
Haze is traditionally an atmospheric phenomenon in which dust, smoke, and other dry particulates suspended in air obscure visibility and the clarity of the sky. The World Meteorological Organization manual of codes includes a classification of particulates causing horizontal obscuration into categories of fog, ice fog, steam fog, mist, haze, smoke, volcanic ash, dust, sand, and snow. Sources for particles that cause haze include farming (ploughing in dry weather), traffic, industry, windy weather, volcanic activity and wildfires.
Seen from afar (e.g. an approaching airplane) and depending on the direction of view with respect to the Sun, haze may appear brownish or bluish, while mist tends to be bluish grey instead. Whereas haze often is considered a phenomenon occurring in dry air, mist formation is a phenomenon in saturated, humid air. However, haze particles may act as condensation nuclei that leads to the subsequent vapor condensation and formation of mist droplets; such forms of haze are known as "wet haze".
In meteorological literature, the word haze is generally used to denote visibility-reducing aerosols of the wet type suspended in the atmosphere. Such aerosols commonly arise from complex chemical reactions that occur as sulfur dioxide gases emitted during combustion are converted into small droplets of sulfuric acid when exposed. The reactions are enhanced in the presence of sunlight, high relative humidity, and an absence of air flow (wind). A small component of wet-haze aerosols appear to be derived from compounds released by trees when burning, such as terpenes. For all these reasons, wet haze tends to be primarily a warm-season phenomenon. Large areas of haze covering many thousands of kilometers may be produced under extensive favorable conditions each summer.
Air pollution
Haze often occurs when suspended dust and smoke particles accumulate in relatively dry air. When weather conditions block the dispersal of smoke and other pollutants they concen
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
At equilibrium, the relationship between water content and equilibrium relative humidity of a material can be displayed graphically by a curve, the so-called moisture sorption isotherm.
For each humidity value, a sorption isotherm indicates the corresponding water content value at a given, constant temperature. If the composition or quality of the material changes, then its sorption behaviour also changes. Because of the complexity of the sorption process, the isotherms cannot be determined explicitly by calculation but must be recorded experimentally for each product.
The relationship between water content and water activity (aw) is complex. An increase in aw is usually accompanied by an increase in water content, but in a non-linear fashion. This relationship between water activity and moisture content at a given temperature is called the moisture sorption isotherm. These curves are determined experimentally and constitute the fingerprint of a food system.
BET theory (Brunauer-Emmett-Teller) provides a calculation to describe the physical adsorption of gas molecules on a solid surface. Because of the complexity of the process, these calculations are only moderately successful; however, Stephen Brunauer was able to classify sorption isotherms into five generalized shapes as shown in Figure 2. He found that Type II and Type III isotherms require highly porous materials or desiccants, with first monolayer adsorption, followed by multilayer adsorption and finally leading to capillary condensation, explaining these materials' high moisture capacity at high relative humidity.
Care must be used in extracting data from isotherms, as the representation for each axis may vary in its designation. Brunauer provided the vertical axis as moles of gas adsorbed divided by the moles of the dry material, and on the horizontal axis he used the ratio of partial pressure of the gas just over the sample, divided by its partial pressure at saturation. More modern isotherms showing the
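For reference, the BET isotherm mentioned above is usually written in the following standard form (supplied here, not quoted from the article), where $v$ is the quantity of gas adsorbed, $v_m$ the monolayer capacity, $c$ the BET constant, and $p/p_0$ the relative pressure:

$$\frac{v}{v_m} = \frac{c\,(p/p_0)}{\left(1 - p/p_0\right)\left[1 + (c - 1)\,p/p_0\right]}$$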
Document 3:::
Aerosol mass spectrometry is the application of mass spectrometry to the analysis of the composition of aerosol particles. Aerosol particles are defined as solid and liquid particles suspended in a gas (air), with size range of 3 nm to 100 μm in diameter and are produced from natural and anthropogenic sources, through a variety of different processes that include wind-blown suspension and combustion of fossil fuels and biomass. Analysis of these particles is important owing to their major impacts on global climate change, visibility, regional air pollution and human health. Aerosols are very complex in structure, can contain thousands of different chemical compounds within a single particle, and need to be analysed for both size and chemical composition, in real-time or off-line applications.
Off-line mass spectrometry is performed on collected particles, while on-line mass spectrometry is performed on particles introduced in real time.
History
In literature from ancient Rome there are complaints of foul air, while in 1273 the inhabitants of London were discussing the prohibition of coal burning to improve air quality. However, the measurement and analysis of aerosols only became established in the second half of the 19th century.
In 1847 Henri Becquerel presented the first concept of particles in the air in his condensation nuclei experiment and his ideas were confirmed in later experiments by Coulier in 1875. These ideas were expanded on between 1880 and 1890 by meteorologist John Aitken who demonstrated the fundamental role of dust particles in the formation of clouds and fogs. Aitken's method for aerosol analysis consisted of counting and sizing particles mounted on a slide, using a microscope. The composition of the particles was determined by their refractive index.
In the 1920s aerosol measurements, using Aitken's simple microscopic method, became more common place because the negative health effects of industrial aerosols and dust were starting to be re
Document 4:::
Sea spray are aerosol particles formed from the ocean, mostly by ejection into Earth's atmosphere by bursting bubbles at the air-sea interface. Sea spray contains both organic matter and inorganic salts that form sea salt aerosol (SSA). SSA has the ability to form cloud condensation nuclei (CCN) and remove anthropogenic aerosol pollutants from the atmosphere. Coarse sea spray has also been found to inhibit the development of lightning in storm clouds.
Sea spray is directly (and indirectly, through SSA) responsible for a significant degree of the heat and moisture fluxes between the atmosphere and the ocean, affecting global climate patterns and tropical storm intensity. Sea spray also influences plant growth and species distribution in coastal ecosystems and increases corrosion of building materials in coastal areas.
Generation
Formation
When wind, whitecaps, and breaking waves mix air into the sea surface, the air regroups to form bubbles, floats to the surface, and bursts at the air-sea interface. When they burst, they release up to a thousand particles of sea spray, which range in size from nanometers to micrometers and can be expelled up to 20 cm from the sea surface. Film droplets make up the majority of the smaller particles created by the initial burst, while jet droplets are generated by a collapse of the bubble cavity and are ejected from the sea surface in the form of a vertical jet. In windy conditions, water droplets are mechanically torn off from crests of breaking waves. Sea spray droplets generated via such a mechanism are called spume droplets and are typically larger in size and have less residence time in air. Impingement of plunging waves on the sea surface also generates sea spray in the form of splash droplets. The composition of the sea spray depends primarily on the composition of the water from which it is produced, but broadly speaking is a mixture of salts and organic matter. Several factors determine the production flux of sea spray, e
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What does fog consist of?
A. steam
B. helium
C. carbon monoxide
D. droplets of water
Answer:
|
|
sciq-10847
|
multiple_choice
|
Because of its repeating pattern, what is Mendeleev's table of the elements called?
|
[
"periodic table",
"cycles table",
"phases table",
"serial chart"
] |
A
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
Document 2:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 3:::
MAOL tables is a reference handbook published by MAOL, the Finnish association for teachers of mathematical subjects, and distributed by Otava in both printed and digital forms. It is a book of numeric tables to aid in studying mathematics, chemistry and physics at the gymnasium level. The book includes a list of mathematical notation and symbols, scientific units and constants, a diverse collection of formulae, and several numeric tables. The Finnish Matriculation Examination Board has accepted the book and allowed it to be used in the Finnish matriculation examinations. From 2020 onwards, only the digital version has been allowed, and it is included for free in the digital examination environment, Abitti.
The colour of the cover of the book is changed with each edition of the book.
See also
Mathematical table
Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables
Reference book
Handbook
Rubber book (for chemistry & physics)
BINAS, a Dutch science handbook
Literature
Seppänen, Raimo et al.: MAOL-taulukot. Matemaattisten aineiden opettajien liitto, Otava, 1991.
Finnish non-fiction books
Mathematics textbooks
Otava (publisher) books
Document 4:::
The Mathematics Subject Classification (MSC) is an alphanumerical classification scheme that has collaboratively been produced by staff of, and based on the coverage of, the two major mathematical reviewing databases, Mathematical Reviews and Zentralblatt MATH. The MSC is used by many mathematics journals, which ask authors of research papers and expository articles to list subject codes from the Mathematics Subject Classification in their papers. The current version is MSC2020.
Structure
The MSC is a hierarchical scheme, with three levels of structure. A classification can be two, three or five digits long, depending on how many levels of the classification scheme are used.
The first level is represented by a two-digit number, the second by a letter, and the third by another two-digit number. For example:
53 is the classification for differential geometry
53A is the classification for classical differential geometry
53A45 is the classification for vector and tensor analysis
First level
At the top level, 64 mathematical disciplines are labeled with a unique two-digit number. In addition to the typical areas of mathematical research, there are top-level categories for "History and Biography", "Mathematics Education", and for the overlap with different sciences. Physics (i.e. mathematical physics) is particularly well represented in the classification scheme with a number of different categories including:
Fluid mechanics
Quantum mechanics
Geophysics
Optics and electromagnetic theory
All valid MSC classification codes must have at least the first-level identifier.
Second level
The second-level codes are a single letter from the Latin alphabet. These represent specific areas covered by the first-level discipline. The second-level codes vary from discipline to discipline.
For example, for differential geometry, the top-level code is 53, and the second-level codes are:
A for classical differential geometry
B for local differential geometry
C for global differential geometry
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Because of its repeating pattern, what is Mendeleev's table of the elements called?
A. periodic table
B. cycles table
C. phases table
D. serial chart
Answer:
|
|
sciq-4235
|
multiple_choice
|
Because core electrons are closer to the nucleus, they are not involved in what?
|
[
"fission",
"splitting",
"diffusion",
"bonding"
] |
D
|
Relevant Documents:
Document 0:::
Core electrons are the electrons in an atom that are not valence electrons and do not participate in chemical bonding. The nucleus and the core electrons of an atom form the atomic core. Core electrons are tightly bound to the nucleus. Therefore, unlike valence electrons, core electrons play a secondary role in chemical bonding and reactions by screening the positive charge of the atomic nucleus from the valence electrons.
The number of valence electrons of an element can be determined by the periodic table group of the element (see valence electron):
For main-group elements, the number of valence electrons ranges from 1 to 8 (ns and np orbitals).
For transition metals, the number of valence electrons ranges from 3 to 12 (ns and (n−1)d orbitals).
For lanthanides and actinides, the number of valence electrons ranges from 3 to 16 (ns, (n−2)f and (n−1)d orbitals).
All other non-valence electrons for an atom of that element are considered core electrons.
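Since the core count is simply the atomic number minus the valence count, the bookkeeping is trivial to automate. A minimal illustration in Python follows; the three elements listed are hand-picked examples, not a complete table.

# core electrons = Z - valence electrons, per the group rules above
EXAMPLES = {
    # symbol: (atomic number Z, valence electrons)
    "Na": (11, 1),   # [Ne]3s1  -> neon-like core of 10 electrons
    "P":  (15, 5),   # [Ne]3s2 3p3
    "Cl": (17, 7),   # [Ne]3s2 3p5
}

for symbol, (z, valence) in EXAMPLES.items():
    core = z - valence
    print(f"{symbol}: Z={z}, valence={valence}, core={core}")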
Orbital theory
A more complex explanation of the difference between core and valence electrons can be described with atomic orbital theory.
In atoms with a single electron the energy of an orbital is determined exclusively by the principal quantum number n. The n = 1 orbital has the lowest possible energy in the atom. For large n, the energy increases so much that the electron can easily escape from the atom. In single-electron atoms, all energy levels with the same principal quantum number are degenerate, and have the same energy.
In atoms with more than one electron, the energy of an electron depends not only on the properties of the orbital it resides in, but also on its interactions with the other electrons in other orbitals. This requires consideration of the ℓ quantum number. Higher values of ℓ are associated with higher values of energy; for instance, the 2p state is higher than the 2s state. When ℓ = 2, the increase in energy of the orbital becomes large enough to push the energy of the orbital above the energy
Document 1:::
Secondary electrons are electrons generated as ionization products. They are called 'secondary' because they are generated by other radiation (the primary radiation). This radiation can be in the form of ions, electrons, or photons with sufficiently high energy, i.e. exceeding the ionization potential. Photoelectrons can be considered an example of secondary electrons where the primary radiation are photons; in some discussions photoelectrons with higher energy (>50 eV) are still considered "primary" while the electrons freed by the photoelectrons are "secondary".
Applications
Secondary electrons are also the main means of viewing images in the scanning electron microscope (SEM). The range of secondary electrons depends on the energy. Plotting the inelastic mean free path as a function of energy often shows characteristics of the "universal curve" familiar to electron spectroscopists and surface analysts. This distance is on the order of a few nanometers in metals and tens of nanometers in insulators. This small distance allows such fine resolution to be achieved in the SEM.
For SiO2, for a primary electron energy of 100 eV, the secondary electron range is up to 20 nm from the point of incidence.
See also
Delta ray
Everhart-Thornley detector
Document 2:::
Electron scattering occurs when electrons are deflected from their original trajectory. This is due to electrostatic forces arising from interaction with matter or, if an external magnetic field is present, deflection by the Lorentz force. This scattering typically happens with solids such as metals, semiconductors and insulators, and is a limiting factor in integrated circuits and transistors.
The application of electron scattering is such that it can be used as a high resolution microscope for hadronic systems, that allows the measurement of the distribution of charges for nucleons and nuclear structure. The scattering of electrons has allowed us to understand that protons and neutrons are made up of the smaller elementary subatomic particles called quarks.
Electrons may be scattered through a solid in several ways:
Not at all: no electron scattering occurs at all and the beam passes straight through.
Single scattering: when an electron is scattered just once.
Plural scattering: when electron(s) scatter several times.
Multiple scattering: when electron(s) scatter many times over.
The likelihood of an electron scattering and the degree of the scattering is a probability function of the specimen thickness to the mean free path.
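A common quantitative idealization treats the number of scattering events as Poisson-distributed with mean t/λ, the ratio of specimen thickness to mean free path. The sketch below makes that assumption explicit; it is a standard approximation, not a claim from this source.

import math

def scattering_probabilities(thickness_nm, mean_free_path_nm):
    """P(none), P(single) and P(plural/multiple) scattering, assuming the
    number of events is Poisson with mean m = t / lambda."""
    m = thickness_nm / mean_free_path_nm
    p_none = math.exp(-m)            # beam passes straight through
    p_single = m * math.exp(-m)      # exactly one scattering event
    return p_none, p_single, 1.0 - p_none - p_single

print(scattering_probabilities(10, 100))    # thin specimen: mostly unscattered
print(scattering_probabilities(300, 100))   # thick specimen: mostly plural/multiple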
History
The concept of the electron was first theorised in the period 1838-1851 by the natural philosopher Richard Laming, who speculated on the existence of sub-atomic, unit-charged particles; he also pictured the atom as an 'electrosphere' of concentric shells of electrical particles surrounding a material core.
It is generally accepted that J. J. Thomson first discovered the electron in 1897, although other notable members in the development in charged particle theory are George Johnstone Stoney (who coined the term "electron"), Emil Wiechert (who was first to publish his independent discovery of the electron), Walter Kaufmann, Pieter Zeeman and Hendrik Lorentz.
Compton scattering was first observed at
Document 3:::
The objective of the Thomson problem is to determine the minimum electrostatic potential energy configuration of electrons constrained to the surface of a unit sphere that repel each other with a force given by Coulomb's law. The physicist J. J. Thomson posed the problem in 1904 after proposing an atomic model, later called the plum pudding model, based on his knowledge of the existence of negatively charged electrons within neutrally-charged atoms.
Related problems include the study of the geometry of the minimum energy configuration and the study of the large behavior of the minimum energy.
Mathematical statement
The electrostatic interaction energy occurring between each pair of electrons of equal charges ($q_i = q_j = -e$, with $e$ the elementary charge of an electron) is given by Coulomb's law,

$$U_{ij} = k_e \frac{e^2}{r_{ij}},$$

where $k_e = 1/(4\pi\varepsilon_0)$ is the Coulomb constant, $\varepsilon_0$ is the electric constant, and $r_{ij} = |\mathbf{r}_i - \mathbf{r}_j|$ is the distance between each pair of electrons located at points on the sphere defined by vectors $\mathbf{r}_i$ and $\mathbf{r}_j$, respectively.
Simplified units of $e = 1$ and $k_e = 1$ (the Coulomb constant) are used without loss of generality. Then $U_{ij} = 1/r_{ij}$.
The total electrostatic potential energy of each $N$-electron configuration may then be expressed as the sum of all pair-wise interaction energies,

$$U(N) = \sum_{i<j} \frac{1}{r_{ij}}.$$

The global minimization of $U(N)$ over all possible configurations of $N$ distinct points is typically found by numerical minimization algorithms.
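As a concrete illustration, here is a minimal numerical sketch in Python using a generic local optimizer; serious treatments of the Thomson problem use many random restarts and specialized algorithms, and scipy is assumed to be available.

import numpy as np
from scipy.optimize import minimize

def total_energy(angles, n):
    # sum_{i<j} 1/r_ij for n unit charges on the unit sphere, parameterized
    # by (theta, phi) pairs so the spherical constraint is built in.
    theta, phi = angles[:n], angles[n:]
    xyz = np.stack([np.sin(theta) * np.cos(phi),
                    np.sin(theta) * np.sin(phi),
                    np.cos(theta)], axis=1)
    diff = xyz[:, None, :] - xyz[None, :, :]
    r = np.sqrt((diff ** 2).sum(axis=-1))
    i, j = np.triu_indices(n, k=1)      # each pair counted once
    return (1.0 / r[i, j]).sum()

n = 4
rng = np.random.default_rng(0)
x0 = rng.uniform(0.1, 3.0, size=2 * n)  # random starting configuration
result = minimize(total_energy, x0, args=(n,), method="BFGS")
print(round(result.fun, 6))  # ~3.674235 for n = 4 (the regular tetrahedron)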
Thomson's problem is related to the 7th of the eighteen unsolved mathematics problems proposed by the mathematician Steve Smale — "Distribution of points on the 2-sphere".
The main difference is that in Smale's problem the function to minimise is not the electrostatic potential but a logarithmic potential given by $\sum_{i<j} \log\frac{1}{r_{ij}}$. A second difference is that Smale's question concerns the asymptotic behaviour of the total potential as the number $N$ of points goes to infinity, not concrete values of $N$.
Example
The solution of the Thomson problem for two electrons is obtained when both electrons are as far apart as possible on opposite sides of the origin, so that $r_{12} = 2$ on the unit sphere, giving a minimal energy of $U(2) = 1/2$ in the simplified units.
Document 4:::
Understanding the structure of the atomic nucleus is one of the central challenges in nuclear physics.
Models
The liquid drop model
The liquid drop model is one of the first models of nuclear structure, proposed by Carl Friedrich von Weizsäcker in 1935. It describes the nucleus as a semiclassical fluid made up of neutrons and protons, with an internal repulsive electrostatic force proportional to the number of protons. The quantum mechanical nature of these particles appears via the Pauli exclusion principle, which states that no two nucleons of the same kind can be at the same state. Thus the fluid is actually what is known as a Fermi liquid.
In this model, the binding energy of a nucleus with $Z$ protons and $N$ neutrons is given by

$$E_B = a_V A - a_S A^{2/3} - a_C \frac{Z(Z-1)}{A^{1/3}} - a_A \frac{(N-Z)^2}{A} + \delta(A,Z),$$

where $A = N + Z$ is the total number of nucleons (mass number). The terms proportional to $A$ and $A^{2/3}$ represent the volume and surface energy of the liquid drop, the term proportional to $Z(Z-1)/A^{1/3}$ represents the electrostatic energy, the term proportional to $(N-Z)^2/A$ represents the Pauli exclusion principle and the last term $\delta(A,Z)$ is the pairing term, which lowers the energy for even numbers of protons or neutrons.
The coefficients and the strength of the pairing term may be estimated theoretically, or fit to data.
This simple model reproduces the main features of the binding energy of nuclei.
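For concreteness, the formula is easy to evaluate with one commonly quoted set of fitted coefficients; values are in MeV, and different fits give slightly different numbers, so treat these as illustrative.

import math

A_V, A_S, A_C, A_A, A_P = 15.8, 18.3, 0.714, 23.2, 12.0  # MeV, one common fit

def binding_energy(z, n):
    """Liquid-drop binding energy (MeV) for Z protons and N neutrons."""
    a = z + n                            # mass number
    if z % 2 == 0 and n % 2 == 0:
        delta = +A_P / math.sqrt(a)      # even-even: pairing adds binding
    elif z % 2 == 1 and n % 2 == 1:
        delta = -A_P / math.sqrt(a)      # odd-odd: pairing reduces binding
    else:
        delta = 0.0                      # odd mass number
    return (A_V * a - A_S * a ** (2 / 3)
            - A_C * z * (z - 1) / a ** (1 / 3)
            - A_A * (n - z) ** 2 / a + delta)

# Iron-56 (Z=26, N=30): roughly 8.8 MeV per nucleon, near the observed maximum.
print(binding_energy(26, 30) / 56)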
The assumption of the nucleus as a drop of Fermi liquid is still widely used in the form of the Finite Range Droplet Model (FRDM), because it reproduces nuclear binding energies across the whole chart of nuclides with the accuracy needed for predictions of unknown nuclei.
The shell model
The expression "shell model" is ambiguous in that it refers to two different items. It was previously used to describe the existence of nucleon shells according to an approach closer to what is now called mean field theory.
Nowadays, it refers to a formalism analogous to the configuration interaction formalism used in quantum chemistry.
Introduction to the shell concept
Systematic measurements of th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Because core electrons are closer to the nucleus, they are not involved in what?
A. fission
B. splitting
C. diffusion
D. bonding
Answer:
|
|
sciq-8953
|
multiple_choice
|
Spontaneous reactions release what type of energy, meaning it is available to do work?
|
[
"radioactive energy",
"kinetic energy",
"free energy",
"potential energy"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
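For reference, the answer "decreases" follows from the standard quasi-static adiabatic relation for an ideal gas, added here as a worked illustration rather than taken from this source; note that a free expansion into vacuum is also adiabatic yet leaves the temperature of an ideal gas unchanged, which is part of what makes the question conceptually interesting:

$$TV^{\gamma - 1} = \text{const}, \qquad \gamma = C_p/C_V > 1,$$

so as the volume increases in a quasi-static expansion, the temperature must decrease.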
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 2:::
Advanced Placement (AP) Physics C: Mechanics (also known as AP Mechanics) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a one-semester calculus-based university course in mechanics. The content of Physics C: Mechanics overlaps with that of AP Physics 1, but Physics 1 is algebra-based, while Physics C is calculus-based. Physics C: Mechanics may be combined with its electricity and magnetism counterpart to form a year-long course that prepares for both exams.
Course content
Intended to be equivalent to an introductory college course in mechanics for physics or engineering majors, the course modules are:
Kinematics
Newton's laws of motion
Work, energy and power
Systems of particles and linear momentum
Circular motion and rotation
Oscillations and gravitation.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a Calculus I class.
This course is often compared to AP Physics 1: Algebra Based for its similar course material involving kinematics, work, motion, forces, rotation, and oscillations. However, AP Physics 1: Algebra Based lacks concepts found in Calculus I, like derivatives or integrals.
This course may be combined with AP Physics C: Electricity and Magnetism to make a unified Physics C course that prepares for both exams.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Mechanics is separate from the AP examination for AP Physics C: Electricity and Magnetism. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday aftern
Document 3:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. The idea is that primary school children develop an early interest in these subjects, that secondary school pupils then choose science A levels, and that this leads on to science careers. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 4:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just as in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Spontaneous reactions release what type of energy, meaning it is available to do work?
A. radioactive energy
B. kinetic energy
C. free energy
D. potential energy
Answer:
|
|
sciq-3279
|
multiple_choice
|
In some plants, the sporophyte is diploid, while the gametophyte is what?
|
[
"meiosis",
"humanoid",
"gametes",
"haploid"
] |
D
|
Relevant Documents:
Document 0:::
A sporophyte () is the diploid multicellular stage in the life cycle of a plant or alga which produces asexual spores. This stage alternates with a multicellular haploid gametophyte phase.
Life cycle
The sporophyte develops from the zygote produced when a haploid egg cell is fertilized by a haploid sperm and each sporophyte cell therefore has a double set of chromosomes, one set from each parent. All land plants, and most multicellular algae, have life cycles in which a multicellular diploid sporophyte phase alternates with a multicellular haploid gametophyte phase. In the seed plants, the largest groups of which are the gymnosperms and flowering plants (angiosperms), the sporophyte phase is more prominent than the gametophyte, and is the familiar green plant with its roots, stem, leaves and cones or flowers. In flowering plants the gametophytes are very reduced in size, and are represented by the germinated pollen and the embryo sac.
The sporophyte produces spores (hence the name) by meiosis, a process also known as "reduction division" that reduces the number of chromosomes in each spore mother cell by half. The resulting meiospores develop into a gametophyte. Both the spores and the resulting gametophyte are haploid, meaning they only have one set of chromosomes.
The mature gametophyte produces male or female gametes (or both) by mitosis. The fusion of male and female gametes produces a diploid zygote which develops into a new sporophyte. This cycle is known as alternation of generations or alternation of phases.
Examples
Bryophytes (mosses, liverworts and hornworts) have a dominant gametophyte phase on which the adult sporophyte is dependent for nutrition. The embryo sporophyte develops by cell division of the zygote within the female sex organ or archegonium, and in its early development is therefore nurtured by the gametophyte.
Because this embryo-nurturing feature of the life cycle is common to all land plants they are known collectively as the embry
Document 1:::
Gametophores are prominent structures in seedless plants on which the reproductive organs are borne. The word gametophore is a compound of gamete and ‘-phore’ (Greek φορά, "to be carried"). In mosses, liverworts and ferns (Archegoniata), the gametophores support gametangia (sex organs, female archegonia and male antheridia). If both archegonia and antheridia occur on the same plant, it is called monoecious. If there are separate female and male plants, they are called dioecious.
In Bryopsida the leafy moss plant (q. v. "Thallus") is the haploid gametophyte. It grows from its juvenile form, the protonema, under the influence of phytohormones (mainly cytokinins). Whereas the filamentous protonema grows by apical cell division, the gametophyte grows by division of three-faced apical cells.
Document 2:::
Microgametogenesis is the process in plant reproduction where a microgametophyte develops in a pollen grain to the three-celled stage of its development. In flowering plants it occurs with a microspore mother cell inside the anther of the plant.
When the microgametophyte is first formed inside the pollen grain four sets of fertile cells called sporogenous cells are apparent. These cells are surrounded by a wall of sterile cells called the tapetum, which supplies food to the cell and eventually becomes the cell wall for the pollen grain. These sets of sporogenous cells eventually develop into diploid microspore mother cells. These microspore mother cells, also called microsporocytes, then undergo meiosis and become four microspore haploid cells. These new microspore cells then undergo mitosis and form a tube cell and a generative cell. The generative cell then undergoes mitosis one more time to form two male gametes, also called sperm.
See also
Gametogenesis
Document 3:::
Alternation of generations (also known as metagenesis or heterogenesis) is the predominant type of life cycle in plants and algae. In plants both phases are multicellular: the haploid sexual phase – the gametophyte – alternates with a diploid asexual phase – the sporophyte.
A mature sporophyte produces haploid spores by meiosis, a process which reduces the number of chromosomes to half, from two sets to one. The resulting haploid spores germinate and grow into multicellular haploid gametophytes. At maturity, a gametophyte produces gametes by mitosis, the normal process of cell division in eukaryotes, which maintains the original number of chromosomes. Two haploid gametes (originating from different organisms of the same species or from the same organism) fuse to produce a diploid zygote, which divides repeatedly by mitosis, developing into a multicellular diploid sporophyte. This cycle, from gametophyte to sporophyte (or equally from sporophyte to gametophyte), is the way in which all land plants and most algae undergo sexual reproduction.
The relationship between the sporophyte and gametophyte phases varies among different groups of plants. In the majority of algae, the sporophyte and gametophyte are separate independent organisms, which may or may not have a similar appearance. In liverworts, mosses and hornworts, the sporophyte is less well developed than the gametophyte and is largely dependent on it. Although moss and hornwort sporophytes can photosynthesise, they require additional photosynthate from the gametophyte to sustain growth and spore development and depend on it for supply of water, mineral nutrients and nitrogen. By contrast, in all modern vascular plants the gametophyte is less well developed than the sporophyte, although their Devonian ancestors had gametophytes and sporophytes of approximately equivalent complexity. In ferns the gametophyte is a small flattened autotrophic prothallus on which the young sporophyte is briefly dependent for its n
Document 4:::
A gametangium (: gametangia) is an organ or cell in which gametes are produced that is found in many multicellular protists, algae, fungi, and the gametophytes of plants. In contrast to gametogenesis in animals, a gametangium is a haploid structure and formation of gametes does not involve meiosis.
Types of gametangia
Depending on the type of gamete produced in a gametangium, several types can be distinguished.
Female
Female gametangia are most commonly called archegonia. They produce egg cells and are the sites for fertilization. Archegonia are common in algae and primitive plants as well as gymnosperms. In flowering plants, they are replaced by the embryo sac inside the ovule.
Male
The male gametangia are most commonly called antheridia. They produce sperm cells that they release for fertilization. Antheridia producing non-motile sperm (spermatia) are called spermatangia. Some antheridia do not release their sperm. For example, the oomycete antheridium is a syncytium with many sperm nuclei and fertilization occurs via fertilization tubes growing from the antheridium and making contact with the egg cells. Antheridia are common in the gametophytes in "lower" plants such as bryophytes, ferns, cycads and ginkgo. In "higher" plants such as conifers and flowering plants, they are replaced by pollen grains.
Isogamous
In isogamy, the gametes look alike and cannot be classified into "male" or "female." For example, in zygomycetes, two gametangia (single multinucleate cells at the end of hyphae) form good contact with each other and fuse into a zygosporangium. Inside the zygosporangium, the nuclei from each of the original two gametangia pair up.
See also
Zoosporangium, a gametangium that produces motile isogamous gametes, called zoospores
Reproduction
Reproductive system
Germ cells
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In some plants, the sporophyte is diploid, while the gametophyte is what?
A. meiosis
B. humanoid
C. gametes
D. haploid
Answer:
|
|
sciq-588
|
multiple_choice
|
What kind of reaction is needed to prepare amides?
|
[
"amino acid reaction",
"lipophilic acid reaction",
"oxidize acid reaction",
"carboxylic acid reaction"
] |
D
|
Relevant Documents:
Document 0:::
This is a list of articles that describe particular biomolecules or types of biomolecules.
A
For substances with an A- or α- prefix such as
α-amylase, please see the parent page (in this case Amylase).
A23187 (Calcimycin, Calcium Ionophore)
Abamectine
Abietic acid
Acetic acid
Acetylcholine
Actin
Actinomycin D
Adenine
Adenosine
Adenosine diphosphate (ADP)
Adenosine monophosphate (AMP)
Adenosine triphosphate (ATP)
Adenylate cyclase
Adiponectin
Adonitol
Adrenaline, epinephrine
Adrenocorticotropic hormone (ACTH)
Aequorin
Aflatoxin
Agar
Alamethicin
Alanine
Albumins
Aldosterone
Aleurone
Alpha-amanitin
Alpha-MSH (Melaninocyte stimulating hormone)
Allantoin
Allethrin
α-Amanatin, see Alpha-amanitin
Amino acid
Amylase (also see α-amylase)
Anabolic steroid
Anandamide (ANA)
Androgen
Anethole
Angiotensinogen
Anisomycin
Antidiuretic hormone (ADH)
Anti-Müllerian hormone (AMH)
Arabinose
Arginine
Argonaute
Ascomycin
Ascorbic acid (vitamin C)
Asparagine
Aspartic acid
Asymmetric dimethylarginine
ATP synthase
Atrial-natriuretic peptide (ANP)
Auxin
Avidin
Azadirachtin A – C35H44O16
B
Bacteriocin
Beauvericin
beta-Hydroxy beta-methylbutyric acid
beta-Hydroxybutyric acid
Bicuculline
Bilirubin
Biopolymer
Biotin (Vitamin H)
Brefeldin A
Brassinolide
Brucine
Butyric acid
C
Document 1:::
In molecular biology, biosynthesis is a multi-step, enzyme-catalyzed process where substrates are converted into more complex products in living organisms. In biosynthesis, simple compounds are modified, converted into other compounds, or joined to form macromolecules. This process often consists of metabolic pathways. Some of these biosynthetic pathways are located within a single cellular organelle, while others involve enzymes that are located within multiple cellular organelles. Examples of these biosynthetic pathways include the production of lipid membrane components and nucleotides. Biosynthesis is usually synonymous with anabolism.
The prerequisite elements for biosynthesis include: precursor compounds, chemical energy (e.g. ATP), and catalytic enzymes which may need coenzymes (e.g. NADH, NADPH). These elements create monomers, the building blocks for macromolecules. Some important biological macromolecules include: proteins, which are composed of amino acid monomers joined via peptide bonds, and DNA molecules, which are composed of nucleotides joined via phosphodiester bonds.
Properties of chemical reactions
Biosynthesis occurs due to a series of chemical reactions. For these reactions to take place, the following elements are necessary:
Precursor compounds: these compounds are the starting molecules or substrates in a reaction. These may also be viewed as the reactants in a given chemical process.
Chemical energy: chemical energy can be found in the form of high energy molecules. These molecules are required for energetically unfavorable reactions. Furthermore, the hydrolysis of these compounds drives a reaction forward. High energy molecules, such as ATP, have three phosphates. Often, the terminal phosphate is split off during hydrolysis and transferred to another molecule.
Catalysts: these may be for example metal ions or coenzymes and they catalyze a reaction by increasing the rate of the reaction and lowering the activation energy.
In the sim
Document 2:::
Thioesters can be conveniently prepared from alcohols by the Mitsunobu reaction, using thioacetic acid.
They also arise via carbonylation of alkynes and alkenes in the presence of thiols.
Reactions
Thioesters hydrolyze to thiols and the carboxylic acid:
RC(O)SR' + H2O → RCO2H + RSH
The carbonyl center in thioesters is more reactive toward amine nucleophiles, giving amides:
RC(O)SR' + H2NR'' → RC(O)NHR'' + R'SH
In a related reaction, but using a soft-metal to capture the thiolate, thioesters are converted into esters.
Document 3:::
Reactions
The reactions of the MEP pathway are as follows, taken primarily from Eisenreich and co-workers, excep
Document 4:::
Formylation refers to any chemical processes in which a compound is functionalized with a formyl group (-CH=O). In organic chemistry, the term is most commonly used with regards to aromatic compounds (for example the conversion of benzene to benzaldehyde in the Gattermann–Koch reaction). In biochemistry the reaction is catalysed by enzymes such as formyltransferases.
Formylation generally involves the use of formylation agents, reagents that give rise to the CHO group. Among the many formylation reagents, particularly important are formic acid and carbon monoxide. A formylation reaction in organic chemistry refers to organic reactions in which an organic compound is functionalized with a formyl group (-CH=O). The reaction is a route to aldehydes (C-CH=O), formamides (N-CH=O), and formate esters (O-CH=O).
Formylation agents
A reagent that delivers the formyl group is called a formylating agent.
Formic acid
Dimethylformamide and phosphorus oxychloride in the Vilsmeier-Haack reaction.
Hexamethylenetetramine in the Duff reaction and the Sommelet reaction
Carbon monoxide and hydrochloric acid in the Gattermann-Koch reaction
Cyanides in the Gattermann reaction. This method synthesizes aromatic aldehydes using hydrogen chloride and hydrogen cyanide (or another metallic cyanide such as zinc cyanide) in the presence of Lewis acid catalysts.
Chloroform in the Reimer-Tiemann reaction
Dichloromethyl methyl ether in Rieche formylation
A particularly important formylation process is hydroformylation, which converts alkenes to the homologated aldehyde.
Aromatic formylation
Formylation reactions are a form of electrophilic aromatic substitution and therefore work best when the aromatic starting materials are electron-rich. Phenols are very commonly encountered as they can be readily deprotonated to form phenoxides which are excellent nucleophiles, other electron rich substrates such as mesitylene, pyrrole, or fused aromatic rings can also be expected to react. Benzene w
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What kind of reaction is needed to prepare amides?
A. amino acid reaction
B. lipophilic acid reaction
C. oxidize acid reaction
D. carboxylic acid reaction
Answer:
|
|
sciq-4079
|
multiple_choice
|
Cytokinins promote cell division and prevent what?
|
[
"senescence",
"deficiency",
"mutations",
"apoptosis"
] |
A
|
Relevant Documents:
Document 0:::
Cell proliferation is the process by which a cell grows and divides to produce two daughter cells. Cell proliferation leads to an exponential increase in cell number and is therefore a rapid mechanism of tissue growth. Cell proliferation requires both cell growth and cell division to occur at the same time, such that the average size of cells remains constant in the population. Cell division can occur without cell growth, producing many progressively smaller cells (as in cleavage of the zygote), while cell growth can occur without cell division to produce a single larger cell (as in growth of neurons). Thus, cell proliferation is not synonymous with either cell growth or cell division, despite these terms sometimes being used interchangeably.
Stem cells undergo cell proliferation to produce proliferating "transit amplifying" daughter cells that later differentiate to construct tissues during normal development and tissue growth, during tissue regeneration after damage, or in cancer.
The total number of cells in a population is determined by the rate of cell proliferation minus the rate of cell death.
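Written as a rate equation (a standard exponential-growth idealization, added for illustration rather than taken from this source):

$$\frac{dN}{dt} = \left(k_{\text{prolif}} - k_{\text{death}}\right) N \quad\Longrightarrow\quad N(t) = N_0\, e^{(k_{\text{prolif}} - k_{\text{death}})\,t}.$$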
Cell size depends on both cell growth and cell division, with a disproportionate increase in the rate of cell growth leading to production of larger cells and a disproportionate increase in the rate of cell division leading to production of many smaller cells. Cell proliferation typically involves balanced cell growth and cell division rates that maintain a roughly constant cell size in the exponentially proliferating population of cells. Cell proliferation occurs by combining cell growth with regular "G1-S-M-G2" cell cycles to produce many diploid cell progeny.
In single-celled organisms, cell proliferation is largely responsive to the availability of nutrients in the environment (or laboratory growth medium).
In multicellular organisms, the process of cell proliferation is tightly controlled by gene regulatory networks encoded in the genome and executed mainly
Document 1:::
A progenitor cell is a biological cell that can differentiate into a specific cell type. Stem cells and progenitor cells have this ability in common. However, stem cells are less specified than progenitor cells. Progenitor cells can only differentiate into their "target" cell type. The most important difference between stem cells and progenitor cells is that stem cells can replicate indefinitely, whereas progenitor cells can divide only a limited number of times. Controversy about the exact definition remains and the concept is still evolving.
The terms "progenitor cell" and "stem cell" are sometimes equated.
Properties
Most progenitors are identified as oligopotent. From this point of view, they can be compared to adult stem cells, but progenitors are said to be at a further stage of cell differentiation. They are "midway" between stem cells and fully differentiated cells. The kind of potency they have depends on the type of their "parent" stem cell and also on their niche. Some research found that progenitor cells were mobile and that these progenitor cells could move through the body and migrate towards the tissue where they are needed. Many properties are shared by adult stem cells and progenitor cells.
Research
Progenitor cells have become a hub for research on a few different fronts. Current research on progenitor cells focuses on two different applications: regenerative medicine and cancer biology. Research on regenerative medicine has focused on progenitor cells, and stem cells, because their cellular senescence contributes largely to the process of aging. Research on cancer biology focuses on the impact of progenitor cells on cancer responses, and the way that these cells tie into the immune response.
The natural aging of cells, called their cellular senescence, is one of the main contributors to aging on an organismal level. There are a few different ideas to the cause behind why aging happens on a cellular level. Telomere length has been shown to positive
Document 2:::
Adult stem cells are undifferentiated cells, found throughout the body after development, that multiply by cell division to replenish dying cells and regenerate damaged tissues. Also known as somatic stem cells (from Greek σωματικóς, meaning of the body), they can be found in juvenile, adult animals, and humans, unlike embryonic stem cells.
Scientific interest in adult stem cells is centered around two main characteristics. The first of which is their ability to divide or self-renew indefinitely, and the second their ability to generate all the cell types of the organ from which they originate, potentially regenerating the entire organ from a few cells. Unlike embryonic stem cells, the use of human adult stem cells in research and therapy is not considered to be controversial, as they are derived from adult tissue samples rather than human embryos designated for scientific research. The main functions of adult stem cells are to replace cells that are at risk of possibly dying as a result of disease or injury and to maintain a state of homeostasis within the cell. There are three main methods to determine if the adult stem cell is capable of becoming a specialized cell. The adult stem cell can be labeled in vivo and tracked, it can be isolated and then transplanted back into the organism, and it can be isolated in vivo and manipulated with growth hormones. They have mainly been studied in humans and model organisms such as mice and rats.
Structure
Defining properties
A stem cell possesses two properties:
Self-renewal is the ability to go through numerous cycles of cell division while still maintaining its undifferentiated state. Stem cells can replicate several times and can result in the formation of two stem cells, one stem cell more differentiated than the other, or two differentiated cells.
Multipotency or multidifferentiative potential is the ability to generate progeny of several distinct cell types, (for example glial cells and neurons) as opposed to u
Document 3:::
Cellular differentiation is the process in which a stem cell changes from one type to a differentiated one. Usually, the cell changes to a more specialized type. Differentiation happens multiple times during the development of a multicellular organism as it changes from a simple zygote to a complex system of tissues and cell types. Differentiation continues in adulthood as adult stem cells divide and create fully differentiated daughter cells during tissue repair and during normal cell turnover. Some differentiation occurs in response to antigen exposure. Differentiation dramatically changes a cell's size, shape, membrane potential, metabolic activity, and responsiveness to signals. These changes are largely due to highly controlled modifications in gene expression and are the study of epigenetics. With a few exceptions, cellular differentiation almost never involves a change in the DNA sequence itself. However, metabolic composition does get altered quite dramatically where stem cells are characterized by abundant metabolites with highly unsaturated structures whose levels decrease upon differentiation. Thus, different cells can have very different physical characteristics despite having the same genome.
A specialized type of differentiation, known as terminal differentiation, is of importance in some tissues, including vertebrate nervous system, striated muscle, epidermis and gut. During terminal differentiation, a precursor cell formerly capable of cell division permanently leaves the cell cycle, dismantles the cell cycle machinery and often expresses a range of genes characteristic of the cell's final function (e.g. myosin and actin for a muscle cell). Differentiation may continue to occur after terminal differentiation if the capacity and functions of the cell undergo further changes.
Among dividing cells, there are multiple levels of cell potency, which is the cell's ability to differentiate into other cell types. A greater potency indicates a larger n
Document 4:::
Cyclin A is a member of the cyclin family, a group of proteins that function in regulating progression through the cell cycle. The stages that a cell passes through, culminating in its division and replication, are collectively known as the cell cycle. Since the successful division and replication of a cell is essential for its survival, the cell cycle is tightly regulated by several components to ensure efficient and error-free progression. One such regulatory component is cyclin A, which plays a role in the regulation of two different cell cycle stages.
Types
Cyclin A was first identified in 1983 in sea urchin embryos. Since its initial discovery, homologues of cyclin A have been identified in numerous eukaryotes including Drosophila, Xenopus, mice, and in humans but has not been found in lower eukaryotes like yeast. The protein exists in both an embryonic form and somatic form. A single cyclin A gene has been identified in Drosophila while Xenopus, mice and humans contain two distinct types of cyclin A: A1, the embryonic-specific form, and A2, the somatic form. Cyclin A1 is prevalently expressed during meiosis and early on in embryogenesis. Cyclin A2 is expressed in dividing somatic cells.
Role in cell cycle progression
Cyclin A, along with the other members of the cyclin family, regulates cell cycle progression through physically interacting with cyclin-dependent kinases (CDKs), which thereby activates the enzymatic activity of its CDK partner.
CDK partner association
The interaction between the cyclin box, a region conserved across cyclins, and a region of the CDK, called the PSTAIRE, confers the foundation of the cyclin-CDK complex. Cyclin A is the only cyclin that regulates multiple steps of the cell cycle. Cyclin A can regulate multiple cell cycle steps because it associates with, and thereby activates, two distinct CDKs – CDK2 and CDK1. Depending on which CDK partner cyclin A binds, the cell will continue through the S phase o
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Cytokinins promote cell division and prevent what?
A. senescence
B. deficiency
C. mutations
D. apoptosis
Answer:
|
|
sciq-5386
|
multiple_choice
|
Which ancient fish has just two living species and is at risk of extinction?
|
[
"coelacanths",
"latimeria",
"squids",
"hominids"
] |
A
|
Relevant Documents:
Document 0:::
Edward Brinton (January 12, 1924 – January 13, 2010) was a professor of oceanography and research biologist. His particular area of expertise was Euphausiids or krill, small shrimp-like creatures found in all the oceans of the world.
Early life
Brinton was born on January 12, 1924, in Richmond, Indiana to a Quaker couple, Howard Brinton and Anna Shipley Cox Brinton. Much of his childhood was spent on the grounds of Mills College where his mother was Dean of Faculty and his father was a professor. The family later moved to the Pendle Hill Quaker Center for Study and Contemplation, in Pennsylvania where his father and mother became directors.
Academic career
Brinton attended High School at Westtown School in Chester County, Pennsylvania. He studied at Haverford College and graduated in 1949 with a bachelor's degree in biology. He enrolled at Scripps Institution of Oceanography as a graduate student in 1950 and was awarded a Ph.D. in 1957. He continued on as a research biologist in the Marine Life Research Group, part of the CalCOFI program. He soon turned his dissertation into a major publication, The Distribution of Pacific Euphausiids. In this large monograph, he laid out the major biogeographic provinces of the Pacific (and part of the Atlantic), large-scale patterns of pelagic diversity and one of the most rational hypotheses for the mechanism of sympatric, oceanic speciation. In all of these studies the role of physical oceanography and circulation played a prominent part. His work has since been validated by others and continues, to this day, to form the basis for our attempts to understand large-scale pelagic ecology and the role of physics of the movement of water in the regulation of pelagic ecosystems. In addition to these studies he has led in the studies of how climatic variations have led to the large variations in the California Current, and its populations and communities. He has described several new species and, in collaboration with Margaret K
Document 1:::
Future Evolution is a book written by paleontologist Peter Ward and illustrated by Alexis Rockman. He addresses his own opinion of future evolution and compares it with Dougal Dixon's After Man: A Zoology of the Future and H. G. Wells's The Time Machine.
According to Ward, humanity may exist for a long time. Nevertheless, we are impacting our planet. He splits his book in different chronologies, starting with the near future (the next 1,000 years). Humanity would be struggling to support a massive population of 11 billion. Global warming raises sea levels. The ozone layer weakens. Most of the available land is devoted to agriculture due to the demand for food. Despite all this, the oceanic wildlife remains untethered by most of these impacts, specifically the commercial farmed fish. This is, according to Ward, an era of extinction that would last about 10 million years (note that many human-caused extinctions have already occurred). After that, Earth gets stranger.
Ward labels the species that have the potential to survive in a human-infested world. These include dandelions, raccoons, owls, pigs, cattle, rats, snakes, and crows to name but a few. In the human-infested ecosystem, those preadapted to live amongst man survived and prospered. Ward describes garbage dumps 10 million years in the future infested with multiple species of rats, a snake with a sticky frog-like tongue to snap up rodents, and pigs with snouts specialized for rooting through garbage. The story's time traveller who views this new refuse-covered habitat is gruesomely attacked by ravenous flesh-eating crows.
Ward then questions the potential for humanity to evolve into a new species. According to him, this is incredibly unlikely. For this to happen, a human population must isolate itself and interbreed until it becomes a new species. Then he asks whether humanity will survive or extinguish itself through climate change, nuclear war, disease, or the looming threat of nanotechnology as a terrorist weapon
Document 2:::
Fisheries science is the academic discipline of managing and understanding fisheries. It is a multidisciplinary science, which draws on the disciplines of limnology, oceanography, freshwater biology, marine biology, meteorology, conservation, ecology, population dynamics, economics, statistics, decision analysis, management, and many others in an attempt to provide an integrated picture of fisheries. In some cases new disciplines have emerged, as in the case of bioeconomics and fisheries law. Because fisheries science is such an all-encompassing field, fisheries scientists often use methods from a broad array of academic disciplines. Over the most recent several decades, there have been declines in fish stocks (populations) in many regions along with increasing concern about the impact of intensive fishing on marine and freshwater biodiversity.
Fisheries science is typically taught in a university setting, and can be the focus of an undergraduate, master's or Ph.D. program. Some universities offer fully integrated programs in fisheries science. Graduates of university fisheries programs typically find employment as scientists, fisheries managers of both recreational and commercial fisheries, researchers, aquaculturists, educators, environmental consultants and planners, conservation officers, and many others.
Fisheries research
Because fisheries take place in a diverse set of aquatic environments (i.e., high seas, coastal areas, large and small rivers, and lakes of all sizes), research requires different sampling equipment, tools, and techniques. For example, studying trout populations inhabiting mountain lakes requires a very different set of sampling tools than, say, studying salmon in the high seas. Ocean fisheries research vessels (FRVs) often require platforms which are capable of towing different types of fishing nets, collecting plankton or water samples from a range of depths, and carrying acoustic fish-finding equipment. Fisheries research vessels a
Document 3:::
The Cichlid Room Companion (CRC) is a membership-based website dedicated to the fishes of the cichlid family (Cichlidae). The site was launched in May 1996 and offers arguably the most comprehensive authoritative catalogue of cichlids on the web, which is illustrated with more than 25,000 photographs of fishes and 2,000 of habitats, as well as over 300 videos of cichlids and their habitats. It also “offers access to ample information about 253 genera and 2371 species”, a discussion forum, and many articles about taxonomy, natural history, fish-keeping, field accounts, conservation, and other cichlid-related topics, mostly written by citizen scientists and people who specialize in cichlids. The species summaries, provided in the form of profiles, include taxonomic information, distribution and habitat, distribution maps, conservation, natural history, captive maintenance, images, videos, collection records, and an extensive bibliography of the species included, and have been prepared by world-class specialists. A document establishes the standards followed in the preparation and maintenance of the cichlid catalogue. The site is administered by its creator and editor, Juan Miguel Artigas-Azas, a naturalist, who is also an aquarist and a nature photographer. In 2008, the American Cichlid Association (ACA) awarded Artigas-Azas the Guy Jordan Retrospective Award, which is the maximum honor that association gives to people who have made extensive contributions to the international cichlid hobby.
Contributions to public understanding of science
In the past decade, the Internet has fundamentally transformed the relationships between the scientific community and society as a whole, as the boundaries between public and private, professionals and hobbyists fade away; allowing for a wider range of participants to engage with science in unprecedented ways. The educational and citizens science task of the CRC has been acknowledged in the formal scientific literature, both as a source of d
Document 4:::
David P. Philipp is an American-born biologist known for his work on conservation genetics, reproductive ecology, and the effects of angling on fish populations. He is a conservation geneticist and Director of the Fisheries Genetics Lab at the Illinois Natural History Survey, an adjunct professor at the University of Illinois, and the Chair of the Fisheries Conservation Foundation. Philipp has supervised a number of graduate students including Steven J. Cooke, Cory Suski, Derek Aday, Jeff Koppelman, Jana Svec, Jimmy Ludden, Dale Burkett, Sascha Danylchuk and Jeff Stein.
Research
Philipp's research examines genetics, reproduction, and spatial ecology of fishes, and the effects of fisheries interactions on these dynamics in North America and the Caribbean. His early research examined centrarchid population genetics, gene expression, reproductive physiology and strategies, heritability of fish behaviour, and life history strategies.
More recently, Philipp's research has focused on the effects of fisheries interactions and environmental stressors on reproductive success, physiology, behavior, and survival of fishes.
Philipp's research revealed that populations of largemouth bass, Micropterus salmoides, in most of North America constitute a species separate from the Florida bass, M. floridanus, in Florida, and that stocking programs introducing Florida bass outside their native range have detrimental genetic effects on largemouth bass populations. Another research program showed that angling targets individual largemouth bass with certain behavioural and physiological characteristics, and in the process can cause evolutionary change in populations, including reduced parental care and reproductive success, as well as reduced angling success rates. Philipp is also involved with research programs in The Bahamas examining spatial ecology and the effects of fisheries interactions on bonefish.
Conservation activity
Philipp is a co-founder and Chair of the Fisheries C
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which ancient fish has just two living species and is at risk of extinction?
A. coelacanths
B. latimeria
C. squids
D. hominids
Answer:
|
|
sciq-5134
|
multiple_choice
|
What is a factor in determining weight but not mass?
|
[
"location",
"gravity",
"material",
"function"
] |
A
|
Relevant Documents:
Document 0:::
In common usage, the mass of an object is often referred to as its weight, though these are in fact different concepts and quantities. Nevertheless, one object will always weigh more than another with less mass if both are subject to the same gravity (i.e. the same gravitational field strength).
In scientific contexts, mass is the amount of "matter" in an object (though "matter" may be difficult to define), but weight is the force exerted on an object's matter by gravity. At the Earth's surface, an object whose mass is exactly one kilogram weighs approximately 9.81 newtons, the product of its mass and the gravitational field strength there. The object's weight is less on Mars, where gravity is weaker; more on Saturn, where gravity is stronger; and very small in space, far from significant sources of gravity, but it always has the same mass.
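The distinction is easy to see numerically; a minimal sketch in Python (the field strengths are approximate surface values):

# Weight W = m * g: the mass m is invariant; the weight depends on local gravity.
GRAVITY_N_PER_KG = {
    "Earth": 9.81,
    "Mars": 3.71,
    "Saturn": 10.44,   # at the 1-bar level of the atmosphere
}

mass_kg = 1.0          # the same kilogram everywhere
for body, g in GRAVITY_N_PER_KG.items():
    print(f"{body}: mass = {mass_kg} kg, weight = {mass_kg * g:.2f} N")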
Material objects at the surface of the Earth have weight, even though that weight is sometimes difficult to measure. An object floating freely on water, for example, does not appear to have weight since it is buoyed by the water. But its weight can be measured if it is added to water in a container which is entirely supported by and weighed on a scale. Thus, the "weightless object" floating in water actually transfers its weight to the bottom of the container (where the pressure increases). Similarly, a balloon has mass but may appear to have no weight or even negative weight, due to buoyancy in air. However the weight of the balloon and the gas inside it has merely been transferred to a large area of the Earth's surface, making the weight difficult to measure. The weight of a flying airplane is similarly distributed to the ground, but does not disappear. If the airplane is in level flight, the same weight-force is distributed to the surface of the Earth as when the plane was on the runway, but spread over a larger area.
A better scientific definition of mass is its description as being a measure of inertia, which is the tendency of an
Document 1:::
A proof mass or test mass is a known quantity of mass used in a measuring instrument as a reference for the measurement of an unknown quantity.
A mass used to calibrate a weighing scale is sometimes called a calibration mass or calibration weight.
A proof mass that deforms a spring in an accelerometer is sometimes called the seismic mass. In a convective accelerometer, a fluid proof mass may be employed.
See also
Calibration, checking or adjustment by comparison with a standard
Control variable, the experimental element that is constant and unchanged throughout the course of a scientific investigation
Test particle, an idealized model of an object in which all physical properties are assumed to be negligible, except for the property being studied
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 4:::
In physics and mechanics, mass distribution is the spatial distribution of mass within a solid body. In principle, it is relevant also for gases or liquids, but on Earth their mass distribution is almost homogeneous.
Astronomy
In astronomy mass distribution has a decisive influence on the development of, for example, nebulae, stars and planets.
The mass distribution of a solid defines its center of gravity and influences its dynamical behaviour, e.g. its oscillations and possible rotation.
Mathematical modelling
A mass distribution can be modeled as a measure. This allows point masses, line masses, surface masses, as well as masses given by a volume density function. Alternatively the latter can be generalized to a distribution. For example, a point mass is represented by a delta function defined in 3-dimensional space. A surface mass concentrated on a surface $S$ may be represented by a density distribution $\mu\,\delta_S$, where $\mu$ is the mass per unit area.
The mathematical modelling can be done by potential theory, by numerical methods (e.g. a great number of mass points), or by theoretical equilibrium figures.
Geology
In geology the aspects of rock density are involved.
Rotating solids
Rotating solids are affected considerably by their mass distribution, whether homogeneous or inhomogeneous; see Torque, moment of inertia, wobble, imbalance and stability.
See also
Bouguer plate
Gravity
Mass function
Mass concentration (astronomy)
External links
Mass distribution of the Earth
Mechanics
Celestial mechanics
Geophysics
Mass
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is a factor in determining weight but not mass?
A. location
B. gravity
C. material
D. function
Answer:
|
|
sciq-7112
|
multiple_choice
|
Where do archaea live?
|
[
"in mammals",
"in the ocean",
"everywhere",
"underground"
] |
C
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 1:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 2:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 3:::
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals.
Education
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
Bachelor degree
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
Pre-veterinary emphasis
Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th
Document 4:::
Animal geography is a subfield of the nature–society/human–environment branch of geography as well as a part of the larger, interdisciplinary umbrella of human–animal studies (HAS). Animal geography is defined as the study of "the complex entanglings of human–animal relations with space, place, location, environment and landscape" or "the study of where, when, why and how nonhuman animals intersect with human societies". Recent work advances these perspectives to argue about an ecology of relations in which humans and animals are enmeshed, taking seriously the lived spaces of animals themselves and their sentient interactions with not just human but other nonhuman bodies as well.
The Animal Geography Specialty Group of the Association of American Geographers was founded in 2009 by Monica Ogra and Julie Urbanik, and the Animal Geography Research Network was founded in 2011 by Daniel Allen.
Overview
First wave
The first wave of animal geography, known as zoogeography, came to prominence as a geographic subfield from the late 1800s through the early part of the 20th century. During this time the study of animals was seen as a key part of the discipline and the goal was "the scientific study of animal life with reference to the distribution of animals on the earth and the mutual influence of environment and animals upon each other". The animals that were the focus of studies were almost exclusively wild animals and zoogeographers were building on the new theories of evolution and natural selection. They mapped the evolution and movement of species across time and space and also sought to understand how animals adapted to different ecosystems. "The ambition was to establish general laws of how animals arranged themselves across the earth's surface or, at smaller scales, to establish patterns of spatial co-variation between animals and other environmental factors." Key works include Newbigin's Animal Geography, Bartholomew, Clarke, and Grimshaw's Atlas of Zoogeography
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Where do archaea live?
A. in mammals
B. in the ocean
C. everywhere
D. underground
Answer:
|
|
sciq-7313
|
multiple_choice
|
What term is used to describe the distance traveled divided by the time it took to travel that distance?
|
[
"speed",
"movement",
"motion",
"velocity"
] |
A
|
Relevant Documents:
Document 0:::
Velocity is the speed in combination with the direction of motion of an object. Velocity is a fundamental concept in kinematics, the branch of classical mechanics that describes the motion of bodies.
Velocity is a physical vector quantity: both magnitude and direction are needed to define it. The scalar absolute value (magnitude) of velocity is called speed, being a coherent derived unit whose quantity is measured in the SI (metric system) as metres per second (m/s or m⋅s−1). For example, "5 metres per second" is a scalar, whereas "5 metres per second east" is a vector. If there is a change in speed, direction or both, then the object is said to be undergoing an acceleration.
Constant velocity vs acceleration
To have a constant velocity, an object must have a constant speed in a constant direction. Constant direction constrains the object to motion along a straight path; thus, a constant velocity means motion in a straight line at a constant speed.
For example, a car moving at a constant 20 kilometres per hour in a circular path has a constant speed, but does not have a constant velocity because its direction changes. Hence, the car is considered to be undergoing an acceleration.
Difference between speed and velocity
While the terms speed and velocity are often colloquially used interchangeably to connote how fast an object is moving, in scientific terms they are different. Speed, the scalar magnitude of a velocity vector, denotes only how fast an object is moving, while velocity indicates both an object's speed and direction.
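As a brief illustration, the following Python sketch (hypothetical values) treats velocity as a 2-D vector and speed as its scalar magnitude; two different velocities can share the same speed:

```python
import math

# Sketch: speed is the scalar magnitude of the velocity vector,
# so "5 m/s east" and "5 m/s north" have equal speeds but are
# different velocities. Values are hypothetical.
v_east = (5.0, 0.0)    # (vx, vy) in m/s
v_north = (0.0, 5.0)

def speed(v: tuple) -> float:
    """Magnitude of a 2-D velocity vector."""
    return math.hypot(v[0], v[1])

print(speed(v_east), speed(v_north))   # 5.0 5.0  -> same speed
print(v_east == v_north)               # False    -> different velocities
```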
Equation of motion
Average velocity
Velocity is defined as the rate of change of position with respect to time, which may also be referred to as the instantaneous velocity to emphasize the distinction from the average velocity. In some applications the average velocity of an object might be needed, that is to say, the constant velocity that would provide the same resultant displacement as a variable velocity, $v(t)$, in the same time interval, over some
Document 1:::
Linear motion, also called rectilinear motion, is one-dimensional motion along a straight line, and can therefore be described mathematically using only one spatial dimension. The linear motion can be of two types: uniform linear motion, with constant velocity (zero acceleration); and non-uniform linear motion, with variable velocity (non-zero acceleration). The motion of a particle (a point-like object) along a line can be described by its position $x$, which varies with $t$ (time). An example of linear motion is an athlete running a 100-meter dash along a straight track.
Linear motion is the most basic of all motion. According to Newton's first law of motion, objects that do not experience any net force will continue to move in a straight line with a constant velocity until they are subjected to a net force. Under everyday circumstances, external forces such as gravity and friction can cause an object to change the direction of its motion, so that its motion cannot be described as linear.
One may compare linear motion to general motion. In general motion, a particle's position and velocity are described by vectors, which have a magnitude and direction. In linear motion, the directions of all the vectors describing the system are equal and constant which means the objects move along the same axis and do not change direction. The analysis of such systems may therefore be simplified by neglecting the direction components of the vectors involved and dealing only with the magnitude.
Background
Displacement
The motion in which all the particles of a body move through the same distance in the same time is called translatory motion. There are two types of translatory motion: rectilinear motion and curvilinear motion. Since linear motion is a motion in a single dimension, the distance traveled by an object in a particular direction is the same as its displacement. The SI unit of displacement is the metre. If $x_1$ is the initial position of an object and $x_2$ is the final position, then mat
Document 2:::
In mechanics, the derivative of the position vs. time graph of an object is equal to the velocity of the object. In the International System of Units, the position of the moving object is measured in meters relative to the origin, while the time is measured in seconds. Placing position on the y-axis and time on the x-axis, the slope of the curve is given by:

$v = \frac{\Delta y}{\Delta x} = \frac{\Delta s}{\Delta t}$

Here $s$ is the position of the object, and $t$ is the time. Therefore, the slope of the curve gives the change in position divided by the change in time, which is the definition of the average velocity for that interval of time on the graph. If this interval is made to be infinitesimally small, such that $\Delta s$ becomes $ds$ and $\Delta t$ becomes $dt$, the result is the instantaneous velocity at time $t$, or the derivative of the position with respect to time.
A similar fact also holds true for the velocity vs. time graph. The slope of a velocity vs. time graph is acceleration, this time placing velocity on the y-axis and time on the x-axis. Again the slope of a line is change in $y$ over change in $x$:

$a = \frac{\Delta y}{\Delta x} = \frac{\Delta v}{\Delta t}$

where $v$ is the velocity, and $t$ is the time. This slope therefore defines the average acceleration over the interval, and reducing the interval infinitesimally gives $\frac{dv}{dt}$, the instantaneous acceleration at time $t$, or the derivative of the velocity with respect to time (or the second derivative of the position with respect to time). In SI, this slope or derivative is expressed in the units of meters per second per second ($\mathrm{m/s^2}$, usually termed "meters per second-squared").
Since the velocity of the object is the derivative of the position graph, the area under the line in the velocity vs. time graph is the displacement of the object. (Velocity is on the y-axis and time on the x-axis. Multiplying the velocity by the time, the time cancels out, and only displacement remains.)
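A minimal numerical sketch of both facts, with made-up sample data: finite-difference slopes of a position–time series give per-interval velocities, and summing velocity × time recovers the displacement (the area under the velocity–time graph):

```python
# Sketch with made-up data: slopes of the position-time series give
# per-interval average velocities; summing v * dt gives the area under
# the (piecewise-constant) velocity-time graph, i.e. the displacement.
t = [0.0, 1.0, 2.0, 3.0]    # seconds
s = [0.0, 2.0, 8.0, 18.0]   # metres

v = [(s[i + 1] - s[i]) / (t[i + 1] - t[i]) for i in range(len(t) - 1)]
print(v)  # [2.0, 6.0, 10.0] -- slopes of the position-time graph

displacement = sum(vi * (t[i + 1] - t[i]) for i, vi in enumerate(v))
print(displacement, s[-1] - s[0])  # 18.0 18.0 -- area equals displacement
```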
The same multiplication rule holds true for acceleration vs. time graphs. When acceleration is multiplied
Variable rates of change
The expressions given above apply only when the rate o
Document 3:::
In physics, work is the energy transferred to or from an object via the application of force along a displacement. In its simplest form, for a constant force aligned with the direction of motion, the work equals the product of the force strength and the distance traveled. A force is said to do positive work if when applied it has a component in the direction of the displacement of the point of application. A force does negative work if it has a component opposite to the direction of the displacement at the point of application of the force.
For example, when a ball is held above the ground and then dropped, the work done by the gravitational force on the ball as it falls is positive, and is equal to the weight of the ball (a force) multiplied by the distance to the ground (a displacement). If the ball is thrown upwards, the work done by the gravitational force is negative, and is equal to the weight multiplied by the displacement in the upwards direction.
Both force and displacement are vectors. The work done is given by the dot product of the two vectors. When the force is constant and the angle $\theta$ between the force and the displacement is also constant, then the work done is given by:

$W = F s \cos\theta$
Work is a scalar quantity, so it has only magnitude and no direction. Work transfers energy from one place to another, or one form to another. The SI unit of work is the joule (J), the same unit as for energy.
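As a small worked sketch (illustrative numbers only), the constant-force formula $W = F s \cos\theta$ can be evaluated directly:

```python
import math

# Sketch: work done by a constant force F over displacement s at a
# constant angle theta, W = F * s * cos(theta). Numbers are illustrative.
def work(F: float, s: float, theta: float) -> float:
    """Work in joules for force F (N), displacement s (m), angle theta (rad)."""
    return F * s * math.cos(theta)

print(work(10.0, 5.0, 0.0))                    #  50.0 J: force along motion
print(work(10.0, 5.0, math.pi))                # -50.0 J: force opposing motion
print(round(work(10.0, 5.0, math.pi / 2), 9))  #   0.0 J: perpendicular force
```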
History
The ancient Greek understanding of physics was limited to the statics of simple machines (the balance of forces), and did not include dynamics or the concept of work. During the Renaissance the dynamics of the Mechanical Powers, as the simple machines were called, began to be studied from the standpoint of how far they could lift a load, in addition to the force they could apply, leading eventually to the new concept of mechanical work. The complete dynamic theory of simple machines was worked out by Italian scientist Galileo Galilei in 1600 in Le Meccaniche (On Me
Document 4:::
In mathematics, a rate is the quotient of two quantities in different units of measurement, often represented as a fraction. If the divisor (or fraction denominator) in the rate is equal to one expressed as a single unit, and if it is assumed that this quantity can be changed systematically (i.e., is an independent variable), then the dividend (the fraction numerator) of the rate expresses the corresponding rate of change in the other (dependent) variable.
Temporal rate is a common type of rate ("per unit of time"), such as speed, heart rate, and flux.
In fact, often rate is a synonym of rhythm or frequency, a count per second (i.e., hertz); e.g., radio frequencies or sample rates.
In describing the units of a rate, the word "per" is used to separate the units of the two measurements used to calculate the rate; for example, a heart rate is expressed as "beats per minute".
Rates that have a non-time divisor or denominator include exchange rates, literacy rates, and electric field (in volts per meter).
A rate defined using two numbers of the same units will result in a dimensionless quantity, also known as ratio or simply as a rate (such as tax rates) or counts (such as literacy rate). Dimensionless rates can be expressed as a percentage (for example, the global literacy rate in 1998 was 80%), fraction, or multiple.
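A short Python sketch with illustrative numbers: a rate of unlike units keeps its dimensions, while a rate of like units is dimensionless and can be printed as a percentage:

```python
# Sketch: rates as quotients. Unlike units give a dimensioned rate
# (m/s); like units give a dimensionless ratio, printable as a
# percentage. All numbers are illustrative.
distance_m, time_s = 100.0, 9.58
speed_mps = distance_m / time_s          # temporal rate, m/s
print(f"{speed_mps:.2f} m/s")

literate, population = 4.0e9, 5.0e9      # same unit: people
literacy_rate = literate / population    # dimensionless ratio
print(f"{literacy_rate:.0%}")            # -> 80%
```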
Properties and examples
Rates and ratios often vary with time, location, particular element (or subset) of a set of objects, etc. Thus they are often mathematical functions.
A rate (or ratio) may often be thought of as an output-input ratio, benefit-cost ratio, all considered in the broad sense. For example, miles per hour in transportation is the output (or benefit) in terms of miles of travel, which one gets from spending an hour (a cost in time) of traveling (at this velocity).
A set of sequential indices may be used to enumerate elements (or subsets) of a set of ratios under study. For example, in finance, one could define I by assigning con
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What term is used to describe the distance traveled divided by the time it took to travel that distance?
A. speed
B. movement
C. motion
D. velocity
Answer:
|
|
sciq-11136
|
multiple_choice
|
The kinetic energy of molecules is generally proportionate to what other property that they have?
|
[
"variation",
"precipitation",
"mass",
"temperature"
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
kT (also written as kBT) is the product of the Boltzmann constant, k (or kB), and the temperature, T. This product is used in physics as a scale factor for energy values in molecular-scale systems (sometimes it is used as a unit of energy), as the rates and frequencies of many processes and phenomena depend not on their energy alone, but on the ratio of that energy and kT, that is, on $E/kT$ (see Arrhenius equation, Boltzmann factor). For a system in equilibrium in the canonical ensemble, the probability of the system being in a state with energy E is proportional to $e^{-E/kT}$.
More fundamentally, kT is the amount of heat required to increase the thermodynamic entropy of a system by k.
In physical chemistry, as kT often appears in the denominator of fractions (usually because of the Boltzmann distribution), sometimes β = 1/kT is used instead of kT, turning $e^{-E/kT}$ into $e^{-\beta E}$.
RT
RT is the product of the molar gas constant, R, and the temperature, T. This product is used in physics and chemistry as a scaling factor for energy values in macroscopic scale (sometimes it is used as a pseudo-unit of energy), as many processes and phenomena depend not on the energy alone, but on the ratio of energy and RT, i.e. E/RT. The SI units for RT are joules per mole (J/mol).
It differs from kT only by a factor of the Avogadro constant, NA. Its dimension is energy or ML2T−2, expressed in SI units as joules (J):
kT = RT/NA
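A quick numerical check of these scale factors at room temperature, using the exact SI values of the constants (a minimal sketch):

```python
# Sketch: kT and RT at T = 298.15 K, using the exact SI-defined constants.
k_B = 1.380649e-23     # J/K, Boltzmann constant (exact since 2019 SI)
N_A = 6.02214076e23    # 1/mol, Avogadro constant (exact since 2019 SI)
R = k_B * N_A          # J/(mol*K), molar gas constant

T = 298.15
kT = k_B * T           # per-particle energy scale, ~4.12e-21 J
RT = R * T             # per-mole energy scale, ~2.48 kJ/mol
print(f"kT = {kT:.3e} J")
print(f"RT = {RT / 1000:.2f} kJ/mol")
print(f"kT == RT/N_A within rounding: {abs(kT - RT / N_A) < 1e-30}")
```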
Document 2:::
Specific kinetic energy is the kinetic energy of an object per unit of mass.
It is defined as $e_k = \tfrac{1}{2}v^2$,
where $e_k$ is the specific kinetic energy and $v$ is velocity. It has units of J/kg, which is equivalent to $\mathrm{m^2/s^2}$.
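A one-function sketch of this definition (illustrative input value):

```python
# Sketch: specific kinetic energy e_k = v^2 / 2, i.e. kinetic energy
# per unit mass, so the mass cancels out of E_k = m * v^2 / 2.
def specific_kinetic_energy(v: float) -> float:
    """e_k in J/kg (== m^2/s^2) for speed v in m/s."""
    return 0.5 * v ** 2

print(specific_kinetic_energy(10.0))  # 50.0 J/kg
```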
Energy (physics)
Document 3:::
Physical or chemical properties of materials and systems can often be categorized as being either intensive or extensive, according to how the property changes when the size (or extent) of the system changes.
The terms "intensive and extensive quantities" were introduced into physics by German mathematician Georg Helm in 1898, and by American physicist and chemist Richard C. Tolman in 1917.
According to International Union of Pure and Applied Chemistry (IUPAC), an intensive property or intensive quantity is one whose magnitude is independent of the size of the system.
An intensive property is not necessarily homogeneously distributed in space; it can vary from place to place in a body of matter and radiation. Examples of intensive properties include temperature, T; refractive index, n; density, ρ; and hardness, η.
By contrast, an extensive property or extensive quantity is one whose magnitude is additive for subsystems.
Examples include mass, volume and entropy.
Not all properties of matter fall into these two categories. For example, the square root of the volume is neither intensive nor extensive. If a system is doubled in size by juxtaposing a second identical system, the value of an intensive property equals the value for each subsystem and the value of an extensive property is twice the value for each subsystem. However, the property √V is instead multiplied by √2.
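A tiny sketch of the doubling test described above, with made-up values for one subsystem:

```python
import math

# Sketch: juxtaposing n identical subsystems multiplies extensive
# properties (mass, volume) by n, leaves intensive ones (density)
# unchanged, and multiplies sqrt(V) by sqrt(n) -- so sqrt(V) is neither.
mass, volume = 2.0, 1.0                  # kg, m^3 for one subsystem
for n in (1, 2):
    m, V = n * mass, n * volume
    print(n, m, V, m / V, round(math.sqrt(V), 4))
# density stays 2.0 kg/m^3; sqrt(V) grows by a factor of sqrt(2)
```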
Intensive properties
An intensive property is a physical quantity whose value does not depend on the amount of substance which was measured. The most obvious intensive quantities are ratios of extensive quantities. In a homogeneous system divided into two halves, all its extensive properties, in particular its volume and its mass, are divided into two halves. All its intensive properties, such as the mass per volume (mass density) or volume per mass (specific volume), must remain the same in each half.
The temperature of a system in thermal equilibrium is the same as the temperature of any part
Document 4:::
Physical biochemistry is a branch of biochemistry that deals with the theory, techniques, and methodology used to study the physical chemistry of biomolecules.
It also deals with the mathematical approaches for the analysis of biochemical reaction and the modelling of biological systems. It provides insight into the structure of macromolecules, and how chemical structure influences the physical properties of a biological substance.
It involves the use of physics, physical chemistry principles, and methodology to study biological systems. It employs various physical chemistry techniques such as chromatography, spectroscopy, electrophoresis, X-ray crystallography, electron microscopy, and hydrodynamics.
See also
Physical chemistry
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The kinetic energy of molecules is generally proportionate to what other property that they have?
A. variation
B. precipitation
C. mass
D. temperature
Answer:
|
|
sciq-10688
|
multiple_choice
|
What is the electrode at which reduction occurs called?
|
[
"cathine",
"cathode",
"anode",
"reducthode"
] |
B
|
Relevant Documents:
Document 0:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 1:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just as in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
The current of injury – also known as the demarcation current, Hermann's demarcation current or injury potential – is the electric current from the central part of the body to an injured nerve or muscle, or to another injured excitable tissue. The injured tissue has a negative voltage compared to the central part of the body.
History
The concept originates from the research of Carlo Matteucci and Emil du Bois-Reymond in the mid-19th century. It was later occasionally used in physiology textbooks, but is now mostly used in connection with heart damage (as listed in e.g. the index of Guyton's Textbook of Medical Physiology). Such manifestations in the heart may be seen in the electrocardiogram as Osborn waves.
It has been found by Elmer J. Lund that establishing an artificial electrical field causing a current mimicking the current of injury could facilitate regeneration. This potential for a regeneration therapy was further studied by Robert O. Becker, who described this work in his book The Body Electric. He found that the current of injury runs through the perineurium – through the myelin sheaths of the peripheral nerves.
Document 4:::
In electromagnetism and electronics, electromotive force (also electromotance, abbreviated emf, denoted or ) is an energy transfer to an electric circuit per unit of electric charge, measured in volts. Devices called electrical transducers provide an emf by converting other forms of energy into electrical energy. Other electrical equipment also produce an emf, such as batteries, which convert chemical energy, and generators, which convert mechanical energy. This energy conversion is achieved by physical forces applying physical work on electric charges. However, electromotive force itself is not a physical force, and ISO/IEC standards have deprecated the term in favor of source voltage or source tension instead (denoted ).
An electronic–hydraulic analogy may view emf as the mechanical work done to water by a pump, which results in a pressure difference (analogous to voltage).
In electromagnetic induction, emf can be defined around a closed loop of a conductor as the electromagnetic work that would be done on an elementary electric charge (such as an electron) if it travels once around the loop.
For two-terminal devices modeled as a Thévenin equivalent circuit, an equivalent emf can be measured as the open-circuit voltage between the two terminals. This emf can drive an electric current if an external circuit is attached to the terminals, in which case the device becomes the voltage source of that circuit.
Although an emf gives rise to a voltage, can be measured as a voltage, and may sometimes informally be called a "voltage", they are not the same phenomenon.
Overview
Devices that can provide emf include electrochemical cells, thermoelectric devices, solar cells, photodiodes, electrical generators, inductors, transformers and even Van de Graaff generators. In nature, emf is generated when magnetic field fluctuations occur through a surface. For example, the shifting of the Earth's magnetic field during a geomagnetic storm induces currents in an electr
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the electrode at which reduction occurs called?
A. cathine
B. cathode
C. anode
D. reducthode
Answer:
|
|
sciq-7936
|
multiple_choice
|
What generally sets the direction that technology takes?
|
[
"problems of society",
"random chance",
"local animals",
"local weather"
] |
A
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 1:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 2:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 3:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 4:::
Tech City College (formerly STEM Academy) is a free school sixth form located in the Islington area of the London Borough of Islington, England.
It originally opened in September 2013 as STEM Academy Tech City and specialised in Science, Technology, Engineering and Maths (STEM) and the Creative Application of Maths and Science. In September 2015, STEM Academy joined the Aspirations Academy Trust and was renamed Tech City College. Tech City College offers A-levels and BTECs as programmes of study for students.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What generally sets the direction that technology takes?
A. problems of society
B. random chance
C. local animals
D. local weather
Answer:
|
|
sciq-2399
|
multiple_choice
|
A complete ionic equation is a chemical equation in which the dissolved ionic compounds are written as what?
|
[
"charged ions",
"joined ions",
"separated ions",
"realized ions"
] |
C
|
Relevant Documents:
Document 0:::
The use of ionic liquids in carbon capture is a potential application of ionic liquids as absorbents for use in carbon capture and sequestration. Ionic liquids, which are salts that exist as liquids near room temperature, are polar, nonvolatile materials that have been considered for many applications. The urgency of climate change has spurred research into their use in energy-related applications such as carbon capture and storage.
Carbon capture using absorption
Ionic liquids as solvents
Amines are the most prevalent absorbent in postcombustion carbon capture technology today. In particular, monoethanolamine (MEA) has been used at industrial scales in postcombustion carbon capture, as well as in other CO2 separations, such as "sweetening" of natural gas. However, amines are corrosive, degrade over time, and require large industrial facilities. Ionic liquids, on the other hand, have low vapor pressures. This property results from their strong Coulombic attractive forces. Vapor pressure remains low through the substance's thermal decomposition point (typically >300 °C). In principle, this low vapor pressure simplifies their use and makes them "green" alternatives. Additionally, it reduces the risk of contamination of the CO2 gas stream and of leakage into the environment.
The solubility of CO2 in ionic liquids is governed primarily by the anion, less so by the cation. The hexafluorophosphate (PF6–) and tetrafluoroborate (BF4–) anions have been shown to be especially amenable to CO2 capture.
Ionic liquids have been considered as solvents in a variety of liquid-liquid extraction processes, but never commercialized. Besides that, ionic liquids have replaced conventional volatile solvents in industrial processes such as gas absorption and extractive distillation. Additionally, ionic liquids are used as co-solutes for the generation of aqueous biphasic systems, or purification of biomolecules.
Process
A typical CO2 absorption process consists of a feed gas, an absorptio
Document 1:::
The ionic strength of a solution is a measure of the concentration of ions in that solution. Ionic compounds, when dissolved in water, dissociate into ions. The total electrolyte concentration in solution will affect important properties such as the dissociation constant or the solubility of different salts. One of the main characteristics of a solution with dissolved ions is the ionic strength. Ionic strength can be molar (mol/L solution) or molal (mol/kg solvent) and to avoid confusion the units should be stated explicitly. The concept of ionic strength was first introduced by Lewis and Randall in 1921 while describing the activity coefficients of strong electrolytes.
Quantifying ionic strength
The molar ionic strength, I, of a solution is a function of the concentration of all ions present in that solution:

$I = \frac{1}{2}\sum_{i=1}^{n} c_i z_i^2$

where one half is because we are including both cations and anions, $c_i$ is the molar concentration of ion i (M, mol/L), $z_i$ is the charge number of that ion, and the sum is taken over all ions in the solution. For a 1:1 electrolyte such as sodium chloride, where each ion is singly-charged, the ionic strength is equal to the concentration. For the electrolyte MgSO4, however, each ion is doubly-charged, leading to an ionic strength that is four times higher than an equivalent concentration of sodium chloride:

$I = \frac{1}{2}\left(c \cdot 2^2 + c \cdot 2^2\right) = 4c$
Generally multivalent ions contribute strongly to the ionic strength.
Calculation example
As a more complex example, the ionic strength of a mixed solution 0.050 M in Na2SO4 and 0.020 M in KCl is:

$I = \frac{1}{2}\left(0.100 \cdot 1^2 + 0.050 \cdot 2^2 + 0.020 \cdot 1^2 + 0.020 \cdot 1^2\right) = 0.170\ \mathrm{M}$
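The same bookkeeping can be done programmatically; a minimal sketch reproducing the worked example above (ion concentrations and charges as given in the text):

```python
# Sketch: molar ionic strength I = 0.5 * sum(c_i * z_i**2), applied to
# the mixed solution above (0.050 M Na2SO4 + 0.020 M KCl).
def ionic_strength(ions):
    """ions: iterable of (molar concentration, charge number) pairs."""
    return 0.5 * sum(c * z ** 2 for c, z in ions)

ions = [
    (0.100, +1),   # Na+   (2 x 0.050 M from Na2SO4)
    (0.050, -2),   # SO4^2-
    (0.020, +1),   # K+
    (0.020, -1),   # Cl-
]
print(ionic_strength(ions))  # 0.17 (mol/L)
```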
Non-ideal solutions
Because in non-ideal solutions volumes are no longer strictly additive it is often preferable to work with molality b (mol/kg of H2O) rather than molarity c (mol/L). In that case, molal ionic strength is defined as:

$I = \frac{1}{2}\sum_{i=1}^{n} b_i z_i^2$
in which
i = ion identification number
z = charge of ion
b = molality (mol solute per Kg solvent)
Importance
The ionic strength plays a central role in the Debye–Hückel theory that describes the strong deviations from id
Document 2:::
The Bromley equation was developed in 1973 by Leroy A. Bromley with the objective of calculating activity coefficients for aqueous electrolyte solutions whose concentrations are above the range of validity of the Debye–Hückel equation. This equation, together with Specific ion interaction theory (SIT) and Pitzer equations is important for the understanding of the behaviour of ions dissolved in natural waters such as rivers, lakes and sea-water.
Description
Guggenheim had proposed an extension of the Debye–Hückel equation which is the basis of SIT theory. The equation can be written, in its simplest form for a 1:1 electrolyte, MX, as

$$\log \gamma_{\pm} = -\frac{A\sqrt{I}}{1+\sqrt{I}} + \beta b$$

γ± is the mean molal activity coefficient. The first term on the right-hand side is the Debye–Hückel term, with a constant, A, and the ionic strength I. β is an interaction coefficient and b the molality of the electrolyte. As the concentration decreases so the second term becomes less important until, at very low concentrations, the Debye–Hückel equation gives a satisfactory account of the activity coefficient.
Leroy A. Bromley observed that experimental values of $\log \gamma_{\pm}$ were often approximately proportional to ionic strength. Accordingly, he developed the equation, for a salt of general formula $\mathrm{M}_p\mathrm{X}_q$,

$$\log \gamma_{\pm} = -\frac{A_\gamma\,|z_+ z_-|\,\sqrt{I}}{1+\rho\sqrt{I}} + \frac{(0.06+0.6B)\,|z_+ z_-|\,I}{\left(1+\frac{1.5}{|z_+ z_-|}\,I\right)^{2}} + BI$$

At 25 °C Aγ is equal to 0.511 and ρ is equal to one. Bromley tabulated values of the interaction coefficient B. He noted that the equation gave satisfactory agreement with experimental data up to ionic strength of 6 molal, though with decreasing precision when extrapolating to very high ionic strength. As with other equations, it is not satisfactory when there is ion association as, for example, with divalent metal sulfates. Bromley also found that B could be expressed in terms of single-ion quantities as

$$B = B_+ + B_- + \delta_+ \delta_-$$

where the + subscript refers to a cation and the minus subscript refers to an anion. Bromley's equation can easily be transformed for the calculation of osmotic coefficients, and Bromley also proposed extensions to multicomponent solutions and for
where the + subscript refers to a cation and the minus subscript refers to an anion. Bromley's equation can easily be transformed for the calculation of osmotic coefficients, and Bromley also proposed extensions to multicomponent solutions and for
Document 3:::
An ionic liquid (IL) is a salt in the liquid state. In some contexts, the term has been restricted to salts whose melting point is below a specific temperature, such as 100 °C. While ordinary liquids such as water and gasoline are predominantly made of electrically neutral molecules, ionic liquids are largely made of ions. These substances are variously called liquid electrolytes, ionic melts, ionic fluids, fused salts, liquid salts, or ionic glasses.
Ionic liquids have many potential applications. They are powerful solvents and can be used as electrolytes. Salts that are liquid at near-ambient temperature are important for electric battery applications, and have been considered as sealants due to their very low vapor pressure.
Any salt that melts without decomposing or vaporizing usually yields an ionic liquid. Sodium chloride (NaCl), for example, melts at 801 °C into a liquid that consists largely of sodium cations (Na+) and chloride anions (Cl−). Conversely, when an ionic liquid is cooled, it often forms an ionic solid, which may be either crystalline or glassy.
The ionic bond is usually stronger than the Van der Waals forces between the molecules of ordinary liquids. Because of these strong interactions, salts tend to have high lattice energies, manifested in high melting points. Some salts, especially those with organic cations, have low lattice energies and thus are liquid at or below room temperature. Examples include compounds based on the 1-ethyl-3-methylimidazolium (EMIM) cation, such as EMIM:Cl, EMIMAc (acetate anion), and EMIM dicyanamide, (C2H5)(CH3)C3H3N2+·N(CN)2−, which melts at −21 °C; and 1-butyl-3,5-dimethylpyridinium bromide, which becomes a glass below −24 °C.
Low-temperature ionic liquids can be compared to ionic solutions, liquids that contain both ions and neutral molecules, and in particular to the so-called deep eutectic solvents, mixtures of ionic and non-ionic solid substances which have much lower melting points than the pure compounds. Certain mixtures of nitrate salts can have melt
Document 4:::
Ionic transfer is the transfer of ions from one liquid phase to another. It is related to phase-transfer catalysis, a technique used in synthetic chemistry that relies on a special type of liquid-liquid extraction.
For instance, nitrate anions can be transferred between water and nitrobenzene. One way to observe this is to use a cyclic voltammetry experiment in which the liquid-liquid interface is the working electrode. This can be done by placing a secondary electrode in each phase and, close to the interface, a reference electrode in each phase. One phase is attached to a potentiostat held at zero volts, while the other potentiostat is driven with a triangular wave. This experiment is known as a polarised interface between two immiscible electrolyte solutions (ITIES) experiment.
See also
Diffusion potential
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A complete ionic equation is a chemical equation in which the dissolved ionic compounds are written as what?
A. charged ions
B. joined ions
C. separated ions
D. realized ions
Answer:
|
|
sciq-1810
|
multiple_choice
|
What is the minimum number of loops a circuit can have?
|
[
"3",
"2",
"1",
".1"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
Document 2:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 3:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods included
Document 4:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam with no computer-based version. ETS administered this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required this exam score as a part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the minimum number of loops a circuit can have?
A. 3
B. 2
C. 1
D. .1
Answer:
|
|
sciq-7483
|
multiple_choice
|
What is the name of earth’s only natural satellite?
|
[
"sun",
"moon",
"venus",
"titan"
] |
B
|
Relevant Documents:
Document 0:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. The hope is that primary school children develop an early interest in these subjects, that secondary school pupils then choose science A levels, and that this leads on to science careers. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it has received funding from the Department for Children, Schools and Families and the Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 1:::
Sir Isaac Newton Sixth Form is a specialist maths and science sixth form with free school status located in Norwich, owned by the Inspiration Trust. It has the capacity for 480 students aged 16–19. It specialises in mathematics and science.
History
Prior to becoming a Sixth Form College, the building functioned as a fire station serving the central Norwich area until August 2011, when it closed down. Two years later the Sixth Form was created within the empty building, with various additions made to the existing structure. The sixth form was ranked the 7th best state sixth form in England by the Times in 2022.
Curriculum
At Sir Isaac Newton Sixth Form, students can study a choice of either Maths, Further Maths, Core Maths, Biology, Chemistry, Physics, Computer Science, Environmental Science or Psychology. Additionally, students can also study any of the subjects on offer at the partner free school Jane Austen College, also located in Norwich and specialising in humanities, Arts and English.
Document 2:::
The Inverness Campus is an area in Inverness, Scotland. 5.5 hectares of the site have been designated as an enterprise area for life sciences by the Scottish Government. This designation is intended to encourage research and development in the field of life sciences, by providing incentives to locate at the site.
The enterprise area is part of a larger site, over 200 acres, which will house Inverness College, Scotland's Rural College (SRUC), the University of the Highlands and Islands, a health science centre and sports and other community facilities. The purpose-built research hub will provide space for up to 30 staff and researchers, allowing better collaboration.
The Highland Science Academy will be located on the site, a collaboration formed by Highland Council, employers and public bodies. The academy will be aimed towards assisting young people to gain the necessary skills to work in the energy, engineering and life sciences sectors.
History
The site was identified in 2006. Work started to develop the infrastructure on the site in early 2012. A virtual tour was made available in October 2013 to help mark Doors Open Day.
The construction had reached the halfway stage in May 2014, meaning that it was on track to open its doors to its first students in August 2015.
In May 2014, work was due to commence on a building designed to provide office space and laboratories as part of the campus's "life science" sector. Morrison Construction have been appointed to undertake the building work.
Scotland's Rural College (SRUC) will be able to relocate their Inverness-based activities to the Campus. SRUC's research centre for Comparative Epidemiology and Medicine, and Agricultural Business Consultancy services could co-locate with UHI where their activities have complementary themes.
By the start of 2017, there were more than 600 people working at the site.
In June 2021, a new bridge opened connecting Inverness Campus to Inverness Shopping Park. It crosses the Aberdeen
Document 3:::
Tech City College (Formerly STEM Academy) is a free school sixth form located in the Islington area of the London Borough of Islington, England.
It originally opened in September 2013 as STEM Academy Tech City and specialised in Science, Technology, Engineering and Maths (STEM) and the Creative Application of Maths and Science. In September 2015, STEM Academy joined the Aspirations Academy Trust and was renamed Tech City College. Tech City College offers A-levels and BTECs as programmes of study for students.
Document 4:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools to which the student was planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular was 630 while Ecological was 591.
On January 19, 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the name of earth’s only natural satellite?
A. sun
B. moon
C. venus
D. titan
Answer:
|
|
sciq-10547
|
multiple_choice
|
What are unsaturated hydrocarbons with at least one triple bond between carbon atoms?
|
[
"ketones",
"alkynes",
"lipids",
"amines"
] |
B
|
Relevant Documents:
Document 0:::
In chemistry, the carbon-hydrogen bond (C–H bond) is a chemical bond between carbon and hydrogen atoms that can be found in many organic compounds. This bond is a covalent, single bond, meaning that carbon shares its outer valence electrons with up to four hydrogens. This completes both of their outer shells, making them stable.
Carbon–hydrogen bonds have a bond length of about 1.09 Å (1.09 × 10⁻¹⁰ m) and a bond energy of about 413 kJ/mol. Using Pauling's scale, with electronegativities of 2.55 for carbon and 2.2 for hydrogen, the difference between these two atoms is 0.35. Because of this small difference in electronegativities, the bond is generally regarded as being non-polar. In structural formulas of molecules, the hydrogen atoms are often omitted. Compound classes consisting solely of C–H and C–C bonds are alkanes, alkenes, alkynes, and aromatic hydrocarbons. Collectively they are known as hydrocarbons.
In October 2016, astronomers reported that the very basic chemical ingredients of life—the carbon-hydrogen molecule (CH, or methylidyne radical), the carbon-hydrogen positive ion () and the carbon ion ()—are the result, in large part, of ultraviolet light from stars, rather than in other ways, such as the result of turbulent events related to supernovae and young stars, as thought earlier.
Bond length
The length of the carbon-hydrogen bond varies slightly with the hybridisation of the carbon atom. A bond between a hydrogen atom and an sp2 hybridised carbon atom is about 0.6% shorter than between hydrogen and sp3 hybridised carbon. A bond between hydrogen and sp hybridised carbon is shorter still, about 3% shorter than sp3 C-H. This trend is illustrated by the molecular geometry of ethane, ethylene and acetylene.
Reactions
The C−H bond in general is very strong, so it is relatively unreactive. In several compound classes, collectively called carbon acids, the C−H bond can be sufficiently acidic for proton removal. Unactivated C−H bonds are found in alkanes and are no
Document 1:::
A carbon–carbon bond is a covalent bond between two carbon atoms. The most common form is the single bond: a bond composed of two electrons, one from each of the two atoms. The carbon–carbon single bond is a sigma bond and is formed between one hybridized orbital from each of the carbon atoms. In ethane, the orbitals are sp3-hybridized orbitals, but single bonds formed between carbon atoms with other hybridizations do occur (e.g. sp2 to sp2). In fact, the carbon atoms in the single bond need not be of the same hybridization. Carbon atoms can also form double bonds in compounds called alkenes or triple bonds in compounds called alkynes. A double bond is formed with an sp2-hybridized orbital and a p-orbital that is not involved in the hybridization. A triple bond is formed with an sp-hybridized orbital and two p-orbitals from each atom. The use of the p-orbitals forms a pi bond.
Chains and branching
Carbon is one of the few elements that can form long chains of its own atoms, a property called catenation. This coupled with the strength of the carbon–carbon bond gives rise to an enormous number of molecular forms, many of which are important structural elements of life, so carbon compounds have their own field of study: organic chemistry.
Branching is also common in C−C skeletons. Carbon atoms in a molecule are categorized by the number of carbon neighbors they have:
A primary carbon has one carbon neighbor.
A secondary carbon has two carbon neighbors.
A tertiary carbon has three carbon neighbors.
A quaternary carbon has four carbon neighbors.
In "structurally complex organic molecules", it is the three-dimensional orientation of the carbon–carbon bonds at quaternary loci which dictates the shape of the molecule. Further, quaternary loci are found in many biologically active small molecules, such as cortisone and morphine.
Synthesis
Carbon–carbon bond-forming reactions are organic reactions in which a new carbon–carbon bond is formed. They are important in th
Document 2:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam with no computer-based version. ETS administered this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required this exam score as a part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 3:::
Glycerol dialkyl glycerol tetraether lipids (GDGTs) are a class of membrane lipids synthesized by archaea and some bacteria, making them useful biomarkers for these organisms in the geological record. Their presence, structure, and relative abundances in natural materials can be useful as proxies for temperature, terrestrial organic matter input, and soil pH for past periods in Earth history. Some structural forms of GDGT form the basis for the TEX86 paleothermometer. Isoprenoid GDGTs, now known to be synthesized by many archaeal classes, were first discovered in extremophilic archaea cultures. Branched GDGTs, likely synthesized by acidobacteriota, were first discovered in a natural Dutch peat sample in 2000.
Chemical structure
The two primary structural classes of GDGTs are isoprenoid (isoGDGT) and branched (brGDGT), which refer to differences in the carbon skeleton structures. Isoprenoid compounds are numbered -0 through -8, with the numeral representing the number of cyclopentane rings present within the carbon skeleton structure. The exception is crenarchaeol, a Nitrososphaerota product with one cyclohexane ring moiety in addition to four cyclopentane rings. Branched GDGTs have zero, one, or two cyclopentane moieties and are further classified based on the positioning of their branches. They are numbered with roman numerals and letters, with -I indicating structures with four modifications (i.e. either a branch or a cyclopentane moiety), -II indicating structures with five modifications, and -III indicating structures with six modifications. The suffix a after the roman numeral means one of its modifications is a cyclopentane moiety; b means two modifications are cyclopentane moieties. For example, GDGT-IIb is a compound with three branches and two cyclopentane moieties (a total of five modifications). GDGTs form as monolayers and with ether bonds to glycerol, as opposed to bilayers with ester bonds as is the case in eukaryotes and most bacteria.
Biologi
Document 4:::
Bisnorhopanes (BNH) are a group of demethylated hopanes found in oil shales across the globe and can be used for understanding depositional conditions of the source rock. The most common member, 28,30-bisnorhopane, can be found in high concentrations in petroleum source rocks, most notably the Monterey Shale, as well as in oil and tar samples. 28,30-Bisnorhopane was first identified in samples from the Monterey Shale Formation in 1985. It occurs in abundance throughout the formation and appears in stratigraphically analogous locations along the California coast. Since its identification and analysis, 28,30-bisnorhopane has been discovered in oil shales around the globe, including lacustrine and offshore deposits of Brazil, silicified shales of the Eocene in Gabon, the Kimmeridge Clay Formation in the North Sea, and in Western Australian oil shales.
Chemistry
28,30-bisnorhopane exists in three epimers: 17α,18α,21β(H), 17β,18α,21α(H), and 17β,18α,21β(H). During GC-MS, the three epimers coelute and are nearly indistinguishable. However, mass spectral fragmentation of 28,30-bisnorhopane is predominantly characterized by m/z 191, 177, and 163. The ratios of the 163/191 fragments can be used to distinguish the epimers, where the βαβ orientation has the highest m/z 163/191 ratio. Further, the D/E ring ratios can be used to create a hierarchy of epimer maturity. From this, the ααβ epimer is believed to be the first formed diagenetically, which is supported by its dominance in younger shales. 28,30-bisnorhopane is created independently from kerogen, being derived instead from bitumen as free oil-hydrocarbons. As such, as oil generation increases with source maturation, the concentration of 28,30-bisnorhopane decreases. Bisnorhopane may not be a reliable diagnostic for oil maturity due to microbial biodegradation.
Nomenclature
Norhopanes are a family of demethylated hopanes, identical to the methylated hopane structure, minus indicated desmet
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are unsaturated hydrocarbons with at least one triple bond between carbon atoms?
A. ketones
B. alkynes
C. lipids
D. amines
Answer:
|
|
sciq-8336
|
multiple_choice
|
What is the term for the distance between two corresponding points on adjacent waves?
|
[
"wavelength",
"variation",
"bandwidth",
"arc wave"
] |
A
|
Relevant Documents:
Document 0:::
A wavenumber–frequency diagram is a plot displaying the relationship between the wavenumber (spatial frequency) and the frequency (temporal frequency) of certain phenomena. Usually frequencies are placed on the vertical axis, while wavenumbers are placed on the horizontal axis.
In the atmospheric sciences, these plots are a common way to visualize atmospheric waves.
In the geosciences, especially seismic data analysis, these plots are also called f–k plots, in which energy density within a given time interval is contoured on a frequency-versus-wavenumber basis. They are used to examine the direction and apparent velocity of seismic waves and in velocity filter design.
Origins
In general, the relationship between wavelength $\lambda$, frequency $\nu$, and the phase velocity $v_p$ of a sinusoidal wave is:

$$v_p = \lambda \nu$$

Using the wavenumber ($k = 2\pi/\lambda$) and angular frequency ($\omega = 2\pi\nu$) notation, the previous equation can be rewritten as

$$\omega = v_p k$$

On the other hand, the group velocity is equal to the slope of the wavenumber–frequency diagram:

$$v_g = \frac{\partial \omega}{\partial k}$$
Analyzing such relationships in detail often yields information on the physical properties of the medium, such as density, composition, etc.
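These relations are easy to check numerically. The Python sketch below estimates phase and group velocity from a dispersion relation; the deep-water gravity-wave relation ω = √(gk) is used purely as an assumed example, and the function names are illustrative.

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def omega(k):
    # Assumed example dispersion relation: deep-water gravity waves.
    return math.sqrt(g * k)

def phase_velocity(k):
    return omega(k) / k

def group_velocity(k, dk=1e-6):
    # Slope of the wavenumber-frequency diagram, d(omega)/dk,
    # approximated here by a central difference.
    return (omega(k + dk) - omega(k - dk)) / (2 * dk)

k = 2 * math.pi / 100.0  # wavenumber for a 100 m wavelength
print(phase_velocity(k))  # ~12.5 m/s
print(group_velocity(k))  # ~6.2 m/s: half the phase speed for this relation
```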
See also
Dispersion relation
Document 1:::
A crest point on a wave is the maximum value of upward displacement within a cycle. A crest is a point on a surface wave where the displacement of the medium is at a maximum. A trough is the opposite of a crest, so the minimum or lowest point in a cycle.
When the crests and troughs of two sine waves of equal amplitude and frequency intersect or collide, while being in phase with each other, the result is called constructive interference and the magnitudes double (above and below the line). When in antiphase – 180° out of phase – the result is destructive interference: the resulting wave is the undisturbed line having zero amplitude.
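As a quick numerical illustration of the superposition just described, the following Python sketch (all names illustrative) sums two equal sine waves once in phase and once in antiphase.

```python
import math

def wave(a, k, w, phase):
    # Returns a travelling sine wave a*sin(k*x - w*t + phase).
    return lambda x, t: a * math.sin(k * x - w * t + phase)

w1 = wave(1.0, 2.0, 3.0, 0.0)
w2_in_phase = wave(1.0, 2.0, 3.0, 0.0)
w2_antiphase = wave(1.0, 2.0, 3.0, math.pi)  # 180 degrees out of phase

x, t = 0.3, 0.1
print(w1(x, t) + w2_in_phase(x, t))   # doubled amplitude: constructive
print(w1(x, t) + w2_antiphase(x, t))  # ~0 everywhere: destructive
```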
See also
Crest factor
Superposition principle
Wave
Document 2:::
Sinuosity, sinuosity index, or sinuosity coefficient of a continuously differentiable curve having at least one inflection point is the ratio of the curvilinear length (along the curve) and the Euclidean distance (straight line) between the end points of the curve. This dimensionless quantity can also be rephrased as the "actual path length" divided by the "shortest path length" of a curve.
The value ranges from 1 (case of straight line) to infinity (case of a closed loop, where the shortest path length is zero or for an infinitely-long actual path).
Interpretation
The curve must be continuous (no jump) between the two ends. The sinuosity value is most meaningful when the line is continuously differentiable (no angular point). The distance between the two ends can also be evaluated as a series of segments along a broken line passing through the successive inflection points (sinuosity of order 2).
The calculation of the sinuosity is valid in a 3-dimensional space (e.g. for the central axis of the small intestine), although it is often performed in a plane (with then a possible orthogonal projection of the curve in the selected plane; "classic" sinuosity on the horizontal plane, longitudinal profile sinuosity on the vertical plane).
The classification of a sinuosity (e.g. strong / weak) often depends on the cartographic scale of the curve (see the coastline paradox for further details) and of the object velocity which flowing therethrough (river, avalanche, car, bicycle, bobsleigh, skier, high speed train, etc.): the sinuosity of the same curved line could be considered very strong for a high speed train but low for a river. Nevertheless, it is possible to see a very strong sinuosity in the succession of few river bends, or of laces on some mountain roads.
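For a curve given as sampled points, the sinuosity can be computed directly from the definition. The Python sketch below (function name and sampling scheme illustrative) reproduces the two-inverted-semicircles value listed under "Notable values" next.

```python
import math

def sinuosity(points):
    """Path length divided by straight-line distance between endpoints.
    points: ordered list of (x, y) vertices sampled along the curve."""
    path = sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))
    chord = math.dist(points[0], points[-1])
    return path / chord

# Two inverted semicircles of radius 1, sampled finely: sinuosity -> pi/2
pts = [(1 - math.cos(i * math.pi / 500), math.sin(i * math.pi / 500))
       for i in range(501)]
pts += [(3 - math.cos(i * math.pi / 500), -math.sin(i * math.pi / 500))
        for i in range(1, 501)]
print(sinuosity(pts))  # ~1.5708 (= pi/2), independent of the radius
```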
Notable values
The sinuosity S of:
2 inverted continuous semicircles located in the same plane is π/2 ≈ 1.5708. It is independent of the circle radius;
a sine function (over a whole number n of half-periods), wh
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 4:::
In fluid dynamics, dispersion of water waves generally refers to frequency dispersion, which means that waves of different wavelengths travel at different phase speeds. Water waves, in this context, are waves propagating on the water surface, with gravity and surface tension as the restoring forces. As a result, water with a free surface is generally considered to be a dispersive medium.
For a certain water depth, surface gravity waves – i.e. waves occurring at the air–water interface and gravity as the only force restoring it to flatness – propagate faster with increasing wavelength. On the other hand, for a given (fixed) wavelength, gravity waves in deeper water have a larger phase speed than in shallower water. In contrast with the behavior of gravity waves, capillary waves (i.e. only forced by surface tension) propagate faster for shorter wavelengths.
Besides frequency dispersion, water waves also exhibit amplitude dispersion. This is a nonlinear effect, by which waves of larger amplitude have a different phase speed from small-amplitude waves.
Frequency dispersion for surface gravity waves
This section is about frequency dispersion for waves on a fluid layer forced by gravity, and according to linear theory. For surface tension effects on frequency dispersion, see surface tension effects in Airy wave theory and capillary wave.
Wave propagation and dispersion
The simplest propagating wave of unchanging form is a sine wave. A sine wave with water surface elevation η(x, t) is given by:

$$\eta(x, t) = a \sin \theta(x, t)$$

where a is the amplitude (in metres) and θ = θ(x, t) is the phase function (in radians), depending on the horizontal position (x, in metres) and time (t, in seconds):

$$\theta = 2\pi\left(\frac{x}{\lambda} - \frac{t}{T}\right) = kx - \omega t$$

with $k = 2\pi/\lambda$ and $\omega = 2\pi/T$,
where:
where:
λ is the wavelength (in metres),
T is the period (in seconds),
k is the wavenumber (in radians per metre) and
ω is the angular frequency (in radians per second).
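To make the wavelength and depth dependence concrete, here is a minimal Python sketch. It uses the standard linear-theory dispersion relation ω² = gk tanh(kh), which is not written out in this excerpt, so treat it as an assumption consistent with Airy wave theory; the function name is illustrative.

```python
import math

g = 9.81  # m/s^2

def phase_speed(wavelength, depth):
    # Linear (Airy) theory: omega^2 = g * k * tanh(k * depth),
    # so c = omega / k = sqrt((g / k) * tanh(k * depth)).
    k = 2 * math.pi / wavelength
    return math.sqrt(g / k * math.tanh(k * depth))

# Longer waves travel faster (frequency dispersion) ...
print(phase_speed(10, depth=1000), phase_speed(100, depth=1000))
# ... and a given wavelength travels faster in deeper water.
print(phase_speed(100, depth=5), phase_speed(100, depth=1000))
```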
Characteristic phases of a water wave are:
the upward zero-crossing at θ = 0,
the wave crest at θ = ½ π,
th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for the distance between two corresponding points on adjacent waves?
A. wavelength
B. variation
C. bandwidth
D. arc wave
Answer:
|
|
sciq-809
|
multiple_choice
|
How many eyelid membranes do frogs have?
|
[
"four",
"two",
"three",
"one"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools to which the student was planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular was 630 while Ecological was 591.
On January 19, 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 2:::
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices".
This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as reflected in its exam grade distributions.
Topic outline
The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area:
The course is based on and tests six skills, called scientific practices which include:
In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.
Exam
Students are allowed to use a four-function, scientific, or graphing calculator.
The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score.
Score distribution
Commonly used textbooks
Biology, AP Edition by Sylvia Mader (2012, hardcover)
Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis)
Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson)
See also
Glossary of biology
A.P. Bio (TV show)
Document 3:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam with no computer-based version. ETS administered this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required this exam score as a part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 4:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods included
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How many eyelid membranes do frogs have?
A. four
B. two
C. three
D. one
Answer:
|
|
sciq-7513
|
multiple_choice
|
The validity of thought experiments, of course, is determined by this?
|
[
"actual observation",
"hypothetical observation",
"predictive observation",
"theoretical observation"
] |
A
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
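To make the set-of-states picture concrete, here is a minimal Python sketch. It checks closure under union, the standard condition that distinguishes a knowledge space from a general knowledge structure; the domain and state names are invented for illustration.

```python
from itertools import combinations

def is_union_closed(states):
    """True if the union of any two knowledge states is itself a state,
    the defining closure property of a knowledge space."""
    states = {frozenset(s) for s in states}
    return all(a | b in states for a, b in combinations(states, 2))

Q = {"counting", "addition", "multiplication"}
states = [set(), {"counting"}, {"counting", "addition"}, Q]
print(is_union_closed(states))  # True: a feasible learning-path structure

states.append({"multiplication"})  # multiplication without its prerequisite
print(is_union_closed(states))  # False: {"counting","multiplication"} missing
```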
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earning a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods included
Document 3:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4): [the example figure is not reproduced here]
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 4:::
The SAT Subject Test in Biology was the name of a one-hour multiple-choice test on biology given by the College Board. A student chose whether to take the test depending upon the college entrance requirements of the schools to which the student planned to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all the SAT Subject Tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between an ecological and a molecular emphasis. A set of 60 questions was taken by all Biology test takers, with a further choice of 20 questions from either the E or the M section. The test was graded on a scale between 200 and 800. The average score for the Molecular test was 630, while for the Ecological test it was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done in response to changes in college admissions caused by the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
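The raw-score formula just described can be summarized in a minimal Python sketch (the function name is illustrative; the 80-question total comes from the text above):

def sat_raw_score(correct, incorrect, blank):
    # One point per correct answer, minus a quarter point per incorrect
    # answer; blank questions contribute nothing.
    assert correct + incorrect + blank == 80  # total questions on the test
    return correct - 0.25 * incorrect

print(sat_raw_score(correct=60, incorrect=12, blank=8))  # 57.0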
The questions covered a broad range of topics in general biology. More specific questions related, respectively, to ecological concepts (such as population studies and general ecology) on the E test and to molecular concepts (such as DNA structure, translation, and biochemistry) on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The validity of thought experiments, of course, is determined by this?
A. actual observation
B. hypothetical observation
C. predictive observation
D. theoretical observation
Answer:
|
|
sciq-10799
|
multiple_choice
|
The level of carbon dioxide in the atmosphere is greatly influenced by the reservoir of carbon where?
|
[
"before the oceans",
"in the oceans",
"in the earth",
"after the oceans"
] |
B
|
Relevant Documents:
Document 0:::
Carbon sequestration (or carbon storage) is the process of storing carbon in a carbon pool. Carbon sequestration is a naturally occurring process but it can also be enhanced or achieved with technology, for example within carbon capture and storage projects. There are two main types of carbon sequestration: geologic and biologic (also called biosequestration).
Carbon dioxide (CO2) is naturally captured from the atmosphere through biological, chemical, and physical processes. These changes can be accelerated through changes in land use and agricultural practices, such as converting crop land into land for non-crop fast growing plants. Artificial processes have been devised to produce similar effects, including large-scale, artificial capture and sequestration of industrially produced CO2 using subsurface saline aquifers or aging oil fields. Other technologies that work with carbon sequestration include bio-energy with carbon capture and storage, biochar, enhanced weathering, and direct air carbon capture and sequestration (DACCS).
Forests, kelp beds, and other forms of plant life absorb carbon dioxide from the air as they grow, and bind it into biomass. However, these biological stores are considered volatile carbon sinks as the long-term sequestration cannot be guaranteed. For example, natural events, such as wildfires or disease, economic pressures and changing political priorities can result in the sequestered carbon being released back into the atmosphere. Carbon dioxide that has been removed from the atmosphere can also be stored in the Earth's crust by injecting it into the subsurface, or in the form of insoluble carbonate salts (mineral sequestration). These methods are considered non-volatile because they remove carbon from the atmosphere and sequester it indefinitely and presumably for a considerable duration (thousands to millions of years).
To enhance carbon sequestration processes in oceans the following technologies have been proposed but none have achieved lar
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
The SAT Subject Test in Biology was the name of a one-hour multiple-choice test on biology given by the College Board. A student chose whether to take the test depending upon the college entrance requirements of the schools to which the student planned to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all the SAT Subject Tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between an ecological and a molecular emphasis. A set of 60 questions was taken by all Biology test takers, with a further choice of 20 questions from either the E or the M section. The test was graded on a scale between 200 and 800. The average score for the Molecular test was 630, while for the Ecological test it was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done in response to changes in college admissions caused by the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. More specific questions related, respectively, to ecological concepts (such as population studies and general ecology) on the E test and to molecular concepts (such as DNA structure, translation, and biochemistry) on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 3:::
The carbon cycle is that part of the biogeochemical cycle by which carbon is exchanged among the biosphere, pedosphere, geosphere, hydrosphere, and atmosphere of Earth. Other major biogeochemical cycles include the nitrogen cycle and the water cycle. Carbon is the main component of biological compounds as well as a major component of many minerals such as limestone. The carbon cycle comprises a sequence of events that are key to making Earth capable of sustaining life. It describes the movement of carbon as it is recycled and reused throughout the biosphere, as well as long-term processes of carbon sequestration (storage) to and release from carbon sinks.
To describe the dynamics of the carbon cycle, a distinction can be made between the fast and slow carbon cycle. The fast carbon cycle is also referred to as the biological carbon cycle. Fast carbon cycles can complete within years, moving substances from atmosphere to biosphere, then back to the atmosphere. Slow or geological cycles (also called deep carbon cycle) can take millions of years to complete, moving substances through the Earth's crust between rocks, soil, ocean and atmosphere.
Human activities have disturbed the fast carbon cycle for many centuries by modifying land use, and moreover with the recent industrial-scale mining of fossil carbon (coal, petroleum, and gas extraction, and cement manufacture) from the geosphere. Carbon dioxide in the atmosphere had increased nearly 52% over pre-industrial levels by 2020, forcing greater atmospheric and Earth surface heating by the Sun. The increased carbon dioxide has also caused a reduction in the ocean's pH value and is fundamentally altering marine chemistry. The majority of fossil carbon has been extracted over just the past half century, and rates continue to rise rapidly, contributing to human-caused climate change.
Main compartments
The carbon cycle was first described by Antoine Lavoisier and Joseph Priestley, and popularised by Humphry Davy. The g
Document 4:::
Carbon dioxide is a chemical compound with the chemical formula CO2. It is made up of molecules that each have one carbon atom covalently double bonded to two oxygen atoms. It is found in the gas state at room temperature, and as the source of available carbon in the carbon cycle, atmospheric CO2 is the primary carbon source for life on Earth. In the air, carbon dioxide is transparent to visible light but absorbs infrared radiation, acting as a greenhouse gas. Carbon dioxide is soluble in water and is found in groundwater, lakes, ice caps, and seawater. When carbon dioxide dissolves in water, it forms carbonate and mainly bicarbonate (HCO3−), which causes ocean acidification as atmospheric CO2 levels increase.
It is a trace gas in Earth's atmosphere at 421 parts per million (ppm), or about 0.04% (as of May 2022), having risen from pre-industrial levels of 280 ppm, or about 0.025%. Burning fossil fuels is the primary cause of these increased concentrations and also the primary cause of climate change.
Its concentration in Earth's pre-industrial atmosphere since late in the Precambrian was regulated by organisms and geological phenomena. Plants, algae and cyanobacteria use energy from sunlight to synthesize carbohydrates from carbon dioxide and water in a process called photosynthesis, which produces oxygen as a waste product. In turn, oxygen is consumed and CO2 is released as waste by all aerobic organisms when they metabolize organic compounds to produce energy by respiration. CO2 is released from organic materials when they decay or combust, such as in forest fires. Since plants require CO2 for photosynthesis, and humans and animals depend on plants for food, CO2 is necessary for the survival of life on Earth.
Carbon dioxide is 53% more dense than dry air, but is long lived and thoroughly mixes in the atmosphere. About half of excess CO2 emissions to the atmosphere are absorbed by land and ocean carbon sinks. These sinks can become saturated and are volatile, as decay and wildfires result i
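The stated density contrast can be checked from molar masses (a back-of-the-envelope check added here; the 28.96 g/mol value for dry air is a standard approximation, not from the excerpt):

\[ \frac{M_{\mathrm{CO_2}}}{M_{\mathrm{air}}} \approx \frac{44.01\ \mathrm{g/mol}}{28.96\ \mathrm{g/mol}} \approx 1.52, \]

i.e., CO2 is roughly 52-53% denser than dry air at the same temperature and pressure, consistent with the figure above.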
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The level of carbon dioxide in the atmosphere is greatly influenced by the reservoir of carbon where?
A. before the oceans
B. in the oceans
C. in the earth
D. after the oceans
Answer:
|
|
sciq-8963
|
multiple_choice
|
What is the term for liquid water falling from the sky?
|
[
"rain",
"wind",
"mud",
"fire"
] |
A
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
Document 2:::
Evapotranspiration (ET) comprises the combined processes that move water from the Earth's surface into the atmosphere. It covers both water evaporation (movement of water to the air directly from soil, canopies, and water bodies) and transpiration (evaporation that occurs through the stomata, or openings, in plant leaves). Evapotranspiration is an important part of the local water cycle and climate, and its measurement plays a key role in agricultural irrigation and water resource management.
Definition of evapotranspiration
Evapotranspiration is a combination of evaporation and transpiration, measured in order to better understand crop water requirements, irrigation scheduling, and watershed management. The two key components of evapotranspiration are:
Evaporation: the movement of water directly to the air from sources such as the soil and water bodies. It can be affected by factors including heat, humidity, solar radiation and wind speed.
Transpiration: the movement of water from root systems, through a plant, and its exit into the air as water vapor. This exit occurs through stomata in the plant. The rate of transpiration can be influenced by factors including plant type, soil type, weather conditions, water content, and cultivation practices.
Evapotranspiration is typically measured in millimeters of water (i.e. volume of water moved per unit area of the Earth's surface) in a set unit of time. Globally, it is estimated that on average between three-fifths and three-quarters of land precipitation is returned to the atmosphere via evapotranspiration.
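As a quick illustration of this depth-based unit (a standard conversion, supplied here as an example): a depth of 1 mm of evapotranspired water over one hectare corresponds to

\[ 0.001\ \mathrm{m} \times 10{,}000\ \mathrm{m^2} = 10\ \mathrm{m^3} \]

of water.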
Evapotranspiration does not, in general, account for other mechanisms which are involved in returning water to the atmosphere, though some of these, such as snow and ice sublimation in regions of high elevation or high latitude, can make a large contribution to atmospheric moisture even under standard conditions.
Factors that impact evapotranspiration levels
Primary factors
Because evaporation and transpiration
Document 3:::
In hydrology, throughfall is the process by which wet leaves shed excess water onto the ground surface. These drops have greater erosive power because they are heavier than raindrops. Furthermore, where there is a high canopy, falling drops may reach terminal velocity, thus maximizing the drop's erosive potential.
Rates of throughfall are higher in areas of forest where the leaves are broad-leaved. This is because the flat leaves allow water to collect. Drip-tips also facilitate throughfall. Rates of throughfall are lower in coniferous forests as conifers can only hold individual droplets of water on their needles.
Throughfall is a crucial process to consider when designing pesticides for foliar application, since it conditions how they are washed off and the fate of potential pollutants in the environment.
See also
Stemflow
Canopy interception
Forest floor interception
Tree shape
Notes
Hydrology
Forest ecology
Document 4:::
At equilibrium, the relationship between water content and equilibrium relative humidity of a material can be displayed graphically by a curve, the so-called moisture sorption isotherm.
For each humidity value, a sorption isotherm indicates the corresponding water content value at a given, constant temperature. If the composition or quality of the material changes, then its sorption behaviour also changes. Because of the complexity of the sorption process, the isotherms cannot be determined explicitly by calculation, but must be recorded experimentally for each product.
The relationship between water content and water activity (aw) is complex. An increase in aw is usually accompanied by an increase in water content, but in a non-linear fashion. This relationship between water activity and moisture content at a given temperature is called the moisture sorption isotherm. These curves are determined experimentally and constitute the fingerprint of a food system.
BET theory (Brunauer-Emmett-Teller) provides a calculation to describe the physical adsorption of gas molecules on a solid surface. Because of the complexity of the process, these calculations are only moderately successful; however, Stephen Brunauer was able to classify sorption isotherms into five generalized shapes as shown in Figure 2. He found that Type II and Type III isotherms require highly porous materials or desiccants, with first monolayer adsorption, followed by multilayer adsorption, and finally capillary condensation, explaining these materials' high moisture capacity at high relative humidity.
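For reference, the BET calculation mentioned above is commonly written in the following linearized form (standard notation supplied here, not from the excerpt: v is the adsorbed quantity, v_m the monolayer capacity, p/p_0 the relative pressure, and c the BET constant):

\[ \frac{1}{v\left[(p_0/p) - 1\right]} = \frac{c - 1}{v_m c} \cdot \frac{p}{p_0} + \frac{1}{v_m c}. \]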
Care must be used in extracting data from isotherms, as the representation for each axis may vary in its designation. Brunauer provided the vertical axis as moles of gas adsorbed divided by the moles of the dry material, and on the horizontal axis he used the ratio of partial pressure of the gas just over the sample, divided by its partial pressure at saturation. More modern isotherms showing the
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for liquid water falling from the sky?
A. rain
B. wind
C. mud
D. fire
Answer:
|
|
sciq-4504
|
multiple_choice
|
What is the vertical extent of ocean water called?
|
[
"water column",
"water row",
"ocean column",
"oceanic pillar"
] |
A
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas.
The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014.
The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools.
See also
Marine Science
Ministry of Fisheries and Aquatic Resources Development
Document 2:::
Aquatic science is the study of the various bodies of water that make up our planet, including oceanic and freshwater environments. Aquatic scientists study the movement of water, the chemistry of water, aquatic organisms, aquatic ecosystems, the movement of materials in and out of aquatic ecosystems, and the use of water by humans, among other things. Aquatic scientists examine current processes as well as historic processes, and the water bodies that they study can range from tiny areas measured in millimeters to full oceans. Moreover, aquatic scientists work in interdisciplinary groups. For example, a physical oceanographer might work with a biological oceanographer to understand how physical processes, such as tropical cyclones or rip currents, affect organisms in the Atlantic Ocean. Chemists and biologists, on the other hand, might work together to see how the chemical makeup of a certain body of water affects the plants and animals that reside there. Aquatic scientists can work to tackle global problems such as global oceanic change and local problems, such as trying to understand why a drinking water supply in a certain area is polluted.
There are two main fields of study that fall within the field of aquatic science. These fields of study include oceanography and limnology.
Oceanography
Oceanography refers to the study of the physical, chemical, and biological characteristics of oceanic environments. Oceanographers study the history, current condition, and future of the planet's oceans. They also study marine life and ecosystems, ocean circulation, plate tectonics, the geology of the seafloor, and the chemical and physical properties of the ocean.
Oceanography is interdisciplinary. For example, there are biological oceanographers and marine biologists. These scientists specialize in marine organisms. They study how these organisms develop, their relationship with one another, and how they interact and adapt to their environment. Biological oceanographers
Document 3:::
Branched flow refers to a phenomenon in wave dynamics that produces a tree-like pattern involving successive, mostly forward scattering events by smooth obstacles deflecting traveling rays or waves. Sudden and significant momentum or wavevector changes are absent, but accumulated small changes can lead to large momentum changes. The path of a single ray is less important than the environs around a ray, which rotate, compress, and stretch around in an area preserving way. Even more revealing are groups, or manifolds, of neighboring rays extending over significant zones. Starting rays out from a point but varying their direction over a range, one to the next, or from different points along a line all with the same initial directions, are examples of a manifold. Waves have analogous launching conditions, such as a point source spraying in many directions, or an extended plane wave heading in one direction. The ray bending or refraction leads to characteristic structure in phase space and nonuniform distributions in coordinate space that look somewhat universal and resemble branches in trees or stream beds. The branches take on non-obvious paths through the refracting landscape, paths that are indirect and nonlocal results of terrain already traversed. For a given refracting landscape, the branches will look completely different depending on the initial manifold.
Examples
Two-dimensional electron gas
Branched flow was first identified in experiments with a two-dimensional electron gas. Electrons flowing from a quantum point contact were scanned using a scanning probe microscope. Instead of usual diffraction patterns, the electrons flowed forming branching strands that persisted for several correlation lengths of the background potential.
Ocean dynamics
Focusing of random waves in the ocean can also lead to branched flow. The fluctuation in the depth of the ocean floor can be described as a random potential. A tsunami wave propagating in such a medium will form branches which
Document 4:::
In physical oceanography, the significant wave height (SWH, HTSGW or Hs)
is defined traditionally as the mean wave height (trough to crest) of the highest third of the waves (H1/3). It is usually defined as four times the standard deviation of the surface elevation – or equivalently as four times the square root of the zeroth-order moment (area) of the wave spectrum. The symbol Hm0 is usually used for that latter definition. The significant wave height (Hs) may thus refer to Hm0 or H1/3; the difference in magnitude between the two definitions is only a few percent.
SWH is used to characterize sea state, including winds and swell.
Origin and definition
The original definition resulted from work by the oceanographer Walter Munk during World War II. The significant wave height was intended to mathematically express the height estimated by a "trained observer". It is commonly used as a measure of the height of ocean waves.
Time domain definition
Significant wave height H1/3, or Hs or Hsig, as determined in the time domain, directly from the time series of the surface elevation, is defined as the average height of that one-third of the N measured waves having the greatest heights:

\[ H_{1/3} = \frac{1}{N/3} \sum_{m=1}^{N/3} H_m, \]

where Hm represents the individual wave heights, sorted into descending order of height as m increases from 1 to N. Only the highest one-third is used, since this corresponds best with visual observations of experienced mariners, whose vision apparently focuses on the higher waves.
Frequency domain definition
Significant wave height Hm0, defined in the frequency domain, is used both for measured and forecasted wave variance spectra. Most easily, it is defined in terms of the variance m0 or standard deviation ση of the surface elevation:

\[ H_{m_0} = 4 \sqrt{m_0} = 4 \sigma_\eta, \]

where m0, the zeroth moment of the variance spectrum, is obtained by integration of the variance spectrum. In case of a measurement, the standard deviation ση is the easiest and most accurate statistic to be used.
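Both definitions are straightforward to compute from data; here is a minimal Python sketch (variable and function names are illustrative, and the sample heights are made up):

import numpy as np

def h_one_third(wave_heights):
    # Time-domain H1/3: mean of the highest one-third of measured wave heights.
    h = np.sort(np.asarray(wave_heights, dtype=float))[::-1]  # descending order
    n_third = max(1, len(h) // 3)
    return h[:n_third].mean()

def h_m0(surface_elevation):
    # Frequency-domain-style Hm0: four times the standard deviation of the
    # surface elevation (equivalently, 4 * sqrt(m0)).
    eta = np.asarray(surface_elevation, dtype=float)
    return 4.0 * np.std(eta - eta.mean())

heights = [1.2, 0.8, 2.1, 1.5, 0.9, 1.7]  # individual wave heights in metres
print(h_one_third(heights))  # mean of the two largest waves: 1.9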
Another wave-height statistic in common u
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the vertical extent of ocean water called?
A. water column
B. water row
C. ocean column
D. oceanic pillar
Answer:
|
|
sciq-7089
|
multiple_choice
|
What are surface currents generally caused by?
|
[
"major steam belts",
"minor wind belts",
"major wind belts",
"major humidity belts"
] |
C
|
Relevant Documents:
Document 0:::
Currentology is a science that studies the internal movements of water masses.
Description
In the study of fluid mechanics, researchers attempt to give a correct explanation of marine currents. Currents are caused by external driving forces such as wind, gravitational effects, Coriolis forces, and physical differences between various water masses, the main parameter being the difference in density, which varies as a function of temperature and salinity.
The study of currents, combined with other factors such as tides and waves is relevant for understanding marine hydrodynamics and linked processes such as sediment transport and climate balance.
The measurement of maritime currents
Measurements of maritime currents can be made using several techniques:
current meter
diversion buoys
See also
Document 1:::
In fluid dynamics, wave–current interaction is the interaction between surface gravity waves and a mean flow. The interaction implies an exchange of energy, so after the start of the interaction both the waves and the mean flow are affected.
For depth-integrated and phase-averaged flows, the quantity of primary importance for the dynamics of the interaction is the wave radiation stress tensor.
Wave–current interaction is also one of the possible mechanisms for the occurrence of rogue waves, such as in the Agulhas Current. When a wave group encounters an opposing current, the waves in the group may pile up on top of each other, building into a rogue wave.
Classification
The literature identifies five major sub-classes within wave–current interaction:
interaction of waves with a large-scale current field, with slow – as compared to the wavelength – two-dimensional horizontal variations of the current fields;
interaction of waves with small-scale current changes (in contrast with the case above), where the horizontal current varies suddenly, over a length scale comparable with the wavelength;
the combined wave–current motion for currents varying (strongly) with depth below the free surface;
interaction of waves with turbulence; and
interaction of ship waves and currents, such as in the ship's wake.
See also
generalized Lagrangian mean
rip current
Footnotes
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
A subsurface ocean current is an oceanic current that runs beneath surface currents. Examples include the Equatorial Undercurrents of the Pacific, Atlantic, and Indian Oceans, the California Undercurrent, and the Agulhas Undercurrent, the deep thermohaline circulation in the Atlantic, and bottom gravity currents near Antarctica. The forcing mechanisms vary for these different types of subsurface currents.
Density current
The most common of these is the density current, epitomized by the Thermohaline current. The density current works on a basic principle: the denser water sinks to the bottom, separating from the less dense water, and causing an opposite reaction from it. There are numerous factors controlling density.
Salinity
One is the salinity of water, a prime example of this being the Mediterranean/Atlantic exchange. The saltier waters of the Mediterranean sink to the bottom and flow along there, until they reach the ledge between the two bodies of water. At this point, they rush over the ledge into the Atlantic, pushing the less saline surface water into the Mediterranean.
Temperature
Another factor of density is temperature. Thermohaline (literally meaning heat-salty) currents are very influenced by heat. Cold water from glaciers, icebergs, etc. descends to join the ultra-deep, cold section of the worldwide Thermohaline current. After spending an exceptionally long time in the depths, it eventually heats up, rising to join the higher Thermohaline current section. Because of the temperature and expansiveness of the Thermohaline current, it is substantially slower, taking nearly 1000 years to run its worldwide circuit.
Turbidity current
One factor of density is so unique that it warrants its own current type. This is the turbidity current. Turbidity current is caused when the density of water is increased by sediment. This current is the underwater equivalent of a landslide. When sediment increases the density of the water, it falls to the bottom, and then
Document 4:::
Zonal and meridional flow are directions and regions of fluid flow on a globe.
Zonal flow follows a pattern along latitudinal lines, latitudinal circles or in the west–east direction.
Meridional flow follows a pattern from north to south, or from south to north, along the Earth's longitude lines, longitudinal circles (meridian) or in the north–south direction.
These terms are often used in the atmospheric and earth sciences to describe global phenomena, such as "meridional wind", or "zonal average temperature".
In the context of physics, zonal flow connotes a tendency of flux to conform to a pattern parallel to the equator of a sphere. In meteorological terms regarding atmospheric circulation, zonal flow brings a temperature contrast along the Earth's longitude. Extratropical cyclones in zonal flows tend to be weaker, moving faster and producing relatively little impact on local weather.
Extratropical cyclones in meridional flows tend to be stronger and move slower. This pattern is responsible for most instances of extreme weather, as not only are storms stronger in this type of flow regime, but temperatures can reach extremes as well, producing heat waves and cold waves depending on the equator-ward or poleward direction of the flow.
For vector fields (such as wind velocity), the zonal component (or x-coordinate) is denoted as u, while the meridional component (or y-coordinate) is denoted as v.
In plasma physics, zonal flow means poloidal flow, which is the opposite of its meaning in planetary atmospheres and weather/climate studies.
See also
Zonal and poloidal
Zonal flow (plasma)
Meridione
Notes
Orientation (geometry)
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are surface currents generally caused by?
A. major steam belts
B. minor wind belts
C. major wind belts
D. major humidity belts
Answer:
|
|
sciq-5532
|
multiple_choice
|
What type of seeds come from plants that were traditionally grown in human populations, as opposed to the seeds used for large-scale agricultural production?
|
[
"modified seeds",
"old-fashioned seeds",
"original seeds",
"heirloom seeds"
] |
D
|
Relevant Documents:
Document 0:::
Plant breeding started with sedentary agriculture, particularly the domestication of the first agricultural plants, a practice which is estimated to date back 9,000 to 11,000 years. Initially, early human farmers selected food plants with particular desirable characteristics and used these as a seed source for subsequent generations, resulting in an accumulation of characteristics over time. In time however, experiments began with deliberate hybridization, the science and understanding of which was greatly enhanced by the work of Gregor Mendel. Mendel's work ultimately led to the new science of genetics. Modern plant breeding is applied genetics, but its scientific basis is broader, covering molecular biology, cytology, systematics, physiology, pathology, entomology, chemistry, and statistics (biometrics). It has also developed its own technology. Plant breeding efforts are divided into a number of different historical landmarks.
Early plant breeding
Domestication
Domestication of plants is an artificial selection process conducted by humans to produce plants that have more desirable traits than wild plants, and which renders them dependent on artificial usually enhanced environments for their continued existence. The practice is estimated to date back 9,000-11,000 years. Many crops in present-day cultivation are the result of domestication in ancient times, about 5,000 years ago in the Old World and 3,000 years ago in the New World. In the Neolithic period, domestication took a minimum of 1,000 years and a maximum of 7,000 years. Today, all principal food crops come from domesticated varieties. Almost all the domesticated plants used today for food and agriculture were domesticated in the centers of origin. In these centers there is still a great diversity of closely related wild plants, so-called crop wild relatives, that can also be used for improving modern cultivars by plant breeding.
A plant whose origin or selection is due primarily to intentional human a
Document 1:::
A seed bank (also seed banks or seeds bank) stores seeds to preserve genetic diversity; hence it is a type of gene bank. There are many reasons to store seeds. One is to preserve the genes that plant breeders need to increase yield, disease resistance, drought tolerance, nutritional quality, taste, etc. of crops. Another is to forestall loss of genetic diversity in rare or imperiled plant species in an effort to conserve biodiversity ex situ. Many plants that were used centuries ago by humans are used less frequently now; seed banks offer a way to preserve that historical and cultural value. Collections of seeds stored at constant low temperature and low moisture are guarded against loss of genetic resources that are otherwise maintained in situ or in field collections. These alternative "living" collections can be damaged by natural disasters, outbreaks of disease, or war. Seed banks are considered seed libraries, containing valuable information about evolved strategies to combat plant stress, and can be used to create genetically modified versions of existing seeds. The work of seed banks often spans decades and even centuries. Most seed banks are publicly funded and seeds are usually available for research that benefits the public.
Storage conditions and regeneration
Seeds are living plants and keeping them viable over the long term requires adjusting storage moisture and temperature appropriately. As they mature on the mother plant, many seeds attain an innate ability to survive drying. Survival of these so-called 'orthodox' seeds can be extended by dry, low-temperature storage. The level of dryness and coldness depends mostly on the longevity that is required and the investment in infrastructure that is affordable. Practical guidelines developed by the US scientist James Harrington in the 1950s and 1960s are known as 'Thumb Rules'. The 'Hundreds Rule' guides that the sum of relative humidity and temperature (in Fahrenheit) should be less than 100 for the sample to surv
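The 'Hundreds Rule' just described is simple enough to express directly; a minimal Python sketch (the function name is illustrative):

def hundreds_rule_ok(relative_humidity_pct, temperature_f):
    # Rule of thumb: relative humidity (%) plus temperature (deg F)
    # should sum to less than 100 for acceptable orthodox-seed storage.
    return relative_humidity_pct + temperature_f < 100

print(hundreds_rule_ok(30, 50))  # True: 30 + 50 = 80, below 100
print(hundreds_rule_ok(60, 70))  # False: 130 exceeds the rule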
Document 2:::
Plant breeding is the science of changing the traits of plants in order to produce desired characteristics. It has been used to improve the quality of nutrition in products for humans and animals. The goals of plant breeding are to produce crop varieties that boast unique and superior traits for a variety of applications. The most frequently addressed agricultural traits are those related to biotic and abiotic stress tolerance, grain or biomass yield, end-use quality characteristics such as taste or the concentrations of specific biological molecules (proteins, sugars, lipids, vitamins, fibers) and ease of processing (harvesting, milling, baking, malting, blending, etc.).
Plant breeding can be performed through many different techniques ranging from simply selecting plants with desirable characteristics for propagation, to methods that make use of knowledge of genetics and chromosomes, to more complex molecular techniques. Genes in a plant are what determine what type of qualitative or quantitative traits it will have. Plant breeders strive to create a specific outcome of plants and potentially new plant varieties, and in the course of doing so, narrow down the genetic diversity of that variety to a specific few biotypes.
It is practiced worldwide by individuals such as gardeners and farmers, and by professional plant breeders employed by organizations such as government institutions, universities, crop-specific industry associations or research centers. International development agencies believe that breeding new crops is important for ensuring food security by developing new varieties that are higher yielding, disease resistant, drought tolerant or regionally adapted to different environments and growing conditions.
A recent study shows that without plant breeding, Europe would have produced 20% fewer arable crops over the last 20 years, consuming additional land and emitting additional carbon. Wheat species created for Morocco are currently being crossed with
Document 3:::
Seed companies produce and sell seeds for flowers, fruits and vegetables to commercial growers and amateur gardeners. The production of seed is a multibillion-dollar business, which uses growing facilities and growing locations worldwide. While most of the seed is produced by large specialist growers, large amounts are also produced by small growers that produce only one to a few crop types. The larger companies supply seed both to commercial resellers and wholesalers. The resellers and wholesalers sell to vegetable and fruit growers, and to companies who package seed into packets and sell them on to the amateur gardener.
Most seed companies or resellers that sell to retail customers produce a catalog, generally published during early winter, for seed to be sown the following spring. These catalogs are eagerly awaited by the amateur gardener, as during winter months there is little that can be done in the garden, so this time can be spent planning the following year’s gardening. The largest collection of nursery and seed trade catalogs in the U.S. is held at the National Agricultural Library where the earliest catalogs date from the late 18th century, with most published from the 1890s to the present.
Seed companies produce a huge range of seeds from highly developed F1 hybrids to open pollinated wild species. They have extensive research facilities to produce plants with genetic materials that result in improved uniformity and appeal. These qualities might include disease resistance, higher yields, dwarf habit and vibrant or new colors. These improvements are often closely guarded to protect them from being utilized by other producers, thus plant cultivars are often sold under the company's own name and protected by international laws from being grown for seed production by others. Along with the growth in the allotment movement, and the increasing popularity of gardening, there have emerged many small independent seed companies. Many of these are active in seed co
Document 4:::
The soil seed bank is the natural storage of seeds, often dormant, within the soil of most ecosystems. The study of soil seed banks started in 1859 when Charles Darwin observed the emergence of seedlings using soil samples from the bottom of a lake. The first scientific paper on the subject was published in 1882 and reported on the occurrence of seeds at different soil depths. Weed seed banks have been studied intensely in agricultural science because of their important economic impacts; other fields interested in soil seed banks include forest regeneration and restoration ecology.
Henry David Thoreau wrote that the contemporary popular belief explaining the succession of a logged forest, specifically to trees of a dissimilar species to the trees cut down, was that seeds either spontaneously generated in the soil, or sprouted after lying dormant for centuries. However, he dismissed this idea, noting that heavy nuts unsuited for distribution by wind were distributed instead by animals.
Background
Many taxa have been classified according to the longevity of their seeds in the soil seed bank. Seeds of transient species remain viable in the soil seed bank only to the next opportunity to germinate, while seeds of persistent species can survive longer than the next opportunity—often much longer than one year. Species with seeds that remain viable in the soil longer than five years form the long-term persistent seed bank, while species whose seeds generally germinate or die within one to five years are called short-term persistent. A typical long-term persistent species is Chenopodium album (Lambsquarters); its seeds commonly remain viable in the soil for up to 40 years and in rare situations perhaps as long as 1,600 years. A species forming no soil seed bank at all (except the dry season between ripening and the first autumnal rains) is Agrostemma githago (Corncockle), which was formerly a widespread cereal weed.
Seed longevity
Longevity of seeds is very var
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of seeds come from plants that were traditionally grown in human populations, as opposed to the seeds used for large-scale agricultural production?
A. modified seeds
B. old-fashioned seeds
C. original seeds
D. heirloom seeds
Answer:
|
|
sciq-4262
|
multiple_choice
|
What property means that something can return to its original shape after being stretched or compressed?
|
[
"viscosity",
"homeostasis",
"friction",
"elasticity"
] |
D
|
Relevant Documents:
Document 0:::
A T1 process (or topological rearrangement process of the first kind) is one of the main processes by which cellular materials such as foams or biological tissues change shapes. It involves four discrete objects such as bubbles, drops, cells, etc. The four objects are initially arranged in a plane in the following way. Objects A and B are in contact and objects C and D are on either side of the AB group and touching both A and B. The T1 process consists in breaking the contact between A and B and establishing the contact between C and D.
When a significant number of rearrangement events such as T1 processes with similar orientations occur inside a foam or a tissue, the material correspondingly undergoes a deformation: it elongates in the direction in which neighbours depart (here, AB) while it contracts in the direction in which new neighbour pairs form (here, CD).
As a result of the existence of the T1 and similar processes, materials made of these objects have a number of similar rheological properties. Among these, plasticity allows them to be deformed irreversibly. For such materials, irreversible deformations arise from the ability of their constitutive objects to rearrange. Thus, the T1 process is the major mesoscopic ingredient of plasticity for these materials.
Document 1:::
In physics and continuum mechanics, deformation is the change in the shape or size of an object. It has dimension of length with SI unit of metre (m). It is quantified as the residual displacement of particles in a non-rigid body, from an initial configuration to a final configuration, excluding the body's average translation and rotation (its rigid transformation). A configuration is a set containing the positions of all particles of the body.
A deformation can occur because of external loads, intrinsic activity (e.g. muscle contraction), body forces (such as gravity or electromagnetic forces), or changes in temperature, moisture content, or chemical reactions, etc.
In a continuous body, a deformation field results from a stress field due to applied forces or because of some changes in the conditions of the body. The relation between stress and strain (relative deformation) is expressed by constitutive equations, e.g., Hooke's law for linear elastic materials.
Deformations which cease to exist after the stress field is removed are termed as elastic deformation. In this case, the continuum completely recovers its original configuration. On the other hand, irreversible deformations remain. They exist even after stresses have been removed. One type of irreversible deformation is plastic deformation, which occurs in material bodies after stresses have attained a certain threshold value known as the elastic limit or yield stress, and are the result of slip, or dislocation mechanisms at the atomic level. Another type of irreversible deformation is viscous deformation, which is the irreversible part of viscoelastic deformation.
In the case of elastic deformations, the response function linking strain to the deforming stress is the compliance tensor of the material.
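As a small worked illustration of the elastic case, Hooke's law in its simplest uniaxial form relates stress and strain through the Young's modulus. The sketch below is a minimal example under an assumed steel-like modulus, not a general constitutive-model implementation.

```python
# Minimal sketch of uniaxial Hooke's law: stress = E * strain.
# Below the elastic limit the strain is fully recovered on unloading.
# The modulus value (roughly that of steel) is illustrative only.
E = 200e9        # Young's modulus in pascals (Pa)
stress = 250e6   # applied uniaxial stress in Pa

strain = stress / E  # dimensionless relative deformation
print(f"elastic strain = {strain:.4%}")  # ~0.1250%, recovered when unloaded
```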
Definition and formulation
Deformation is the change in the metric properties of a continuous body, meaning that a curve drawn in the initial body placement changes its length when displaced to a curve in the final
Document 2:::
Continuum mechanics is a branch of mechanics that deals with the deformation of and transmission of forces through materials modeled as a continuous medium (also called a continuum) rather than as discrete particles. The French mathematician Augustin-Louis Cauchy was the first to formulate such models in the 19th century.
Continuum mechanics deals with deformable bodies, as opposed to rigid bodies.
A continuum model assumes that the substance of the object completely fills the space it occupies. This ignores the fact that matter is made of atoms, however provides a sufficiently accurate description of matter on length scales much greater than that of inter-atomic distances. The concept of a continuous medium allows for intuitive analysis of bulk matter by using differential equations that describe the behavior of such matter according to physical laws, such as mass conservation, momentum conservation, and energy conservation. Information about the specific material is expressed in constitutive relationships.
Continuum mechanics treats the physical properties of solids and fluids independently of any particular coordinate system in which they are observed. These properties are represented by tensors, which are mathematical objects with the salient property of being independent of coordinate systems. This permits definition of physical properties at any point in the continuum, according to mathematically convenient continuous functions. The theories of elasticity, plasticity and fluid mechanics are based on the concepts of continuum mechanics.
Concept of a continuum
The concept of a continuum underlies the mathematical framework for studying large-scale forces and deformations in materials. Although materials are composed of discrete atoms and molecules, separated by empty space or microscopic cracks and crystallographic defects, physical phenomena can often be modeled by considering a substance distributed throughout some region of space. A continuum is a body th
Document 3:::
The Gough–Joule effect (a.k.a. Gow–Joule effect) is originally the tendency of elastomers to contract when heated if they are under tension. Elastomers that are not under tension do not see this effect. The term is also used more generally to refer to the dependence of the temperature of any solid on the mechanical deformation. This effect can be observed in nylon strings of classical guitars, whereby the string contracts as a result of heating. The effect is due to the decrease of entropy when long chain molecules are stretched.
If an elastic band is first stretched and then subjected to heating, it will shrink rather than expand. This effect was first observed by John Gough in 1802, and was investigated further by James Joule in the 1850s, when it then became known as the Gough–Joule effect.
Examples in Literature:
Popular Science magazine, January 1972: "A stretched piece of rubber contracts when heated. In doing so, it exerts a measurable increase in its pull. This surprising property of rubber was first observed by James Prescott Joule about a hundred years ago and is known as the Joule effect."
Rubber as an Engineering Material (book), by Khairi Nagdi: "The Joule effect is a phenomenon of practical importance that must be considered by machine designers. The simplest way of demonstrating this effect is to suspend a weight on a rubber band sufficient to elongate it at least 50%. When the stretched rubber band is warmed up by an infrared lamp, it does not elongate because of thermal expansion, as may be expected, but it retracts and lifts the weight."
The effect is important in O-ring seal design, where the seals can be mounted in a peripherally compressed state in hot applications to prolong life.
The effect is also relevant to rotary seals which can bind if the seal shrinks due to overheating.
Document 4:::
In materials science, a Bingham plastic is a viscoplastic material that behaves as a rigid body at low stresses but flows as a viscous fluid at high stress. It is named after Eugene C. Bingham who proposed its mathematical form.
It is used as a common mathematical model of mud flow in drilling engineering, and in the handling of slurries. A common example is toothpaste, which will not be extruded until a certain pressure is applied to the tube. It is then pushed out as a relatively coherent plug.
Explanation
Figure 1 shows a graph of the behaviour of an ordinary viscous (or Newtonian) fluid in red, for example in a pipe. If the pressure at one end of a pipe is increased, this produces a stress on the fluid tending to make it move (called the shear stress), and the volumetric flow rate increases proportionally. However, a Bingham plastic fluid (in blue) will not flow until a certain stress value, the yield stress, is reached. Beyond this point the flow rate increases steadily with increasing shear stress. This is roughly the way in which Bingham presented his observation, in an experimental study of paints. These properties allow a Bingham plastic to have a textured surface with peaks and ridges instead of a featureless surface like a Newtonian fluid.
Figure 2 shows the way in which it is normally presented currently. The graph shows shear stress on the vertical axis and shear rate on the horizontal one. (Volumetric flow rate depends on the size of the pipe, shear rate is a measure of how the velocity changes with distance. It is proportional to flow rate, but does not depend on pipe size.) As before, the Newtonian fluid flows and gives a shear rate for any finite value of shear stress. However, the Bingham plastic again does not exhibit any shear rate (no flow and thus no velocity) until a certain stress is achieved. For the Newtonian fluid the slope of this line is the viscosity, which is the only parameter needed to describe its flow
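The behaviour in the two figures can be summarised as a piecewise constitutive law: zero shear rate below the yield stress, then a linear rise governed by the plastic viscosity. The Python sketch below illustrates this; the yield stress and viscosity values are illustrative assumptions.

```python
# Sketch of the Bingham-plastic law described above: no flow until the
# yield stress is exceeded, then shear rate grows with the excess stress.
def bingham_shear_rate(shear_stress, yield_stress=10.0, plastic_viscosity=0.5):
    """Return the shear rate (1/s) for a given shear stress (Pa)."""
    if shear_stress <= yield_stress:
        return 0.0  # behaves as a rigid body below the yield stress
    return (shear_stress - yield_stress) / plastic_viscosity

for tau in (5.0, 10.0, 15.0, 20.0):
    print(f"{tau:5.1f} Pa -> {bingham_shear_rate(tau):5.1f} 1/s")
```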
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What property means that something can return to its original shape after being stretched or compressed?
A. viscosity
B. homeostasis
C. friction
D. elasticity
Answer:
|
|
sciq-8458
|
multiple_choice
|
What kind of family is Haumea part of?
|
[
"a collisional family",
"orbital family",
"a moclobemide family",
"eoan family"
] |
A
|
Relevant Documents:
Document 0:::
A haplotype is a group of alleles in an organism that are inherited together from a single parent, and a haplogroup (haploid, from the Greek ἁπλοῦς, haploûs, "onefold, simple", and group) is a group of similar haplotypes that share a common ancestor with a single-nucleotide polymorphism mutation. More specifically, a haplotype is a combination of alleles at different chromosomal regions that are closely linked and that tend to be inherited together. As a haplogroup consists of similar haplotypes, it is usually possible to predict a haplogroup from haplotypes. Haplogroups pertain to a single line of descent. As such, membership of a haplogroup, by any individual, relies on a relatively small proportion of the genetic material possessed by that individual.
Each haplogroup originates from, and remains part of, a preceding single haplogroup (or paragroup). As such, any related group of haplogroups may be precisely modelled as a nested hierarchy, in which each set (haplogroup) is also a subset of a single broader set (as opposed, that is, to biparental models, such as human family trees).
Haplogroups are normally identified by an initial letter of the alphabet, with refinements consisting of additional number and letter combinations. The alphabetical nomenclature was published in 2002 by the Y Chromosome Consortium.
In human genetics, the haplogroups most commonly studied are Y-chromosome (Y-DNA) haplogroups and mitochondrial DNA (mtDNA) haplogroups, each of which can be used to define genetic populations. Y-DNA is passed solely along the patrilineal line, from father to son, while mtDNA is passed down the matrilineal line, from mother to offspring of both sexes. Neither recombines, and thus Y-DNA and mtDNA change only by chance mutation at each generation with no intermixture between parents' genetic material.
Haplogroup formation
Mitochondria are small organelles that lie in the cytoplasm of eukaryotic cells, such as those of humans. Their primary function is to
Document 1:::
The outer Solar System planetoid Haumea has two known moons, Hiʻiaka and Namaka, named after Hawaiian goddesses. These small moons were discovered in 2005, from observations of Haumea made at the large telescopes of the W. M. Keck Observatory in Hawaii.
Haumea's moons are unusual in a number of ways. They are thought to be part of its extended collisional family, which formed billions of years ago from icy debris after a large impact disrupted Haumea's ice mantle. Hiʻiaka, the larger, outermost moon, has large amounts of pure water ice on its surface, which is rare among Kuiper belt objects. Namaka, about one tenth the mass, has an orbit with surprising dynamics: it is unusually eccentric and appears to be greatly influenced by the larger satellite.
History
Two small satellites were discovered around Haumea (which was at that time still designated 2003 EL61) through observations using the W.M. Keck Observatory by a Caltech team in 2005.
The outer and larger of the two satellites was discovered 26 January 2005, and formally designated S/2005 (2003 EL61) 1, though nicknamed "Rudolph" by the Caltech team. The smaller, inner satellite of Haumea was discovered on 30 June 2005, formally termed S/2005 (2003 EL61) 2, and nicknamed "Blitzen". On 7 September 2006, both satellites were numbered and admitted into the official minor planet catalogue as (136108) 2003 EL61 I and II, respectively.
The permanent names of these moons were announced, together with that of 2003 EL61, by the International Astronomical Union on 17 September 2008: (136108) Haumea I Hiʻiaka and (136108) Haumea II Namaka. Each moon was named after a daughter of Haumea, the Hawaiian goddess of fertility and childbirth. Hiʻiaka is the goddess of dance and patroness of the Big Island of Hawaii, where the Mauna Kea Observatory is located. Nāmaka is the goddess of water and the sea; she cooled her sister Pele's lava as it flowed into the sea, turning it into new land.
In her legend, Haumea's many children c
Document 2:::
In human mitochondrial genetics, the Haplogroup CZ is a human mitochondrial DNA (mtDNA) haplogroup.
Origin
Haplogroup CZ is a descendant of haplogroup M8 and is a parent to the haplogroups C and Z. The C and Z subclades share a common ancestor dated to approximately 36,500 years ago.
Distribution
Today, CZ is found in eastern Asian, Central Asian, Siberian, indigenous American, and European populations, and is most common in Siberian populations. It is recognized by a genetic marker at 249d.
Subclades
Tree
This phylogenetic tree of haplogroup CZ subclades is based on the paper by Mannis van Oven and Manfred Kayser, Updated comprehensive phylogenetic tree of global human mitochondrial DNA variation, and subsequent published research.
M
 M8
  CZ
   C
   Z
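Because each haplogroup nests inside a single parent, a tree like the one above can be represented directly as a nested mapping. The short Python sketch below is an illustrative data-structure view of the hierarchy, not genetics software.

```python
# The CZ subclade tree as a nested mapping: each haplogroup maps to its
# descendant haplogroups, mirroring the single line of descent.
tree = {"M": {"M8": {"CZ": {"C": {}, "Z": {}}}}}

def lineage(tree, target, path=()):
    """Return the line of descent leading to a given haplogroup."""
    for name, children in tree.items():
        if name == target:
            return path + (name,)
        found = lineage(children, target, path + (name,))
        if found:
            return found
    return None

print(" > ".join(lineage(tree, "Z")))  # M > M8 > CZ > Z
```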
See also
Genealogical DNA test
Genetic genealogy
Human mitochondrial genetics
Population genetics
Human mitochondrial DNA haplogroups
Document 3:::
The Seven Daughters of Eve is a 2001 semi-fictional book by Bryan Sykes that presents the science of human origin in Africa and their dispersion to a general audience. Sykes explains the principles of genetics and human evolution, the particularities of mitochondrial DNA, and analyses of ancient DNA to genetically link modern humans to prehistoric ancestors.
Following the developments of mitochondrial genetics, Sykes traces back human migrations, discusses the "out of Africa theory" and casts serious doubt upon Thor Heyerdahl's theory of the Peruvian origin of the Polynesians, which opposed the theory of their origin in Indonesia. He also describes the use of mitochondrial DNA in identifying the remains of Emperor Nicholas II of Russia, and in assessing the genetic makeup of modern Europe.
The title of the book comes from one of the principal achievements of mitochondrial genetics, which is the classification of all modern Europeans into seven groups, the mitochondrial haplogroups. Each haplogroup is defined by a set of characteristic mutations on the mitochondrial genome, and can be traced along a person's maternal line to a specific prehistoric woman. Sykes refers to these women as "clan mothers", though these women did not all live concurrently. All these women in turn shared a common maternal ancestor, the Mitochondrial Eve.
The last third of the book is spent on a series of fictional narratives, written by Sykes, describing his creative guesses about the lives of each of these seven "clan mothers". This latter half generally met with mixed reviews in comparison with the first part.
Mitochondrial haplogroups in The Seven Daughters of Eve
The seven "clan mothers" mentioned by Sykes each correspond to one (or more) human mitochondrial haplogroups.
Ursula: corresponds to Haplogroup U (specifically U5, and excluding its subgroup K)
Xenia: corresponds to Haplogroup X
Helena: corresponds to Haplogroup H
Velda: corresponds to Haplogroup V, found with part
Document 4:::
Haplogroup C-B477, also known as Haplogroup C1b2, is a Y-chromosome haplogroup. It is one of two primary branches of Haplogroup C1b, one of the descendants of Haplogroup C1.
It is found at high frequency among Indigenous Australians, Papuan people, Melanesian people, and Polynesian people.
Subgroups
C1b2 (C-B477)
 C1b2a (C-M38) – Papuan people and other Oceanians
 C1b2b (C-M347) – Indigenous Australians
Frequency
Among Indigenous Australians, the C-M347 subclade is found at a frequency of 60.2%–68.7%.
Migration history
Haplogroup C-B477 took the southern route after the Out-of-Africa migration, passing through the Indian subcontinent to the Sahul Shelf. C-M38 arose around 49,600 years before present in the region of New Guinea.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What kind of family is Haumea part of?
A. a collisional family
B. orbital family
C. a moclobemide family
D. eoan family
Answer:
|
|
sciq-5191
|
multiple_choice
|
Making a specific prediction based on a general principle is known as what type of reasoning?
|
[
"deductive reasoning",
"validating reasoning",
"logical reasoning",
"common sense reasoning"
] |
A
|
Relevant Documents:
Document 0:::
Analytical skill is the ability to deconstruct information into smaller categories in order to draw conclusions. Analytical skill consists of categories that include logical reasoning, critical thinking, communication, research, data analysis and creativity. Analytical skill is taught in contemporary education with the intention of fostering the appropriate practices for future professions. Settings that rely on analytical skill include educational institutions, public institutions, community organisations and industry.
Richard J. Heuer Jr. explained that analytical thinking is a skill that can be taught, learned, and improved with practice. In the article by Freed, the need for programs within the educational system to help students develop these skills is demonstrated. Workers "will need more than elementary basic skills to maintain the standard of living of their parents. They will have to think for a living, analyse problems and solutions, and work cooperatively in teams".
Logical Reasoning
Logical reasoning is a process consisting of inferences, where premises and hypotheses are formulated to arrive at a probable conclusion. It is a broad term covering three sub-classifications: deductive reasoning, inductive reasoning, and abductive reasoning.
Deductive Reasoning
‘Deductive reasoning is a basic form of valid reasoning, commencing with a general statement or hypothesis, then examines the possibilities to reach a specific, logical conclusion’. This scientific method utilises deductions to test hypotheses and theories and to predict whether observations are correct.
A logical deductive reasoning sequence can be executed by establishing: an assumption, followed by another assumption and finally, conducting an inference. For example, ‘All men are mortal. Harold is a man. Therefore, Harold is mortal.’
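The Harold syllogism can even be checked mechanically: written as a formal deduction, the conclusion follows from the two premises by one application of the universal premise. The Lean 4 sketch below is illustrative; the names Person, Man, Mortal, and harold are our own choices.

```lean
-- The Harold syllogism as a machine-checked deduction (illustrative sketch).
variable (Person : Type) (Man Mortal : Person → Prop) (harold : Person)

example (allMenMortal : ∀ p, Man p → Mortal p) (haroldIsMan : Man harold) :
    Mortal harold :=
  allMenMortal harold haroldIsMan
```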
For deductive reasoning to be upheld, the hypothesis must be correct, therefore, reinforcing the notion that the conclusion is logical and true. It is possible for deductive reasoning conclusions to be inaccurate or incorrect entirely, bu
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
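One defining property of a knowledge space is that the family of feasible states is closed under union: if two states are each feasible, so is their combination. The Python sketch below checks that property for a tiny hand-built example; the three-concept domain and its states are illustrative assumptions.

```python
# A tiny knowledge space over concepts {a, b, c}, where learning b
# requires a as a prerequisite. The check below verifies closure under
# union for this illustrative family of feasible states.
from itertools import combinations

states = {frozenset(), frozenset("a"), frozenset("c"), frozenset("ab"),
          frozenset("ac"), frozenset("abc")}

def union_closed(family):
    return all((s | t) in family for s, t in combinations(family, 2))

print(union_closed(states))  # True: every union of feasible states is feasible
```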
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
Document 3:::
Validity is the main extent to which a concept, conclusion, or measurement is well-founded and likely corresponds accurately to the real world. The word "valid" is derived from the Latin validus, meaning strong. The validity of a measurement tool (for example, a test in education) is the degree to which the tool measures what it claims to measure. Validity is based on the strength of a collection of different types of evidence (e.g. face validity, construct validity, etc.) described in greater detail below.
In psychometrics, validity has a particular application known as test validity: "the degree to which evidence and theory support the interpretations of test scores" ("as entailed by proposed uses of tests").
It is generally accepted that the concept of scientific validity addresses the nature of reality in terms of statistical measures and as such is an epistemological and philosophical issue as well as a question of measurement. The use of the term in logic is narrower, relating to the relationship between the premises and conclusion of an argument. In logic, validity refers to the property of an argument whereby if the premises are true then the truth of the conclusion follows by necessity. The conclusion of an argument is true if the argument is sound, which is to say if the argument is valid and its premises are true. By contrast, "scientific or statistical validity" is not a deductive claim that is necessarily truth preserving, but is an inductive claim that remains true or false in an undecided manner. This is why "scientific or statistical validity" is a claim that is qualified as being either strong or weak in its nature, it is never necessary nor certainly true. This has the effect of making claims of "scientific or statistical validity" open to interpretation as to what, in fact, the facts of the matter mean.
Validity is important because it can help determine what types of tests to use, and help to ensure researchers are using methods that are not o
Document 4:::
Declarative knowledge is an awareness of facts that can be expressed using declarative sentences, like knowing that Princess Diana died in 1997. It is also called theoretical knowledge, descriptive knowledge, propositional knowledge, and knowledge-that. It is not restricted to one specific use or purpose and can be stored in books or on computers.
Epistemology is the main discipline studying declarative knowledge. Among other things, it studies the essential components of declarative knowledge. According to a traditionally influential view, it has three elements: it is a belief that is true and justified. As a belief, it is a subjective commitment to the accuracy of the believed claim while truth is an objective aspect. To be justified, a belief has to be rational by being based on good reasons. This means that mere guesses do not amount to knowledge even if they are true. In contemporary epistemology, additional or alternative components have been suggested. One proposal is that no contradicting evidence is present. Other suggestions are that the belief was caused by a reliable cognitive process and that the belief is infallible.
Types of declarative knowledge can be distinguished based on the source of knowledge, the type of claim that is known, and how certain the knowledge is. A central contrast is between a posteriori knowledge, which arises from experience, and a priori knowledge, which is grounded in pure rational reflection. Other classifications include domain-specific knowledge and general knowledge, knowledge of facts, concepts, and principles as well as explicit and implicit knowledge.
Declarative knowledge is often contrasted with practical knowledge and knowledge by acquaintance. Practical knowledge consists of skills, like knowing how to ride a horse. It is a form of non-intellectual knowledge since it does not need to involve true beliefs. Knowledge by acquaintance is a familiarity with something based on first-hand experience, like knowing the ta
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Making a specific prediction based on a general principle is known as what type of reasoning?
A. deductive reasoning
B. validating reasoning
C. logical reasoning
D. common sense reasoning
Answer:
|
|
sciq-9236
|
multiple_choice
|
Where do greenhouse gases trap heat?
|
[
"space",
"ground",
"atmosphere",
"altitude"
] |
C
|
Relevant Documents:
Document 0:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to develop an interest in these subjects, leading them as secondary school pupils to choose science A levels, which can lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 2:::
The temperatures of a planet's surface and atmosphere are governed by a delicate balancing of their energy flows. The idealized greenhouse model is based on the fact that certain gases in the Earth's atmosphere, including carbon dioxide and water vapour, are transparent to the high-frequency solar radiation, but are much more opaque to the lower frequency infrared radiation leaving Earth's surface. Thus heat is easily let in, but is partially trapped by these gases as it tries to leave. Rather than get hotter and hotter, Kirchhoff's law of thermal radiation says that the gases of the atmosphere also have to re-emit the infrared energy that they absorb, and they do so, also at long infrared wavelengths, both upwards into space as well as downwards back towards the Earth's surface. In the long-term, the planet's thermal inertia is surmounted and a new thermal equilibrium is reached when all energy arriving on the planet is leaving again at the same rate. In this steady-state model, the greenhouse gases cause the surface of the planet to be warmer than it would be without them, in order for a balanced amount of heat energy to finally be radiated out into space from the top of the atmosphere.
Essential features of this model were first published by Svante Arrhenius in 1896. It has since become a common introductory "textbook model" of the radiative heat transfer physics underlying Earth's energy balance and the greenhouse effect. The planet is idealized by the model as being functionally "layered" with regard to a sequence of simplified energy flows, but dimensionless (i.e. a zero-dimensional model) in terms of its mathematical space. The layers include a surface with constant temperature Ts and an atmospheric layer with constant temperature Ta. For diagrammatic clarity, a gap can be depicted between the atmosphere and the surface. Alternatively, Ts could be interpreted as a temperature representative of the surface and the lower atmosphere, and Ta could be inter
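A minimal numerical sketch of this steady-state balance follows, assuming a single atmospheric layer that is transparent to sunlight and fully opaque to infrared; the constants are standard textbook values and the output is illustrative, not a climate prediction.

```python
# Idealized single-layer greenhouse model: the layer radiates both up and
# down, so in equilibrium the surface satisfies Ts^4 = 2 * Ta^4.
sigma = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
albedo = 0.30      # fraction of incoming sunlight reflected

absorbed = S0 * (1 - albedo) / 4  # absorbed flux, averaged over the sphere
Ta = (absorbed / sigma) ** 0.25   # atmospheric layer temperature
Ts = 2 ** 0.25 * Ta               # warmer surface beneath the layer

print(f"Ta ~ {Ta:.0f} K, Ts ~ {Ts:.0f} K")  # roughly 255 K aloft, 303 K at surface
```

Without the absorbing layer the surface itself would sit near 255 K, so the factor 2^(1/4) is the warming attributable to the idealized greenhouse layer.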
Document 3:::
Tech City College (Formerly STEM Academy) is a free school sixth form located in the Islington area of the London Borough of Islington, England.
It originally opened in September 2013 as STEM Academy Tech City and specialised in Science, Technology, Engineering and Maths (STEM) and the Creative Application of Maths and Science. In September 2015, STEM Academy joined the Aspirations Academy Trust and was renamed Tech City College. Tech City College offers A-levels and BTECs as programmes of study for students.
Document 4:::
Twisted: The Distorted Mathematics of Greenhouse Denial is a 2007 book by Ian G. Enting, who is the Professorial Research Fellow in the ARC Centre of Excellence for Mathematics and Statistics of Complex Systems (MASCOS) based at the University of Melbourne. The book analyses the arguments of climate change deniers and the use and presentation of statistics. Enting contends there are contradictions in their various arguments. The author also presents calculations of the actual emission levels that would be required to stabilise CO2 concentrations. This is an update of calculations that he contributed to the pre-Kyoto IPCC report on Radiative Forcing of Climate.
See also
Climate change
Greenhouse effect
Radiative forcing
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Where do greenhouse gases trap heat?
A. space
B. ground
C. atmosphere
D. altitude
Answer:
|
|
sciq-2121
|
multiple_choice
|
What is the diffusion of water known as?
|
[
"hemostasis",
"osmosis",
"electrolysis",
"evaporation"
] |
B
|
Relevant Documents:
Document 0:::
The convection–diffusion equation is a combination of the diffusion and convection (advection) equations, and describes physical phenomena where particles, energy, or other physical quantities are transferred inside a physical system due to two processes: diffusion and convection. Depending on context, the same equation can be called the advection–diffusion equation, drift–diffusion equation, or (generic) scalar transport equation.
Equation
General
The general equation is
\[ \frac{\partial c}{\partial t} = \nabla \cdot (D \nabla c) - \nabla \cdot (\mathbf{v} c) + R \]
where
c is the variable of interest (species concentration for mass transfer, temperature for heat transfer),
D is the diffusivity (also called diffusion coefficient), such as mass diffusivity for particle motion or thermal diffusivity for heat transport,
v is the velocity field that the quantity is moving with. It is a function of time and space. For example, in advection, c might be the concentration of salt in a river, and then v would be the velocity of the water flow as a function of time and location. Another example: c might be the concentration of small bubbles in a calm lake, and then v would be the velocity of bubbles rising towards the surface by buoyancy (see below) depending on time and location of the bubble. For multiphase flows and flows in porous media, v is the (hypothetical) superficial velocity.
R describes sources or sinks of the quantity c. For example, for a chemical species, R > 0 means that a chemical reaction is creating more of the species, and R < 0 means that a chemical reaction is destroying the species. For heat transport, R > 0 might occur if thermal energy is being generated by friction.
∇ represents gradient and ∇ · represents divergence. In this equation, ∇c represents the concentration gradient.
Understanding the terms involved
The right-hand side of the equation is the sum of three contributions.
The first, ∇ · (D∇c), describes diffusion. Imagine that c is the concentration of a chemical. When concentration is low somewhere compared to the surrounding areas (e.g. a local minimum of concentration), t
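A minimal one-dimensional numerical sketch of the equation above follows, with constant D and v, no source term, and periodic boundaries. The grid and coefficients are illustrative, chosen so that the explicit scheme's stability condition D·Δt/Δx² ≤ 1/2 holds.

```python
# Explicit finite-difference sketch of 1D convection-diffusion:
# dc/dt = D * d2c/dx2 - v * dc/dx, periodic boundaries via np.roll.
import numpy as np

nx, dx, dt = 100, 0.01, 2e-5
D, v = 1e-3, 0.05
x = np.arange(nx) * dx
c = np.exp(-((x - 0.3) ** 2) / 0.002)  # initial concentration pulse

for _ in range(20000):  # integrate to t = 0.4
    diffusion = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    convection = -v * (np.roll(c, -1) - np.roll(c, 1)) / (2 * dx)
    c += dt * (diffusion + convection)

# The pulse has spread out (diffusion) and drifted downstream (convection).
print(round(float(c.max()), 3), round(float(x[np.argmax(c)]), 2))
```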
Document 1:::
Wet Processing Engineering is one of the major streams in Textile Engineering or Textile manufacturing which refers to the engineering of textile chemical processes and associated applied science. The other three streams in textile engineering are yarn engineering, fabric engineering, and apparel engineering. The processes of this stream are involved or carried out in an aqueous stage. Hence, it is called a wet process which usually covers pre-treatment, dyeing, printing, and finishing.
The wet process is usually done in the manufactured assembly of interlacing fibers, filaments and yarns, having a substantial surface (planar) area in relation to its thickness, and adequate mechanical strength to give it a cohesive structure. In other words, the wet process is done on manufactured fiber, yarn and fabric.
All of these stages require an aqueous medium which is created by water. A massive amount of water is required in these processes per day. It is estimated that, on average, 50–100 liters of water are used to process only 1 kilogram of textile goods, depending on the process engineering and applications. Water can be of various qualities and attributes. Not all water can be used in the textile processes; it must have certain properties, quality, color and attributes to be usable. This is the reason why water is a prime concern in wet processing engineering.
Water
Water consumption and discharge of wastewater are the two major concerns. The textile industry uses a large amount of water in its varied processes especially in wet operations such as pre-treatment, dyeing, and printing. Water is required as a solvent of various dyes and chemicals and it is used in washing or rinsing baths in different steps. Water consumption depends upon the application methods, processes, dyestuffs, equipment/machines and technology which may vary mill to mill and material composition. Longer processing sequences, processing of extra dark colors and reprocessing lead
Document 2:::
Evapotranspiration (ET) is the combined processes which move water from the Earth's surface into the atmosphere. It covers both water evaporation (movement of water to the air directly from soil, canopies, and water bodies) and transpiration (evaporation that occurs through the stomata, or openings, in plant leaves). Evapotranspiration is an important part of the local water cycle and climate, and measurement of it plays a key role in agricultural irrigation and water resource management.
Definition of evapotranspiration
Evapotranspiration is a combination of evaporation and transpiration, measured in order to better understand crop water requirements, irrigation scheduling, and watershed management. The two key components of evapotranspiration are:
Evaporation: the movement of water directly to the air from sources such as the soil and water bodies. It can be affected by factors including heat, humidity, solar radiation and wind speed.
Transpiration: the movement of water from root systems, through a plant, and exit into the air as water vapor. This exit occurs through stomata in the plant. Rate of transpiration can be influenced by factors including plant type, soil type, weather conditions and water content, and also cultivation practices.
Evapotranspiration is typically measured in millimeters of water (i.e. volume of water moved per unit area of the Earth's surface) in a set unit of time. Globally, it is estimated that on average between three-fifths and three-quarters of land precipitation is returned to the atmosphere via evapotranspiration.
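Since a millimetre of evapotranspiration is a volume of water per unit area, the unit converts directly: 1 mm over 1 m² is 1 litre. A short worked example follows; the daily rate is a hypothetical figure, not a measured value.

```python
# Worked unit conversion for evapotranspiration measured in millimetres:
# 1 mm of ET over 1 square metre corresponds to 1 litre of water.
et_mm_per_day = 5.0  # hypothetical daily evapotranspiration rate
area_m2 = 10_000     # one hectare

litres_per_day = et_mm_per_day * area_m2  # 1 mm x 1 m^2 = 1 L
print(f"{litres_per_day:,.0f} L/day")     # 50,000 L moved to the atmosphere daily
```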
Evapotranspiration does not, in general, account for other mechanisms which are involved in returning water to the atmosphere, though some of these, such as snow and ice sublimation in regions of high elevation or high latitude, can make a large contribution to atmospheric moisture even under standard conditions.
Factors that impact evapotranspiration levels
Primary factors
Because evaporation and transpiration
Document 3:::
The Stefan flow, occasionally called Stefan's flow, is a transport phenomenon concerning the movement of a chemical species by a flowing fluid (typically in the gas phase) that is induced to flow by the production or removal of the species at an interface. Any process that adds the species of interest to or removes it from the flowing fluid may cause the Stefan flow, but the most common processes include evaporation, condensation, chemical reaction, sublimation, ablation, adsorption, absorption, and desorption. It was named after the Slovenian physicist, mathematician, and poet Josef Stefan for his early work on calculating evaporation rates.
The Stefan flow is distinct from diffusion as described by Fick's law, but diffusion almost always also occurs in multi-species systems that are experiencing the Stefan flow. In systems undergoing one of the species addition or removal processes mentioned previously, the addition or removal generates a mean flow in the flowing fluid as the fluid next to the interface is displaced by the production or removal of additional fluid by the processes occurring at the interface. The transport of the species by this mean flow is the Stefan flow. When concentration gradients of the species are also present, diffusion transports the species relative to the mean flow. The total transport rate of the species is then given by a summation of the Stefan flow and diffusive contributions.
An example of the Stefan flow occurs when a droplet of liquid evaporates in air. In this case, the vapor/air mixture surrounding the droplet is the flowing fluid, and liquid/vapor boundary of the droplet is the interface. As heat is absorbed by the droplet from the environment, some of the liquid evaporates into vapor at the surface of the droplet, and flows away from the droplet as it is displaced by additional vapor evaporating from the droplet. This process causes the flowing medium to move away from the droplet at some mean speed that is dependent on
Document 4:::
Dispersive mass transfer, in fluid dynamics, is the spreading of mass from highly concentrated areas to less concentrated areas. It is one form of mass transfer.
Dispersive mass flux is analogous to diffusion, and it can also be described using Fick's first law:
\[ J = -E \frac{\partial c}{\partial x} \]
where c is mass concentration of the species being dispersed, E is the dispersion coefficient, and x is the position in the direction of the concentration gradient. Dispersion can be differentiated from diffusion in that it is caused by non-ideal flow patterns (i.e. deviations from plug flow) and is a macroscopic phenomenon, whereas diffusion is caused by random molecular motions (i.e. Brownian motion) and is a microscopic phenomenon. Dispersion is often more significant than diffusion in convection-diffusion problems. The dispersion coefficient is frequently modeled as the product of the fluid velocity, U, and some characteristic length scale, α:
\[ E = \alpha U \]
Transport phenomena
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the diffusion of water known as?
A. hemostasis
B. osmosis
C. electrolysis
D. evaporation
Answer:
|
|
sciq-10078
|
multiple_choice
|
What elements do lipids primarily consist of?
|
[
"carbon, hydrogen, and oxygen",
"iron, hydrogen , and oxygen",
"silicon, hydrogen , and oxygen",
"helium, hydrogen , and oxygen"
] |
A
|
Relevant Documents:
Document 0:::
The lipidome refers to the totality of lipids in cells. Lipids are one of the four major molecular components of biological organisms, along with proteins, sugars and nucleic acids. Lipidome is a term coined in the context of omics in modern biology, within the field of lipidomics. It can be studied using mass spectrometry and bioinformatics as well as traditional lab-based methods. The lipidome of a cell can be subdivided into the membrane-lipidome and mediator-lipidome.
The first cell lipidome to be published was that of a mouse macrophage in 2010. The lipidome of the yeast Saccharomyces cerevisiae has been characterised with an estimated 95% coverage; studies of the human lipidome are ongoing. For example, the human plasma lipidome consists of almost 600 distinct molecular species. Research suggests that the lipidome of an individual may be able to indicate cancer risks associated with dietary fats, particularly breast cancer.
See also
Genome
Proteome
Glycome
Document 1:::
A saponifiable lipid contains an ester functional group. Saponifiable lipids are made up of long-chain carboxylic (or fatty) acids connected to an alcoholic functional group through an ester linkage, which can undergo a saponification reaction. The fatty acids are released upon base-catalyzed ester hydrolysis to form ionized salts. The primary saponifiable lipids are free fatty acids, neutral glycerolipids, glycerophospholipids, sphingolipids, and glycolipids.
By comparison, the non-saponifiable class of lipids is made up of terpenes, including fat-soluble A and E vitamins, and certain steroids, such as cholesterol.
Applications
Saponifiable lipids have relevant applications as a source of biofuel and can be extracted from various forms of biomass to produce biodiesel.
See also
Lipids
Simple lipid
Document 2:::
Lipidology is the scientific study of lipids. Lipids are a group of biological macromolecules that have a multitude of functions in the body. Clinical studies on lipid metabolism in the body have led to developments in therapeutic lipidology for disorders such as cardiovascular disease.
History
Compared to other biomedical fields, lipidology was long-neglected as the handling of oils, smears, and greases was unappealing to scientists and lipid separation was difficult. It was not until 2002 that lipidomics, the study of lipid networks and their interaction with other molecules, appeared in the scientific literature. Attention to the field was bolstered by the introduction of chromatography, spectrometry, and various forms of spectroscopy to the field, allowing lipids to be isolated and analyzed. The field was further popularized following the cytologic application of the electron microscope, which led scientists to find that many metabolic pathways take place within, along, and through the cell membrane - the properties of which are strongly influenced by lipid composition.
Clinical lipidology
The Framingham Heart Study and other epidemiological studies have found a correlation between lipoproteins and cardiovascular disease (CVD). Lipoproteins are generally a major target of study in lipidology since lipids are transported throughout the body in the form of lipoproteins.
A class of lipids known as phospholipids helps make up lipoproteins; one type of lipoprotein is called high-density lipoprotein (HDL). A high concentration of high-density lipoprotein cholesterol (HDL-C) has what is known as a vasoprotective effect on the body, a finding that correlates with enhanced cardiovascular health. There is also a correlation between those with diseases such as chronic kidney disease, coronary artery disease, or diabetes mellitus and the possibility of a low vasoprotective effect from HDL.
Another factor of CVD that is often overlooked involves the
Document 3:::
Fat globules (also known as mature lipid droplets) are individual pieces of intracellular fat in human cell biology. The lipid droplet's function is to store energy for the organism's body, and it is found in every type of adipocyte. They can consist of a vacuole, a droplet of triglyceride, or any other blood lipid, as opposed to fat cells in between other cells in an organ. They contain a hydrophobic core and are encased in a phospholipid monolayer membrane. Due to their hydrophobic nature, lipids and lipid digestive derivatives must be transported in the globular form within the cell, blood, and tissue spaces.
The formation of a fat globule starts within the membrane bilayer of the endoplasmic reticulum. It starts as a bud and detaches from the ER membrane to join other droplets. After the droplets fuse, a mature droplet (full-fledged globule) is formed and can then partake in neutral lipid synthesis or lipolysis.
Globules of fat are emulsified in the duodenum into smaller droplets by bile salts during food digestion, speeding up the rate of digestion by the enzyme lipase at a later point in digestion. Bile salts possess detergent properties that allow them to emulsify fat globules into smaller emulsion droplets, and then into even smaller micelles. This increases the surface area for lipid-hydrolyzing enzymes to act on the fats.
Micelles are roughly 200 times smaller than fat emulsion droplets, allowing them to facilitate the transport of monoglycerides and fatty acids across the surface of the enterocyte, where absorption occurs.
Milk fat globules (MFGs) are another form of intracellular fat found in the mammary glands of female mammals. Their function is to provide enriching glycoproteins from the female to their offspring. They are formed in the endoplasmic reticulum found in the mammary epithelial lactating cell. The globules are made up of triacylglycerols encased in cellular membranes and proteins like adipophilin and TIP 47. The proteins are spread througho
Document 4:::
This list consists of common foods with their cholesterol content recorded in milligrams per 100 grams (3.5 ounces) of food.
Functions
Cholesterol is a sterol, a steroid-like lipid made by animals, including humans. The human body makes one-eighth to one-fourth of a teaspoon of pure cholesterol daily. A cholesterol level of 5.5 millimoles per litre or below is recommended for an adult. A rise of cholesterol in the body can lead to a condition called atherosclerosis, in which excessive cholesterol is deposited in artery walls. This condition blocks the blood flow to vital organs, which can result in high blood pressure or stroke.
Cholesterol is not always bad. It is a vital part of the cell wall and a precursor to substances such as brain matter and some sex hormones. Some types of cholesterol are beneficial to the heart and blood vessels. High-density lipoprotein is commonly called "good" cholesterol. These lipoproteins help in the removal of cholesterol from the cells, which is then transported back to the liver where it is disintegrated and excreted as waste or broken down into parts.
Cholesterol content of various foods
See also
Nutrition
Plant stanol ester
Fatty acid
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What elements do lipids primarily consist of?
A. carbon, hydrogen, and oxygen
B. iron, hydrogen , and oxygen
C. silicon, hydrogen , and oxygen
D. helium, hydrogen , and oxygen
Answer:
|
|
sciq-6825
|
multiple_choice
|
What are large sheets of ice that cover relatively flat ground called?
|
[
"cellular glaciers",
"rocky glaciers",
"land glaciers",
"continental glaciers"
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Further Mathematics is the title given to a number of advanced secondary mathematics courses. The term "Higher and Further Mathematics", and the term "Advanced Level Mathematics", may also refer to any of several advanced mathematics courses at many institutions.
In the United Kingdom, Further Mathematics describes a course studied in addition to the standard mathematics AS-Level and A-Level courses. In the state of Victoria in Australia, it describes a course delivered as part of the Victorian Certificate of Education (see § Australia (Victoria) for a more detailed explanation). Globally, it describes a course studied in addition to GCE AS-Level and A-Level Mathematics, or one which is delivered as part of the International Baccalaureate Diploma.
In other words, more mathematics can also be referred to as part of advanced mathematics, or advanced level math.
United Kingdom
Background
A qualification in Further Mathematics involves studying both pure and applied modules. Whilst the pure modules (formerly known as Pure 4–6 or Core 4–6, now known as Further Pure 1–3, where 4 exists for the AQA board) build on knowledge from the core mathematics modules, the applied modules may start from first principles.
The structure of the qualification varies between exam boards.
With regard to Mathematics degrees, most universities do not require Further Mathematics, and may incorporate foundation math modules or offer "catch-up" classes covering any additional content. Exceptions are the University of Warwick and the University of Cambridge, which require Further Mathematics to at least AS level; University College London requires or recommends an A2 in Further Maths for its maths courses; Imperial College requires an A in A level Further Maths, while other universities may recommend it or may promise lower offers in return. Some schools and colleges may not offer Further Mathematics, but online resources are available
Although the subject has about 60% of its cohort obtainin
Document 2:::
The SAT Subject Test in Biology was the name of a one-hour multiple-choice test on biology given by the College Board. A student chose whether to take the test depending upon the college entrance requirements of the schools to which the student planned to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT Subject Tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological and molecular tests. A set of 60 questions was taken by all test takers for Biology, and a choice of 20 questions was allowed between either the E or M tests. The test was graded on a scale between 200 and 800. The average score for Molecular was 630, while for Ecological it was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. More specific questions related respectively to ecological concepts (such as population studies and general ecology) on the E test and to molecular concepts (such as DNA structure, translation, and biochemistry) on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 3:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 4:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
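In Doignon and Falmagne's formulation, the defining property of a knowledge space is that the family of feasible states is closed under union. A minimal Python sketch of that closure check follows; the domain and states are invented for illustration and are not taken from ALEKS or RATH:

```python
# Minimal sketch: a knowledge structure on a domain Q is a family of subsets of Q
# containing both the empty set and Q itself; a knowledge *space* is such a
# family that is additionally closed under union.
from itertools import combinations

Q = frozenset({"a", "b", "c"})  # hypothetical domain of skills
states = {frozenset(), frozenset({"a"}), frozenset({"b"}),
          frozenset({"a", "b"}), Q}  # candidate feasible knowledge states

def is_knowledge_space(states, domain):
    if frozenset() not in states or domain not in states:
        return False
    # Closure under union: the union of any two feasible states is feasible.
    return all(s | t in states for s, t in combinations(states, 2))

print(is_knowledge_space(states, Q))  # True
```

Here {"a", "c"} is deliberately not a feasible state, which encodes a prerequisite in this toy example: skill "c" can only be learned after both "a" and "b" are mastered.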
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are large sheets of ice that cover relatively flat ground called?
A. cellular glaciers
B. rocky glaciers
C. land glaciers
D. continental glaciers
Answer:
|
|
sciq-7992
|
multiple_choice
|
What subphylum, which includes crabs and crayfish, represents the dominant aquatic arthropods?
|
[
"invertebrates",
"crustaceans",
"arachnids",
"sponges"
] |
B
|
Relavent Documents:
Document 0:::
Daphnia pulex is the most common species of water flea. It has a cosmopolitan distribution: the species is found throughout the Americas, Europe, and Australia. It is a model species, and was the first crustacean to have its genome sequenced.
Description
D. pulex is an arthropod whose body segments are difficult to distinguish; they can only be recognised by the appendages they bear (only ever one pair per segment), and by studying the internal anatomy. The head is distinct and is made up of six segments, which are fused together even in the embryo. It bears the mouthparts and two pairs of antennae, the second pair of which is enlarged into powerful organs used for swimming. No clear division is seen between the thorax and abdomen, which collectively bear five pairs of appendages. The shell surrounding the animal extends posteriorly into a spine. Like most other Daphnia species, D. pulex reproduces by cyclical parthenogenesis, alternating between sexual and asexual reproduction.
Ecology
D. pulex occurs in a wide range of aquatic habitats, although it is most closely associated with small, shaded pools. In oligotrophic lakes, D. pulex has little pigmentation, while it may become bright red in hypereutrophic waters, due to the production of haemoglobin.
Predation
Daphnia species are prey for a variety of both vertebrate and invertebrate predators. The role of predation on D. pulex population ecology is extensively studied, and has been shown to be a major axis of variation in shaping population dynamics and landscape-level distribution. In addition to the direct population ecological effects of predation, the process contributes to phenotypic evolution in contrasting ways; larger D. pulex individuals are more visible to vertebrate predators, but invertebrate predators are unable to handle larger ones. As a result, larger water fleas tend to be found with invertebrate predators, while smaller size is associated with vertebrate predators.
Similar to some other Daphnia species,
Document 1:::
Invertebrate zoology is the subdiscipline of zoology that consists of the study of invertebrates, animals without a backbone (a structure which is found only in fish, amphibians, reptiles, birds and mammals).
Invertebrates are a vast and very diverse group of animals that includes sponges, echinoderms, tunicates, numerous different phyla of worms, molluscs, arthropods and many additional phyla. Single-celled organisms or protists are usually not included within the same group as invertebrates.
Subdivisions
Invertebrates represent 97% of all named animal species, and because of that fact, this subdivision of zoology has many further subdivisions, including but not limited to:
Arthropodology - the study of arthropods, which includes
Arachnology - the study of spiders and other arachnids
Entomology - the study of insects
Carcinology - the study of crustaceans
Myriapodology - the study of centipedes, millipedes, and other myriapods
Cnidariology - the study of Cnidaria
Helminthology - the study of parasitic worms.
Malacology - the study of mollusks, which includes
Conchology - the study of mollusk shells.
Limacology - the study of slugs.
Teuthology - the study of cephalopods.
Invertebrate paleontology - the study of fossil invertebrates
These divisions are sometimes further divided into more specific specialties. For example, within arachnology, acarology is the study of mites and ticks; within entomology, lepidoptery is the study of butterflies and moths, myrmecology is the study of ants and so on. Marine invertebrates are all those invertebrates that exist in marine habitats.
History
Early Modern Era
In the early modern period starting in the late 16th century, invertebrate zoology saw growth in the number of publications made and improvement in the experimental practices associated with the field.
Document 2:::
A cnidariologist is a zoologist specializing in Cnidaria, a group of freshwater and marine aquatic animals that include the sea anemones, corals, and jellyfish.
Examples
Edward Thomas Browne (1866-1937)
Henry Bryant Bigelow (1879-1967)
Randolph Kirkpatrick (1863–1950)
Kamakichi Kishinouye (1867-1929)
Paul Lassenius Kramp (1887-1975)
Alfred G. Mayer (1868-1922)
See also
Document 3:::
Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology.
Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago.
Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad
Document 4:::
Carcinology is a branch of zoology that consists of the study of crustaceans, a group of arthropods that includes lobsters, crayfish, shrimp, krill, copepods, barnacles and crabs. Other names for carcinology are malacostracology, crustaceology, and crustalogy, and a person who studies crustaceans is a carcinologist or occasionally a malacostracologist, a crustaceologist, or a crustalogist.
The word carcinology derives from Greek , karkínos, "crab"; and , -logia.
Subfields
Carcinology is a subdivision of arthropodology, the study of arthropods which includes arachnids, insects, and myriapods. Carcinology branches off into taxonomically oriented disciplines such as:
astacology – the study of crayfish
cirripedology – the study of barnacles
copepodology – the study of copepods
Journals
Scientific journals devoted to the study of crustaceans include:
Crustaceana
Journal of Crustacean Biology
Nauplius (journal)
See also
Entomology
Publications in carcinology
List of carcinologists
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What subphylum, which includes crabs and crayfish, represents the dominant aquatic arthropods?
A. invertebrates
B. crustaceans
C. arachnids
D. sponges
Answer:
|
|
sciq-9131
|
multiple_choice
|
Where do plankton, nekton, and benthos live?
|
[
"arctic",
"forests",
"in the oceans",
"deserts"
] |
C
|
Relavent Documents:
Document 0:::
Aquatic science is the study of the various bodies of water that make up our planet, including oceanic and freshwater environments. Aquatic scientists study the movement of water, the chemistry of water, aquatic organisms, aquatic ecosystems, the movement of materials in and out of aquatic ecosystems, and the use of water by humans, among other things. Aquatic scientists examine current processes as well as historic processes, and the water bodies that they study can range from tiny areas measured in millimeters to full oceans. Moreover, aquatic scientists work in interdisciplinary groups. For example, a physical oceanographer might work with a biological oceanographer to understand how physical processes, such as tropical cyclones or rip currents, affect organisms in the Atlantic Ocean. Chemists and biologists, on the other hand, might work together to see how the chemical makeup of a certain body of water affects the plants and animals that reside there. Aquatic scientists can work to tackle global problems, such as global oceanic change, and local problems, such as trying to understand why a drinking water supply in a certain area is polluted.
There are two main fields of study that fall within the field of aquatic science. These fields of study include oceanography and limnology.
Oceanography
Oceanography refers to the study of the physical, chemical, and biological characteristics of oceanic environments. Oceanographers study the history, current condition, and future of the planet's oceans. They also study marine life and ecosystems, ocean circulation, plate tectonics, the geology of the seafloor, and the chemical and physical properties of the ocean.
Oceanography is interdisciplinary. For example, there are biological oceanographers and marine biologists. These scientists specialize in marine organisms. They study how these organisms develop, their relationship with one another, and how they interact and adapt to their environment. Biological oceanographers
Document 1:::
Launched in 1889, the Plankton Expedition was the first scientific effort to systematically study marine plankton—small, drifting aquatic organisms. Just as the earlier Challenger Expedition is considered to be the founding expedition of oceanography, the Plankton Expedition played a seminal role in establishing the quantitative and systematic study of plankton in the ocean.
Inspiration and aims
Victor Hensen, a physiologist from the University of Kiel in Germany, first used the word “plankton” in 1887 to refer to those organisms in the ocean that drift with the currents, rather than moving around under their own power. Hensen’s aim in organizing the Plankton Expedition was to improve understanding of the relationship between plankton and fisheries, as he believed plankton to be the main food source for fish.
Course, findings, and legacy
The Plankton Expedition was funded by the German government and private donors. In addition to Hensen, the expedition consisted of a team of five scientists and an artist: zoologist Frederick Dahl, physiologist and protistologist Karl Brandt, botanist Franz Schütt, geographer Otto Krümmel, and microbiologist Bernard Fischer, as well as artist Richard Eschke.
The crew steamed out on the National across major regions of the Atlantic between July and November 1889, collecting plankton from over 100 stations down to a depth of 200m. Hensen’s major conclusions from the collections—that plankton abundance was too low to support fish populations and that plankton are evenly distributed in the ocean—have been shown to be inaccurate, being based on faulty collection methods. The expedition reports detailing the types of marine plankton collected are the legacy of the endeavor and are credited with the eventual development of the field of microbiology.
Document 2:::
Edward Brinton (January 12, 1924 – January 13, 2010) was a professor of oceanography and research biologist. His particular area of expertise was Euphausiids or krill, small shrimp-like creatures found in all the oceans of the world.
Early life
Brinton was born on January 12, 1924, in Richmond, Indiana to a Quaker couple, Howard Brinton and Anna Shipley Cox Brinton. Much of his childhood was spent on the grounds of Mills College where his mother was Dean of Faculty and his father was a professor. The family later moved to the Pendle Hill Quaker Center for Study and Contemplation, in Pennsylvania where his father and mother became directors.
Academic career
Brinton attended High School at Westtown School in Chester County, Pennsylvania. He studied at Haverford College and graduated in 1949 with a bachelor's degree in biology. He enrolled at Scripps Institution of Oceanography as a graduate student in 1950 and was awarded a Ph.D. in 1957. He continued on as a research biologist in the Marine Life Research Group, part of the CalCOFI program. He soon turned his dissertation into a major publication, The Distribution of Pacific Euphausiids. In this large monograph, he laid out the major biogeographic provinces of the Pacific (and part of the Atlantic), large-scale patterns of pelagic diversity and one of the most rational hypotheses for the mechanism of sympatric, oceanic speciation. In all of these studies the role of physical oceanography and circulation played a prominent part. His work has since been validated by others and continues, to this day, to form the basis for our attempts to understand large-scale pelagic ecology and the role of physics of the movement of water in the regulation of pelagic ecosystems. In addition to these studies he has led in the studies of how climatic variations have led to the large variations in the California Current, and its populations and communities. He has described several new species and, in collaboration with Margaret K
Document 3:::
The Station biologique de Roscoff (SBR) is a French marine biology and oceanography research and teaching center. Founded by Henri de Lacaze-Duthiers (1821–1901) in 1872, it is at the present time affiliated to the Sorbonne University (SU) and the Centre National de la Recherche Scientifique (CNRS).
Overview
The Station biologique is situated in Roscoff on the northern coast of Brittany (France), about 60 km east of Brest. Its location offers access to an exceptional variety of biotopes, most of which are accessible at low tide. These biotopes support a large variety of both plant (700) and animal (3000) marine species. Founded in 1872 by Professor Henri de Lacaze-Duthiers (then Zoology Chair at the Sorbonne University), the SBR constitutes, since March 1985, the Internal School 937 of the Pierre and Marie Curie University (UPMC). In November 1985, the SBR was given the status of Oceanographic Observatory by the Institut National des Sciences de l'Univers et de l'Environnement (National Institute for the Cosmological and Environmental Sciences; INSU). The SBR is also, since January 2001, a Research Federation within the Life Sciences Department of the CNRS.
The personnel of the SBR, which includes about 200 permanent staff, consists of scientists, teaching scientists, technicians, postdoctoral fellows, PhD students and administrative staff. These personnel are organized into various research groups within research units that are recognised by the Life Sciences Department of the CNRS (the current research units have the following codes: FR 2424, UMR 8227, UMR 7144, UMI 3614 and USR 3151). The various research groups work on a wide range of topics, ranging from investigation of the fine structure and function of biological macromolecules to global oceanic studies. Genomic approaches constitute an important part of many of the research programmes, notably via the European Network of Excellence "Marine Genomics", which is coordinated by the SBR. With the accommodation fac
Document 4:::
Espegrend (also known as Espeland) is a marine biological field station located in Bergen, Norway. The station is located close to the airport Flesland, 20 kilometers south of Bergen.
Overview
The Department of Biological Sciences at the University of Bergen has specialized laboratories and research installations in the main campus in downtown Bergen. It is also responsible for the marine biological field station at Espeland. The station is located in the Raunefjord, with deep-sea fauna easily available. The station has good mesocosm facilities, a research vessel (RV Aurelia), and good facilities for benthic and planktonic sampling. Espegrend has a number of specialised facilities and is well known for its mesocosm facility. Espegrend has very good access to diverse and well-described marine habitats and model environments. The station comprises a boarding house, boats, laboratories and basic equipment for marine research.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Where do plankton, nekton, and benthos live?
A. arctic
B. forests
C. in the oceans
D. deserts
Answer:
|
|
sciq-8802
|
multiple_choice
|
What muscle in the chest helps inflate and deflate the lungs?
|
[
"heart",
"diaphragm",
"cartilage",
"pectoral"
] |
B
|
Relavent Documents:
Document 0:::
The thoracic diaphragm, or simply the diaphragm, is a sheet of internal skeletal muscle in humans and other mammals that extends across the bottom of the thoracic cavity. The diaphragm is the most important muscle of respiration, and separates the thoracic cavity, containing the heart and lungs, from the abdominal cavity: as the diaphragm contracts, the volume of the thoracic cavity increases, creating a negative pressure there, which draws air into the lungs. Its high oxygen consumption is reflected in the many mitochondria and capillaries present, more than in any other skeletal muscle.
The term diaphragm in anatomy, created by Gerard of Cremona, can refer to other flat structures such as the urogenital diaphragm or pelvic diaphragm, but "the diaphragm" generally refers to the thoracic diaphragm. In humans, the diaphragm is slightly asymmetric—its right half sits higher (superior) than the left half, since the large liver rests beneath the right half of the diaphragm. There is also speculation that the diaphragm is lower on the left side due to the heart's presence.
Other mammals have diaphragms, and other vertebrates such as amphibians and reptiles have diaphragm-like structures, but important details of the anatomy may vary, such as the position of the lungs in the thoracic cavity.
Structure
The diaphragm is an upward curved, c-shaped structure of muscle and fibrous tissue that separates the thoracic cavity from the abdomen. The superior surface of the dome forms the floor of the thoracic cavity, and the inferior surface the roof of the abdominal cavity.
As a dome, the diaphragm has peripheral attachments to structures that make up the abdominal and chest walls. The muscle fibres from these attachments converge in a central tendon, which forms the crest of the dome. Its peripheral part consists of muscular fibers that take origin from the circumference of the inferior thoracic aperture and converge to be inserted into a central tendon.
The muscle fibres of t
Document 1:::
Speech science refers to the study of production, transmission and perception of speech. Speech science involves anatomy, in particular the anatomy of the oro-facial region and neuroanatomy, physiology, and acoustics.
Speech production
The production of speech is a highly complex motor task that involves approximately 100 orofacial, laryngeal, pharyngeal, and respiratory muscles. Precise and expeditious timing of these muscles is essential for the production of temporally complex speech sounds, which are characterized by transitions as short as 10 ms between frequency bands and an average speaking rate of approximately 15 sounds per second. Speech production requires airflow from the lungs (respiration) to be phonated through the vocal folds of the larynx (phonation) and resonated in the vocal cavities shaped by the jaw, soft palate, lips, tongue and other articulators (articulation).
Respiration
Respiration is the physical process of gas exchange between an organism and its environment, involving four steps (ventilation, distribution, perfusion and diffusion) and two processes (inspiration and expiration). Respiration can be described as the mechanical process of air flowing into and out of the lungs on the principle of Boyle's law, which states that, as the volume of a container increases, the air pressure will decrease. This relatively negative pressure will cause air to enter the container until the pressure is equalized. During inspiration of air, the diaphragm contracts and the lungs expand, drawn by the pleurae through surface tension and negative pressure. When the lungs expand, air pressure becomes negative compared to atmospheric pressure and air will flow from the area of higher pressure to fill the lungs. Forced inspiration for speech uses accessory muscles to elevate the rib cage and enlarge the thoracic cavity in the vertical and lateral dimensions. During forced expiration for speech, muscles of the trunk and abdomen reduce the size of the thoracic cavity by
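The Boyle's-law relation invoked above can be written explicitly. For a fixed amount of gas at constant temperature (a standard textbook form, not stated in the excerpt itself):

```latex
P_1 V_1 = P_2 V_2 \quad\Longrightarrow\quad P_2 = P_1 \, \frac{V_1}{V_2}
```

Since inspiration enlarges the thoracic cavity (V2 > V1), the pressure in the lungs falls below atmospheric pressure and air flows in until the pressures equalize.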
Document 2:::
The pulmonary plexus is an autonomic plexus formed from pulmonary branches of vagus nerve and the sympathetic trunk. The plexus is in continuity with the deep cardiac plexus.
Structure
It innervates the bronchial tree and the visceral pleura. According to the relation of nerves to the root of the lung, the pulmonary plexus is divided into the anterior pulmonary plexus, which lies in front of the lung and the posterior pulmonary plexus, which lies behind the lung. The anterior pulmonary plexus is close in proximity to the pulmonary artery. The posterior pulmonary plexus is bounded by the superior edge of the pulmonary artery and the lower edge of the pulmonary vein. Both lungs are innervated primarily by the posterior pulmonary plexus; it accounts for 74–77% of the total innervation.
Function
Innervation of the bronchial tree regulates contraction of bronchial smooth muscles, mucous secretions from submucosal glands, vascular permeability, and blood flow. Sensory fiber innervation of the visceral pleura is thought to allow stretch detection.
Document 3:::
Inhalation (or inspiration) is the process of drawing air or other gases into the respiratory tract, primarily for the purpose of breathing and oxygen exchange within the body. It is a fundamental physiological function in humans and many other organisms, essential for sustaining life. Inhalation is the first phase of respiration, allowing the exchange of oxygen and carbon dioxide between the body and the environment, vital for the body's metabolic processes. This article delves into the mechanics of inhalation, its significance in various contexts, and its potential impact on health.
Physiology
The process of inhalation involves a series of coordinated movements and physiological mechanisms. The primary anatomical structures involved in inhalation are the respiratory system, which includes the nose, mouth, pharynx, larynx, trachea, bronchi, and lungs. Here is a brief overview of the inhalation process:
Inspiration: Inhalation begins with the contraction of the thoracic diaphragm, a dome-shaped muscle that separates the chest cavity from the abdominal cavity. The diaphragm contracts and moves downward, increasing the volume of the thoracic cavity.
Air entry: When a person or animal inhales, the diaphragm, located below the lungs, contracts, and the intercostal muscles between the ribs expand the chest cavity. This expansion creates a lower pressure inside the chest compared to the atmosphere, causing air to flow into the lungs.
Air filtration: The nasal passages and the mouth act as entry points for air. These passages are lined with tiny hair-like structures called cilia and mucus-producing cells that help filter and humidify the incoming air, removing particles and debris before it reaches the lungs.
Gas exchange: Once the air enters the lungs, it travels through a branching network of tubes known as the bronchial tree, ultimately reaching tiny air sacs called alveoli. In the alveoli, oxygen from the inhaled air diffuses into the bloodstream, while carbon dioxide
Document 4:::
The control of ventilation is the physiological mechanisms involved in the control of breathing, which is the movement of air into and out of the lungs. Ventilation facilitates respiration. Respiration refers to the utilization of oxygen and balancing of carbon dioxide by the body as a whole, or by individual cells in cellular respiration.
The most important function of breathing is the supplying of oxygen to the body and balancing of the carbon dioxide levels. Under most conditions, the partial pressure of carbon dioxide (PCO2), or concentration of carbon dioxide, controls the respiratory rate.
The peripheral chemoreceptors that detect changes in the levels of oxygen and carbon dioxide are located in the arterial aortic bodies and the carotid bodies. Central chemoreceptors are primarily sensitive to changes in the pH of the blood (resulting from changes in the levels of carbon dioxide), and they are located on the medulla oblongata near to the medullary respiratory groups of the respiratory center.
Information from the peripheral chemoreceptors is conveyed along nerves to the respiratory groups of the respiratory center. There are four respiratory groups, two in the medulla and two in the pons. The two groups in the pons are known as the pontine respiratory group.
Dorsal respiratory group – in the medulla
Ventral respiratory group – in the medulla
Pneumotaxic center – various nuclei of the pons
Apneustic center – nucleus of the pons
From the respiratory center, the muscles of respiration, in particular the diaphragm, are activated to cause air to move in and out of the lungs.
Control of respiratory rhythm
Ventilatory pattern
Breathing is normally an unconscious, involuntary, automatic process. The pattern of motor stimuli during breathing can be divided into an inhalation stage and an exhalation stage. Inhalation shows a sudden, ramped increase in motor discharge to the respiratory muscles (and the pharyngeal constrictor muscles). Before the end of inh
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What muscle in the chest helps inflate and deflate the lungs?
A. heart
B. diaphragm
C. cartilage
D. pectoral
Answer:
|
|
sciq-8977
|
multiple_choice
|
What atoms make up a water molecule?
|
[
"alumhg and oxygen",
"hydrogen and oxygen",
"One Hydrogen",
"Sodium and Oxygen"
] |
B
|
Relavent Documents:
Document 0:::
Atomicity is the total number of atoms present in a molecule. For example, each molecule of oxygen (O2) is composed of two oxygen atoms. Therefore, the atomicity of oxygen is 2.
In older contexts, atomicity is sometimes equivalent to valency. Some authors also use the term to refer to the maximum number of valencies observed for an element.
Classifications
Based on atomicity, molecules can be classified as:
Monoatomic (composed of one atom). Examples include He (helium), Ne (neon), Ar (argon), and Kr (krypton). All noble gases are monoatomic.
Diatomic (composed of two atoms). Examples include H2 (hydrogen), N2 (nitrogen), O2 (oxygen), F2 (fluorine), and Cl2 (chlorine). Halogens are usually diatomic.
Triatomic (composed of three atoms). Examples include O3 (ozone).
Polyatomic (composed of three or more atoms). Examples include S8.
Atomicity may vary in different allotropes of the same element.
The exact atomicity of metals, as well as some other elements such as carbon, cannot be determined because they consist of a large and indefinite number of atoms bonded together. They are typically designated as having an atomicity of 1.
The atomicity of a homonuclear molecule can be derived by dividing the molecular weight by the atomic weight. For example, the molecular weight of oxygen is 31.999, while its atomic weight is 15.999; therefore, its atomicity is approximately 2 (31.999/15.999 ≈ 2).
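The division described above is easy to check mechanically; a one-line sketch using the corrected values from the text:

```python
# Sketch of the atomicity calculation for a homonuclear molecule (O2).
molecular_weight_O2 = 31.999  # g/mol
atomic_weight_O = 15.999      # g/mol
atomicity = round(molecular_weight_O2 / atomic_weight_O)
print(atomicity)  # 2
```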
Examples
The most common values of atomicity for the first 30 elements in the periodic table are as follows:
Document 1:::
In chemistry, the carbon-hydrogen bond (C−H bond) is a chemical bond between carbon and hydrogen atoms that can be found in many organic compounds. This bond is a covalent, single bond, meaning that carbon shares its outer valence electrons with up to four hydrogens. This completes both of their outer shells, making them stable.
Carbon–hydrogen bonds have a bond length of about 1.09 Å (1.09 × 10−10 m) and a bond energy of about 413 kJ/mol (see table below). Using Pauling's scale—C (2.55) and H (2.2)—the electronegativity difference between these two atoms is 0.35. Because of this small difference in electronegativities, the C−H bond is generally regarded as being non-polar. In structural formulas of molecules, the hydrogen atoms are often omitted. Compound classes consisting solely of C−C and C−H bonds are alkanes, alkenes, alkynes, and aromatic hydrocarbons. Collectively they are known as hydrocarbons.
In October 2016, astronomers reported that the very basic chemical ingredients of life—the carbon-hydrogen molecule (CH, or methylidyne radical), the carbon-hydrogen positive ion (CH+) and the carbon ion (C+)—are the result, in large part, of ultraviolet light from stars, rather than of other processes, such as turbulent events related to supernovae and young stars, as thought earlier.
Bond length
The length of the carbon-hydrogen bond varies slightly with the hybridisation of the carbon atom. A bond between a hydrogen atom and an sp2 hybridised carbon atom is about 0.6% shorter than between hydrogen and sp3 hybridised carbon. A bond between hydrogen and sp hybridised carbon is shorter still, about 3% shorter than sp3 C-H. This trend is illustrated by the molecular geometry of ethane, ethylene and acetylene.
Reactions
The C−H bond in general is very strong, so it is relatively unreactive. In several compound classes, collectively called carbon acids, the C−H bond can be sufficiently acidic for proton removal. Unactivated C−H bonds are found in alkanes and are no
Document 2:::
Water (H2O) is a simple triatomic bent molecule with C2v molecular symmetry and a bond angle of 104.5° between the central oxygen atom and the hydrogen atoms. Despite being one of the simplest triatomic molecules, its chemical bonding scheme is nonetheless complex, as many of its bonding properties such as bond angle, ionization energy, and electronic state energy cannot be explained by one unified bonding model. Instead, several traditional and advanced bonding models such as the simple Lewis and VSEPR structures, valence bond theory, molecular orbital theory, isovalent hybridization, and Bent's rule are discussed below to provide a comprehensive bonding model for H2O, explaining and rationalizing the various electronic and physical properties and features manifested by its peculiar bonding arrangements.
Lewis structure and valence bond theory
The Lewis structure of H2O describes the bonds as two sigma bonds between the central oxygen atom and the two peripheral hydrogen atoms, with oxygen having two lone pairs of electrons. Valence bond theory suggests that H2O is sp3 hybridized, in which the 2s atomic orbital and the three 2p orbitals of oxygen are hybridized to form four new hybridized orbitals which then participate in bonding by overlapping with the hydrogen 1s orbitals. As such, the predicted shape and bond angle of sp3 hybridization are tetrahedral and 109.5°. This is in broad agreement with the true bond angle of 104.45°. The difference between the predicted bond angle and the measured bond angle is traditionally explained by the electron repulsion of the two lone pairs occupying two sp3 hybridized orbitals. While valence bond theory is suitable for predicting the geometry and bond angle of H2O, its prediction of electronic states does not agree with the experimentally measured reality. In the valence bond model, the two sigma bonds are of identical energy, and so are the two lone pairs, since they both reside in the same bonding and nonbonding orbitals, thus corresponding to two en
Document 3:::
A heteronuclear molecule is a molecule composed of atoms of more than one chemical element. For example, a molecule of water (H2O) is heteronuclear because it has atoms of two different elements, hydrogen (H) and oxygen (O).
Similarly, a heteronuclear ion is an ion that contains atoms of more than one chemical element. For example, the carbonate ion (CO3^2−) is heteronuclear because it has atoms of carbon (C) and oxygen (O). The lightest heteronuclear ion is the helium hydride ion (HeH+). This is in contrast to a homonuclear ion, which contains all the same kind of atom, such as the dihydrogen cation, or atomic ions that contain only one atom, such as the hydrogen anion (H−).
Document 4:::
Water (H2O) is a polar inorganic compound that is at room temperature a tasteless and odorless liquid, which is nearly colorless apart from an inherent hint of blue. It is by far the most studied chemical compound and is described as the "universal solvent" and the "solvent of life". It is the most abundant substance on the surface of Earth and the only common substance to exist as a solid, liquid, and gas on Earth's surface. It is also the third most abundant molecule in the universe (behind molecular hydrogen and carbon monoxide).
Water molecules form hydrogen bonds with each other and are strongly polar. This polarity allows it to dissociate ions in salts and bond to other polar substances such as alcohols and acids, thus dissolving them. Its hydrogen bonding causes its many unique properties, such as having a solid form less dense than its liquid form, a relatively high boiling point of 100 °C for its molar mass, and a high heat capacity.
Water is amphoteric, meaning that it can exhibit properties of an acid or a base, depending on the pH of the solution that it is in; it readily produces both H+ and OH− ions. Related to its amphoteric character, it undergoes self-ionization. The product of the activities, or approximately the concentrations, of H+ and OH− is a constant, so their respective concentrations are inversely proportional to each other.
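The inverse proportionality noted above follows directly from the ion product of water; at 25 °C its standard value (a textbook constant, not given in the excerpt itself) is:

```latex
K_w = [\mathrm{H^+}]\,[\mathrm{OH^-}] \approx 1.0 \times 10^{-14}
\quad\Longrightarrow\quad
[\mathrm{OH^-}] = \frac{K_w}{[\mathrm{H^+}]}
```

So, for example, increasing [H+] tenfold decreases [OH−] tenfold.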
Physical properties
Water is the chemical substance with chemical formula H2O; one molecule of water has two hydrogen atoms covalently bonded to a single oxygen atom. Water is a tasteless, odorless liquid at ambient temperature and pressure. Liquid water has weak absorption bands at wavelengths of around 750 nm which cause it to appear to have a blue color. This can easily be observed in a water-filled bath or wash-basin whose lining is white. Large ice crystals, as in glaciers, also appear blue.
Under standard conditions, water is primarily a liquid, unlike other analogous hydrides of the oxygen family, which are generally gaseou
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What atoms make up a water molecule?
A. aluminum and oxygen
B. hydrogen and oxygen
C. One Hydrogen
D. Sodium and Oxygen
Answer:
|
|
sciq-6040
|
multiple_choice
|
What structure of a cell is enclosed by a membrane and contains most of the cell’s dna?
|
[
"nucleus",
"vacuole",
"ribosome",
"epidermis"
] |
A
|
Relavent Documents:
Document 0:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, a double-stranded macromolecule that carries the hereditary information of the cell, is found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 1:::
Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure.
General characteristics
There are two types of cells: prokaryotes and eukaryotes.
Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than those of the later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles.
Prokaryotes
Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement).
Eukaryotes
Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str
Document 2:::
Cellular compartments in cell biology comprise all of the closed parts within the cytosol of a eukaryotic cell, usually surrounded by a single or double lipid layer membrane. These compartments are often, but not always, defined as membrane-bound organelles. The formation of cellular compartments is called compartmentalization.
Both organelles, the mitochondria and chloroplasts (in photosynthetic organisms), are compartments that are believed to be of endosymbiotic origin. Other compartments such as peroxisomes, lysosomes, the endoplasmic reticulum, the cell nucleus or the Golgi apparatus are not of endosymbiotic origin. Smaller elements like vesicles, and sometimes even microtubules can also be counted as compartments.
Compartmentalization was once thought not to occur in prokaryotic cells, but the discovery of carboxysomes and many other metabolosomes revealed that prokaryotic cells are capable of making compartmentalized structures, although in most cases these are not surrounded by a lipid bilayer but are built purely of protein.
Types
In general there are 4 main cellular compartments, they are:
The nuclear compartment comprising the nucleus
The intercisternal space which comprises the space between the membranes of the endoplasmic reticulum (which is continuous with the nuclear envelope)
Organelles (the mitochondrion in all eukaryotes and the plastid in phototrophic eukaryotes)
The cytosol
Function
Compartments have three main roles. One is to establish physical boundaries for biological processes, which enables the cell to carry out different metabolic activities at the same time. This may include keeping certain biomolecules within a region, or keeping other molecules outside. Within the membrane-bound compartments, different intracellular pH, different enzyme systems, and other differences are isolated from other organelles and cytosol. In cells with mitochondria, the cytosol has an oxidizing environment which converts NADH to NAD+. In these cases, the
Document 3:::
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'.
Cells can acquire specialized functions and carry out various tasks such as replication, DNA repair, protein synthesis, and motility. Cells are also capable of specialization and movement.
Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms.
The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology.
Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago.
Discovery
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the microscope, he was able to see pores. This was shocking at the time as i
Document 4:::
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
H2.00.05.2.00001: Striated muscle tissue
H2.00.06.0.00001: Nerve tissue
H2.00.06.1.00001: Neuron
H2.00.06.2.00001: Synapse
H2.00.06.2.00001: Neuroglia
h3.01: Bones
h3.02: Joints
h3.03: Muscles
h3.04: Alimentary system
h3.05: Respiratory system
h3.06: Urinary system
h3.07: Genital system
h3.08:
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What structure of a cell is enclosed by a membrane and contains most of the cell’s dna?
A. nucleus
B. vacuole
C. ribosome
D. epidermis
Answer:
|
|
sciq-2761
|
multiple_choice
|
What forms when a solute dissolves in a solvent?
|
[
"gas",
"solution",
"liquid",
"chemical"
] |
B
|
Relavent Documents:
Document 0:::
In chemistry, solubility is the ability of a substance, the solute, to form a solution with another substance, the solvent. Insolubility is the opposite property, the inability of the solute to form such a solution.
The extent of the solubility of a substance in a specific solvent is generally measured as the concentration of the solute in a saturated solution, one in which no more solute can be dissolved. At this point, the two substances are said to be at the solubility equilibrium. For some solutes and solvents, there may be no such limit, in which case the two substances are said to be "miscible in all proportions" (or just "miscible").
The solute can be a solid, a liquid, or a gas, while the solvent is usually solid or liquid. Both may be pure substances, or may themselves be solutions. Gases are always miscible in all proportions, except in very extreme situations, and a solid or liquid can be "dissolved" in a gas only by passing into the gaseous state first.
The solubility mainly depends on the composition of solute and solvent (including their pH and the presence of other dissolved substances) as well as on temperature and pressure. The dependency can often be explained in terms of interactions between the particles (atoms, molecules, or ions) of the two substances, and of thermodynamic concepts such as enthalpy and entropy.
Under certain conditions, the concentration of the solute can exceed its usual solubility limit. The result is a supersaturated solution, which is metastable and will rapidly exclude the excess solute if a suitable nucleation site appears.
The concept of solubility does not apply when there is an irreversible chemical reaction between the two substances, such as the reaction of calcium hydroxide with hydrochloric acid; even though one might say, informally, that one "dissolved" the other. The solubility is also not the same as the rate of solution, which is how fast a solid solute dissolves in a liquid solvent. This property de
Document 1:::
In physical chemistry, supersaturation occurs with a solution when the concentration of a solute exceeds the concentration specified by the value of solubility at equilibrium. Most commonly the term is applied to a solution of a solid in a liquid, but it can also be applied to liquids and gases dissolved in a liquid. A supersaturated solution is in a metastable state; it may return to equilibrium by separation of the excess of solute from the solution, by dilution of the solution by adding solvent, or by increasing the solubility of the solute in the solvent.
History
Early studies of the phenomenon were conducted with sodium sulfate, also known as Glauber's Salt because, unusually, the solubility of this salt in water may decrease with increasing temperature. Early studies have been summarised by Tomlinson. It was shown that the crystallization of a supersaturated solution does not simply come from its agitation (the previous belief), but from solid matter entering and acting as a "starting" site for crystals to form, now called "seeds". Expanding upon this, Gay-Lussac brought attention to the kinematics of salt ions and to the impact that the characteristics of the container have on the supersaturation state. He was also able to expand the number of salts with which a supersaturated solution can be obtained. Later, Henri Löwel came to the conclusion that both nuclei of the solution and the walls of the container have a catalyzing effect on the solution that causes crystallization. Explaining and providing a model for this phenomenon has been a task taken on by more recent research. Désiré Gernez contributed to this research by discovering that nuclei must be of the same salt that is being crystallized in order to promote crystallization.
Occurrence and examples
Solid precipitate, liquid solvent
A solution of a chemical compound in a liquid will become supersaturated when the temperature of the saturated solution is changed. In most cases solubility decreases wit
Document 2:::
Sorption is a physical and chemical process by which one substance becomes attached to another. Specific cases of sorption are treated in the following articles:
Absorption "the incorporation of a substance in one state into another of a different state" (e.g., liquids being absorbed by a solid or gases being absorbed by a liquid);
Adsorption The physical adherence or bonding of ions and molecules onto the surface of another phase (e.g., reagents adsorbed to a solid catalyst surface);
Ion exchange An exchange of ions between two electrolytes or between an electrolyte solution and a complex.
The reverse of sorption is desorption.
Sorption rate
The adsorption and absorption rate of a diluted solute in gas or liquid solution to a surface or interface can be calculated using Fick's laws of diffusion.
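For reference, Fick's first law in one dimension relates the diffusive flux J to the concentration gradient (standard form; D is the diffusion coefficient, c the solute concentration, and x the position):

```latex
J = -D \, \frac{\partial c}{\partial x}
```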
See also
Sorption isotherm
Document 3:::
A breakthrough curve in adsorption is the course of the effluent adsorptive concentration at the outlet of a fixed bed adsorber. Breakthrough curves are important for adsorptive separation technologies and for the characterization of porous materials.
Importance
Since almost all adsorptive separation processes are dynamic, meaning that they run under flow, the separation performance of porous materials for those applications has to be tested under flow as well. Since separation processes run with mixtures of different components, measuring several breakthrough curves yields thermodynamic mixture equilibria (mixture sorption isotherms) that are hardly accessible with static manometric sorption characterization. This enables the determination of sorption selectivities in the gaseous and liquid phases.
The determination of breakthrough curves is the foundation of many other processes, such as pressure swing adsorption: within that process, the loading step of one adsorber is equivalent to a breakthrough experiment.
Measurement
A fixed bed of porous materials (e.g. activated carbons and zeolites) is pressurized and purged with a carrier gas. Once the flow has become stationary, one or more adsorptives are added to the carrier gas, resulting in a step-wise change of the inlet concentration. This is in contrast to chromatographic separation processes, where pulse-wise changes of the inlet concentrations are used. The course of the adsorptive concentrations at the outlet of the fixed bed is monitored.
Results
Integration of the area above the entire breakthrough curve gives the maximum loading of the adsorbent material. Additionally, the duration of the breakthrough experiment until the outlet concentration of the adsorptive reaches a certain threshold can be measured, which enables the calculation of a technically usable sorption capacity. Up to this time, the quality of the product stream can be maintained. The shape of the breakthrough curves contains informat
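The integration step described above can be sketched numerically. In the following minimal Python example every value and variable name is hypothetical, and an idealized S-shaped outlet curve stands in for measured data:

```python
# Hedged sketch: estimate the maximum loading of a fixed bed by integrating the
# area above a breakthrough curve (all numbers below are invented for illustration).
import numpy as np

t = np.linspace(0.0, 600.0, 601)                    # time, s
c_in = 1.0                                          # inlet concentration, mol/m^3
c_out = c_in / (1.0 + np.exp(-(t - 300.0) / 30.0))  # idealized outlet curve
flow = 1e-5                                         # volumetric flow rate, m^3/s
m_adsorbent = 0.010                                 # adsorbent mass, kg

# Trapezoidal rule for the area above the curve: integral of (c_in - c_out) dt.
gap = c_in - c_out
area = float(np.sum((gap[:-1] + gap[1:]) / 2.0) * (t[1] - t[0]))  # mol*s/m^3

loading = flow * area / m_adsorbent  # moles adsorbed per kg of adsorbent
print(f"estimated maximum loading: {loading:.3f} mol/kg")  # ~0.300 mol/kg
```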
Document 4:::
Bitumen is an immensely viscous constituent of petroleum. Depending on its exact composition it can be a sticky, black liquid or an apparently solid mass that behaves as a liquid over very large time scales. In the U.S., the material is commonly referred to as asphalt. Whether found in natural deposits or refined from petroleum, the substance is classed as a pitch. Prior to the 20th century the term asphaltum was in general use. The word derives from the ancient Greek ἄσφαλτος ásphaltos, which referred to natural bitumen or pitch. The largest natural deposit of bitumen in the world, estimated to contain 10 million tons, is the Pitch Lake of southwest Trinidad.
About 70% of annual bitumen production is destined for road construction, its primary use. In this application bitumen is used to bind aggregate particles like gravel and forms a substance referred to as asphalt concrete, which is colloquially termed asphalt. Its other main uses lie in bituminous waterproofing products, such as roofing felt and roof sealant.
In material sciences and engineering the terms "asphalt" and "bitumen" are often used interchangeably and refer both to natural and manufactured forms of the substance, although there is regional variation as to which term is most common. Worldwide, geologists tend to favor the term "bitumen" for the naturally occurring material. For the manufactured material, which is a refined residue from the distillation process of selected crude oils, "bitumen" is the prevalent term in much of the world; however, in American English, "asphalt" is more commonly used. To help avoid confusion, the phrases "liquid asphalt", "asphalt binder", or "asphalt cement" are used in the U.S. Colloquially, various forms of asphalt are sometimes referred to as "tar", as in the name of the La Brea Tar Pits.
Naturally occurring bitumen is sometimes specified by the term "crude bitumen". Its viscosity is similar to that of cold molasses while the material obtained from the fractional di
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What forms when a solute dissolves in a solvent?
A. gas
B. solution
C. liquid
D. chemical
Answer:
|
|
sciq-10581
|
multiple_choice
|
The short length of what in women is the best explanation for the greater incidence of UTI in women?
|
[
"uterus",
"fallopian tube",
"urethra",
"vagina"
] |
C
|
Relavent Documents:
Document 0:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 1:::
The visual analogue scale (VAS) is a psychometric response scale that can be used in questionnaires. It is a measurement instrument for subjective characteristics or attitudes that cannot be directly measured. When responding to a VAS item, respondents specify their level of agreement to a statement by indicating a position along a continuous line between two end points.
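For illustration, a minimal sketch (not from the source) of how a paper VAS response is typically scored: measure the mark's distance from the left anchor and normalize by the line length. The 100 mm line and the 0-100 score range are assumptions.

def vas_score(mark_mm, line_length_mm=100.0):
    # Convert a mark position on the line to a 0-100 score by
    # linear normalization; reject marks that fall off the line.
    if not 0.0 <= mark_mm <= line_length_mm:
        raise ValueError("mark must lie on the line")
    return 100.0 * mark_mm / line_length_mm

print(vas_score(63.0))  # a mark 63 mm from the left anchor scores 63.0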
Comparison to other scales
This continuous (or "analogue") aspect of the scale differentiates it from discrete scales such as the Likert scale. There is evidence showing that visual analogue scales have superior metrical characteristics compared with discrete scales, so a wider range of statistical methods can be applied to the measurements.
The VAS can be compared to other linear scales such as the Likert scale or Borg scale. The sensitivity and reproducibility of the results are broadly very similar, although the VAS may outperform the other scales in some cases. These advantages extend to measurement instruments made up from combinations of visual analogue scales, such as semantic differentials.
Uses
Recent advances in methodologies for Internet-based research include the development and evaluation of visual analogue scales for use in Internet-based questionnaires. One electronic version of the VAS that employs a 10 cm scale and various customizations is available on the Apple Store for use in research and workplace settings.
VAS is the most common pain scale for quantification of endometriosis-related pain and skin graft donor site-related pain. A review came to the conclusion that VAS and numerical rating scale (NRS) were the best adapted pain scales for pain measurement in endometriosis. For research purposes, and for more detailed pain measurement in clinical practice, the review suggested use of VAS or NRS for each type of typical pain related to endometriosis (dysmenorrhea, deep dyspareunia and non-menstrual chronic pelvic pain), combined with the clinical global impression (CGI) and a qualit
Document 2:::
Single Best Answer (SBA or One Best Answer) is a written examination form of multiple choice questions used extensively in medical education.
Structure
A single question is posed with typically five alternate answers, from which the candidate must choose the best answer. This method avoids the problems of past examinations of a similar form described as Single Correct Answer. The older form can produce confusion where more than one of the possible answers has some validity. The newer form makes it explicit that more than one answer may have elements that are correct, but that one answer will be superior.
Prior to the widespread introduction of SBAs into medical education, the typical form of examination was the true-false multiple choice question. During the 2000s, however, educators found SBAs to be superior.
Document 3:::
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research.
Americas
Human Biology major at Stanford University, Palo Alto (since 1970)
Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major, and alumni have gone on to post-graduate education, medical school, law, business and government.
Human and Social Biology (Caribbean)
Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC), which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on the structure and functioning (anatomy, physiology, biochemistry) of the human body and their relevance to human health, with Caribbean-specific experience. The syllabus is organized under five main sections: living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, and the impact of human activities on the environment.
Human Biology Program at University of Toronto
The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia
BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002)
BSc (honours) Human Biology at AIIMS (New
Document 4:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The short length of what in women is the best explanation for the greater incidence of UTI in women?
A. uterus
B. fallopian tube
C. urethra
D. vagina
Answer:
|
|
sciq-7833
|
multiple_choice
|
What is the air that remains after a forced exhalation called?
|
[
"kinetic volume",
"abundant volume",
"residual volume",
"remaining volume"
] |
C
|
Relavent Documents:
Document 0:::
Exhalation (or expiration) is the flow of the breath out of an organism. In animals, it is the movement of air from the lungs out of the airways, to the external environment during breathing.
This happens due to elastic properties of the lungs, as well as the internal intercostal muscles which lower the rib cage and decrease thoracic volume. As the thoracic diaphragm relaxes during exhalation it causes the tissue it has depressed to rise superiorly and put pressure on the lungs to expel the air. During forced exhalation, as when blowing out a candle, expiratory muscles including the abdominal muscles and internal intercostal muscles generate abdominal and thoracic pressure, which forces air out of the lungs.
Exhaled air is 4% carbon dioxide, a waste product of cellular respiration during the production of energy, which is stored as ATP. Exhalation has a complementary relationship to inhalation which together make up the respiratory cycle of a breath.
Exhalation and gas exchange
The main reason for exhalation is to rid the body of carbon dioxide, which is the waste product of gas exchange in humans. Air is brought into the body through inhalation. During this process air is taken in by the lungs. Diffusion in the alveoli allows for the exchange of O2 into the pulmonary capillaries and the removal of CO2 and other gases from the pulmonary capillaries to be exhaled. In order for the lungs to expel air the diaphragm relaxes, which pushes up on the lungs. The air then flows through the trachea then through the larynx and pharynx to the nasal cavity and oral cavity where it is expelled out of the body. Exhalation takes longer than inhalation and it is believed to facilitate better exchange of gases. Parts of the nervous system help to regulate respiration in humans. The exhaled air is not just carbon dioxide; it contains a mixture of other gases. Human breath contains volatile organic compounds (VOCs). These compounds consist of methanol, isoprene, acetone,
Document 1:::
Speech science refers to the study of production, transmission and perception of speech. Speech science involves anatomy, in particular the anatomy of the oro-facial region and neuroanatomy, physiology, and acoustics.
Speech production
The production of speech is a highly complex motor task that involves approximately 100 orofacial, laryngeal, pharyngeal, and respiratory muscles. Precise and expeditious timing of these muscles is essential for the production of temporally complex speech sounds, which are characterized by transitions as short as 10 ms between frequency bands and an average speaking rate of approximately 15 sounds per second. Speech production requires airflow from the lungs (respiration) to be phonated through the vocal folds of the larynx (phonation) and resonated in the vocal cavities shaped by the jaw, soft palate, lips, tongue and other articulators (articulation).
Respiration
Respiration is the physical process of gas exchange between an organism and its environment involving four steps (ventilation, distribution, perfusion and diffusion) and two processes (inspiration and expiration). Respiration can be described as the mechanical process of air flowing into and out of the lungs on the principle of Boyle's law, stating that, as the volume of a container increases, the air pressure will decrease. This relatively negative pressure will cause air to enter the container until the pressure is equalized. During inspiration of air, the diaphragm contracts and the lungs expand drawn by pleurae through surface tension and negative pressure. When the lungs expand, air pressure becomes negative compared to atmospheric pressure and air will flow from the area of higher pressure to fill the lungs. Forced inspiration for speech uses accessory muscles to elevate the rib cage and enlarge the thoracic cavity in the vertical and lateral dimensions. During forced expiration for speech, muscles of the trunk and abdomen reduce the size of the thoracic cavity by
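A minimal numeric sketch of the Boyle's-law relation invoked above (the initial pressure and the volumes are assumed example values):

# Boyle's law at constant temperature: P1*V1 = P2*V2.
p1_kpa, v1_l = 101.3, 4.0   # assumed initial pressure and lung volume
v2_l = 4.4                  # assumed volume after the thoracic cavity expands
p2_kpa = p1_kpa * v1_l / v2_l
print(p2_kpa)  # ~92.1 kPa: pressure falls as volume increases, so air flows in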
Document 2:::
Vital capacity (VC) is the maximum amount of air a person can expel from the lungs after a maximum inhalation. It is equal to the sum of inspiratory reserve volume, tidal volume, and expiratory reserve volume. It is approximately equal to Forced Vital Capacity (FVC).
A person's vital capacity can be measured by a wet or regular spirometer. In combination with other physiological measurements, the vital capacity can help make a diagnosis of underlying lung disease. Furthermore, the vital capacity is used to determine the severity of respiratory muscle involvement in neuromuscular disease, and can guide treatment decisions in Guillain–Barré syndrome and myasthenic crisis.
A normal adult has a vital capacity between 3 and 5 litres. A human's vital capacity depends on age, sex, height, mass, and possibly ethnicity. However, the dependence on ethnicity is poorly understood or defined, as it was first established by studying black slaves in the 19th century and may be the result of conflation with environmental factors.
Lung volumes and lung capacities refer to the volume of air associated with different phases of the respiratory cycle. Lung volumes are directly measured, whereas lung capacities are inferred from volumes.
Role in diagnosis
The vital capacity can be used to help differentiate causes of lung disease. In restrictive lung disease the vital capacity is decreased. In obstructive lung disease it is usually normal or only slightly decreased.
Estimated vital capacities
Formulas
Vital capacity increases with height and decreases with age. Formulas to estimate vital capacity are:

$VC \approx (27.63 - 0.112\,a)\,h$ for men and $VC \approx (21.78 - 0.101\,a)\,h$ for women,

where $VC$ is the approximate vital capacity in cm³, $a$ is age in years, and $h$ is height in cm.
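As a sketch only (the coefficients above are reconstructed and should be checked against a reference), the estimate can be computed directly:

def estimated_vc_cm3(age_years, height_cm, male=True):
    # Linear estimate of vital capacity from the formulas above.
    if male:
        return (27.63 - 0.112 * age_years) * height_cm
    return (21.78 - 0.101 * age_years) * height_cm

print(estimated_vc_cm3(30, 175))  # about 4247 cm^3 for a 30-year-old, 175 cm man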
Document 3:::
In physiology, respiration is the movement of oxygen from the outside environment to the cells within tissues, and the removal of carbon dioxide in the opposite direction, out to the environment.
The physiological definition of respiration differs from the biochemical definition, which refers to a metabolic process by which an organism obtains energy (in the form of ATP and NADPH) by oxidizing nutrients and releasing waste products. Although physiologic respiration is necessary to sustain cellular respiration and thus life in animals, the processes are distinct: cellular respiration takes place in individual cells of the organism, while physiologic respiration concerns the diffusion and transport of metabolites between the organism and the external environment.
Gas exchange in the lung occurs by ventilation and perfusion. Ventilation refers to the movement of air into and out of the lungs, and perfusion is the circulation of blood in the pulmonary capillaries. In mammals, physiological respiration involves respiratory cycles of inhaled and exhaled breaths. Inhalation (breathing in) is usually an active movement that brings air into the lungs, where gas exchange takes place between the air in the alveoli and the blood in the pulmonary capillaries. Contraction of the diaphragm muscle causes a pressure variation, which is equal to the pressures caused by the elastic, resistive and inertial components of the respiratory system. In contrast, exhalation (breathing out) is usually a passive process, though there are many exceptions: when generating functional overpressure (speaking, singing, humming, laughing, blowing, snorting, sneezing, coughing, powerlifting); when exhaling underwater (swimming, diving); at high levels of physiological exertion (running, climbing, throwing) where more rapid gas exchange is necessitated; or in some forms of breath-controlled meditation. Speaking and singing in humans requires sustained breath control that many mammals are not
Document 4:::
Functional residual capacity (FRC) is the volume of air present in the lungs at the end of passive expiration. At FRC, the opposing elastic recoil forces of the lungs and chest wall are in equilibrium and there is no exertion by the diaphragm or other respiratory muscles.
Measurement
FRC is the sum of expiratory reserve volume (ERV) and residual volume (RV) and measures approximately 3000 mL in a 70 kg, average-sized male. It cannot be estimated through spirometry, since it includes the residual volume. In order to measure RV precisely, one would need to perform a test such as nitrogen washout, helium dilution or body plethysmography.
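A minimal numeric sketch of the identity FRC = ERV + RV (the component volumes below are assumed, illustrative values):

erv_ml = 1200  # assumed expiratory reserve volume, mL
rv_ml = 1800   # assumed residual volume, mL
frc_ml = erv_ml + rv_ml
print(frc_ml)  # 3000 mL, matching the typical value quoted above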
Positioning plays a significant role in altering FRC. It is highest when in an upright position and decreases as one moves from upright to supine/prone or Trendelenburg position. The greatest decrease in FRC occurs when going from 60° to totally supine at 0°. There is no significant change in FRC as position changes from 0° to Trendelenburg of up to −30°. However, beyond −30°, the drop in FRC is considerable.
Clinical significance
A lowered or elevated FRC is often an indication of some form of respiratory disease. In restrictive diseases, the decreased total lung capacity leads to a lower FRC. In turn in obstructive diseases, the FRC is increased.
For instance, in emphysema, FRC is increased, because the lungs are more compliant and the equilibrium between the inward recoil of the lungs and outward recoil of the chest wall is disturbed. As such, patients with emphysema often have noticeably broader chests due to the relatively unopposed outward recoil of the chest wall. Total lung capacity also increases, largely as a result of increased functional residual capacity.
Obese and pregnant patients will have a lower FRC in the supine position due to the added tissue weight opposing the outward recoil of the chest wall thus reducing chest wall compliance. In pregnancy, this starts at about the fifth month and reaches 10-20% decrease a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the air that remains after a forced exhalation called?
A. kinetic volume
B. abundant volume
C. residual volume
D. remaining volume
Answer:
|
|
sciq-1304
|
multiple_choice
|
Two important types of energy that can be converted to one another include potential and what?
|
[
"thermal",
"physical",
"kinetic",
"magnetic"
] |
C
|
Relavent Documents:
Document 0:::
Energy transformation, also known as energy conversion, is the process of changing energy from one form to another. In physics, energy is a quantity that provides the capacity to perform work (e.g. lifting an object) or to provide heat. In addition to being converted, according to the law of conservation of energy, energy is transferable to a different location or object, but it cannot be created or destroyed.
The energy in many of its forms may be used in natural processes, or to provide some service to society such as heating, refrigeration, lighting or performing mechanical work to operate machines. For example, to heat a home, the furnace burns fuel, whose chemical potential energy is converted into thermal energy, which is then transferred to the home's air to raise its temperature.
Limitations in the conversion of thermal energy
Conversions to thermal energy from other forms of energy may occur with 100% efficiency. Conversion among non-thermal forms of energy may occur with fairly high efficiency, though there is always some energy dissipated thermally due to friction and similar processes. Sometimes the efficiency is close to 100%, such as when potential energy is converted to kinetic energy as an object falls in a vacuum. This also applies to the opposite case; for example, an object in an elliptical orbit around another body converts its kinetic energy (speed) into gravitational potential energy (distance from the other object) as it moves away from its parent body. When it reaches the furthest point, it will reverse the process, accelerating and converting potential energy into kinetic. Since space is a near-vacuum, this process has close to 100% efficiency.
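A worked sketch of the near-lossless potential-to-kinetic conversion described above, for an object falling from rest in vacuum (the 10 m drop height is an assumed example value):

import math

def impact_speed(height_m, g=9.81):
    # Energy balance m*g*h = 0.5*m*v**2 gives v = sqrt(2*g*h),
    # independent of mass, since no energy is lost to friction.
    return math.sqrt(2.0 * g * height_m)

print(impact_speed(10.0))  # about 14.0 m/s after a 10 m drop in vacuum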
Thermal energy is unique because in most cases it cannot be converted to other forms of energy. Only a difference in the density of thermal/heat energy (temperature) can be used to perform work, and the efficiency of this conversion will be (much) less than 100%. This is because t
Document 1:::
In electrical engineering, an electric machine is a general term for machines using electromagnetic forces, such as electric motors, electric generators, and others. They are electromechanical energy converters: an electric motor converts electricity to mechanical power while an electric generator converts mechanical power to electricity. The moving parts in a machine can be rotating (rotating machines) or linear (linear machines). Besides motors and generators, a third category often included is transformers, which, although they do not have any moving parts, are also energy converters, changing the voltage level of an alternating current.
Electric machines, in the form of synchronous and induction generators, produce about 95% of all electric power on Earth (as of early 2020s), and in the form of electric motors consume approximately 60% of all electric power produced. Electric machines were developed beginning in the mid 19th century and since that time have been a ubiquitous component of the infrastructure. Developing more efficient electric machine technology is crucial to any global conservation, green energy, or alternative energy strategy.
Generator
An electric generator is a device that converts mechanical energy to electrical energy. A generator forces electrons to flow through an external electrical circuit. It is somewhat analogous to a water pump, which creates a flow of water but does not create the water inside. The source of mechanical energy, the prime mover, may be a reciprocating or turbine steam engine, water falling through a turbine or waterwheel, an internal combustion engine, a wind turbine, a hand crank, compressed air or any other source of mechanical energy.
The two main parts of an electrical machine can be described in either mechanical or electrical terms. In mechanical terms, the rotor is the rotating part, and the stator is the stationary part of an electrical machine. In electrical terms, the armature is the power-producing compo
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
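For reference, a short derivation of the expected answer ("decreases"), assuming a reversible adiabatic expansion in which the gas does work on its surroundings:

\[
TV^{\gamma-1} = \text{const}, \qquad \gamma = c_p/c_v > 1
\quad\Longrightarrow\quad
\frac{T_2}{T_1} = \left(\frac{V_1}{V_2}\right)^{\gamma-1} < 1 \ \text{ for } V_2 > V_1,
\]

so the temperature decreases as the gas expands.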
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
This is a list of topics that are included in high school physics curricula or textbooks.
Mathematical Background
SI Units
Scalar (physics)
Euclidean vector
Motion graphs and derivatives
Pythagorean theorem
Trigonometry
Motion and forces
Motion
Force
Linear motion
Displacement
Speed
Velocity
Acceleration
Center of mass
Mass
Momentum
Newton's laws of motion
Work (physics)
Free body diagram
Rotational motion
Angular momentum (Introduction)
Angular velocity
Centrifugal force
Centripetal force
Circular motion
Tangential velocity
Torque
Conservation of energy and momentum
Energy
Conservation of energy
Elastic collision
Inelastic collision
Inertia
Moment of inertia
Momentum
Kinetic energy
Potential energy
Rotational energy
Electricity and magnetism
Ampère's circuital law
Capacitor
Coulomb's law
Diode
Direct current
Electric charge
Electric current
Alternating current
Electric field
Electric potential energy
Electron
Faraday's law of induction
Ion
Inductor
Joule heating
Lenz's law
Magnetic field
Ohm's law
Resistor
Transistor
Transformer
Voltage
Heat
Entropy
First law of thermodynamics
Heat
Heat transfer
Second law of thermodynamics
Temperature
Thermal energy
Thermodynamic cycle
Volume (thermodynamics)
Work (thermodynamics)
Waves
Wave
Longitudinal wave
Transverse wave
Standing Waves
Wavelength
Frequency
Light
Light ray
Speed of light
Sound
Speed of sound
Radio waves
Harmonic oscillator
Hooke's law
Reflection
Refraction
Snell's law
Refractive index
Total internal reflection
Diffraction
Interference (wave propagation)
Polarization (waves)
Vibrating string
Doppler effect
Gravity
Gravitational potential
Newton's law of universal gravitation
Newtonian constant of gravitation
See also
Outline of physics
Physics education
Document 4:::
In physics, energy density is the amount of energy stored in a given system or region of space per unit volume. It is sometimes confused with energy per unit mass, which is properly called specific energy or gravimetric energy density.
Often only the useful or extractable energy is measured, which is to say that inaccessible energy (such as rest mass energy) is ignored. In cosmological and other general relativistic contexts, however, the energy densities considered are those that correspond to the elements of the stress-energy tensor and therefore do include mass energy as well as energy densities associated with pressure.
Energy per unit volume has the same physical units as pressure and in many situations is synonymous. For example, the energy density of a magnetic field may be expressed as $u = B^2/(2\mu_0)$ and behaves like a physical pressure. Likewise, the energy required to compress a gas to a certain volume may be determined by multiplying the difference between the gas pressure and the external pressure by the change in volume. A pressure gradient describes the potential to perform work on the surroundings by converting internal energy to work until equilibrium is reached.
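A minimal numeric sketch of the magnetic-field energy density expression above (the 1 T field strength is an example value):

import math

MU_0 = 4.0 * math.pi * 1e-7  # vacuum permeability, T*m/A

def magnetic_energy_density(b_tesla):
    # u = B^2 / (2*mu_0), in J/m^3 (the same units as pascals).
    return b_tesla**2 / (2.0 * MU_0)

print(magnetic_energy_density(1.0))  # ~3.98e5 J/m^3 for a 1 T field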
Overview
There are different types of energy stored in materials, and it takes a particular type of reaction to release each type of energy. In order of the typical magnitude of the energy released, these types of reactions are: nuclear, chemical, electrochemical, and electrical.
Nuclear reactions take place in stars and nuclear power plants, both of which derive energy from the binding energy of nuclei. Chemical reactions are used by organisms to derive energy from food and by automobiles to derive energy from gasoline. Liquid hydrocarbons (fuels such as gasoline, diesel and kerosene) are today the densest way known to economically store and transport chemical energy at a large scale (1 kg of diesel fuel burns with the oxygen contained in ≈15 kg of air). Electrochemical reactions are used by most mobile devices such as laptop
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Two important types of energy that can be converted to one another include potential and what?
A. thermal
B. physical
C. kinetic
D. magnetic
Answer:
|
|
sciq-2406
|
multiple_choice
|
Geologists found that the youngest rocks on the seafloor were where?
|
[
"seabed floor",
"mid - ocean ridges",
"late - ocean ridges",
"early - ocean ridges"
] |
B
|
Relavent Documents:
Document 0:::
The Vine–Matthews–Morley hypothesis, also known as the Morley–Vine–Matthews hypothesis, was the first key scientific test of the seafloor spreading theory of continental drift and plate tectonics. Its key impact was that it allowed the rates of plate motions at mid-ocean ridges to be computed. It states that the Earth's oceanic crust acts as a recorder of reversals in the geomagnetic field direction as seafloor spreading takes place.
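A minimal sketch of the rate computation the hypothesis enables: divide the distance from the ridge axis to a dated magnetic anomaly by the age of the corresponding geomagnetic reversal. The distance and age below are assumed example values, not measurements from the source.

def half_spreading_rate_cm_per_yr(distance_km, reversal_age_ma):
    # Distance from the ridge axis to a dated anomaly, divided by
    # the age of the reversal recorded there.
    return (distance_km * 1.0e5) / (reversal_age_ma * 1.0e6)

print(half_spreading_rate_cm_per_yr(78.0, 2.6))  # 3.0 cm/yr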
History
Harry Hess proposed the seafloor spreading hypothesis in 1960 (published in 1962); the term "spreading of the seafloor" was introduced by geophysicist Robert S. Dietz in 1961. According to Hess, the seafloor was created at mid-oceanic ridges by the convection of the Earth's mantle, pushing and spreading the older crust away from the ridge. Geophysicist Frederick John Vine and the Canadian geologist Lawrence W. Morley independently realized that if Hess's seafloor spreading theory was correct, then the rocks surrounding the mid-oceanic ridges should show symmetric patterns of magnetization reversals in newly collected magnetic surveys. Both of Morley's letters, to Nature (February 1963) and the Journal of Geophysical Research (April 1963), were rejected; hence Vine and his PhD adviser at Cambridge University, Drummond Hoyle Matthews, were the first to publish the theory in September 1963. Some colleagues were skeptical of the hypothesis because of the numerous assumptions made (seafloor spreading, geomagnetic reversals, and remanent magnetism), all hypotheses that were still not widely accepted. The Vine–Matthews–Morley hypothesis describes the magnetic reversals of oceanic crust. Further evidence for this hypothesis came from Allan V. Cox and colleagues (1964) when they measured the remanent magnetization of lavas from land sites. Walter C. Pitman and J. R. Heirtzler offered further evidence with a remarkably symmetric magnetic anomaly profile from the Pacific-Antarctic Ridge.
Marine magnetic anomalies
The Vine–Matthews-Morley hypothesis
Document 1:::
Akilia Island is an island in southwestern Greenland, about 22 kilometers south of Nuuk. Akilia is the location of a rock formation that has been proposed to contain the oldest known sedimentary rocks on Earth, and perhaps the oldest evidence of life on Earth.
Geology
The rocks in question are part of a metamorphosed supracrustal sequence located at the south-western tip of the island. The sequence has been dated as no younger than 3.85 billion years old - that is, in the Hadean eon - based on the age of an igneous band that cuts the rock. The supracrustal sequence contains layers rich in iron and silica, which are variously interpreted as banded iron formation, chemical sediments from submarine hot springs, or hydrothermal vein deposits. Carbon in the rock, present as graphite, shows low levels of carbon-13, which may suggest an origin as isotopically light organic matter derived from living organisms.
However, this interpretation is complicated because of high-grade metamorphism that affected the Akilia rocks after their formation. The sedimentary origin, age and the carbon content of the rocks have been questioned.
If the Akilia rocks do show evidence of life by 3.85 Ga, it would challenge models which suggest that Earth would not be hospitable to life at this time.
See also
List of islands of Greenland
Origin of life
Document 2:::
The Index to Marine & Lacustrine Geological Samples is a collaboration between multiple institutions and agencies that operate geological sample repositories. The purpose of the database is to help researchers locate sea floor and lakebed cores, grabs, dredges, and drill samples in their collections.
Sample material is available from participating institutions unless noted as unavailable.
Data include basic collection and storage information. Lithology, texture, age, principal investigator, province, weathering/metamorphism, glass remarks, and descriptive comments are included for some samples. Links are provided to related data and information at the institutions and at NCEI.
Data are coded by individual institutions, several of which receive funding from the US National Science Foundation. For more information see the NSF Division of Ocean Sciences Data and Sample Policy.
The Index is endorsed by the Intergovernmental Oceanographic Commission, Committee on International Oceanographic Data and Information Exchange (IODE-XIV.2).
The index is maintained by the National Centers for Environmental Information (NCEI), formerly the National Geophysical Data Center (NGDC), and collocated World Data Center for Geophysics, Boulder, Colorado. NCEI is part of the National Environmental Satellite, Data and Information Service of the National Oceanic & Atmospheric Administration, U. S. Department of Commerce.
Searches and data downloads are available via a JSP and an ArcIMS interface. Data selections can be downloaded in tab-delimited or shapefile form, depending on the interface used. Both WMS and WFS interfaces are also available.
The Index was created in 1977 in response to a meeting of Curators of Marine Geological Samples, sponsored by the U.S. National Science Foundation. The Curators' group continues to meet every 2–3 years.
Dataset Digital Object Identifier
DOI:10.7289/V5H41PB8
Web site
The Index to Marine and Lacustrine Geological Samples
Participating Ins
Document 3:::
Tollmann's bolide hypothesis is a hypothesis presented by Austrian palaeontologist Edith Kristan-Tollmann and geologist Alexander Tollmann in 1994. The hypothesis postulates that one or several bolides (asteroids or comets) struck the Earth around 7640 ± 200 years BCE, and a much smaller one approximately 3150 ± 200 BCE. The hypothesis tries to explain early Holocene extinctions and possibly legends of the Universal Deluge.
The claimed evidence for the event includes stratigraphic studies of tektites, dendrochronology, and ice cores (from Camp Century, Greenland) containing hydrochloric acid and sulfuric acid (indicating an energetic ocean strike) as well as nitric acids (caused by extreme heating of air).
Christopher Knight and Robert Lomas in their book, Uriel's Machine, argue that the 7640 BCE evidence is consistent with the dates of formation of a number of extant salt flats and lakes in dry areas of North America and Asia. They argue that these lakes are the remains of multiple-kilometer-high waves that penetrated deeply into continents as the result of oceanic strikes that they proposed occurred. Research by Quaternary geologists, palynologists, and others has been unable to confirm the validity of the hypothesis and proposes more frequently occurring geological processes for some of the data used for the hypothesis. The dating of ice cores and Australasian tektites has shown long time span differences between the proposed impact times and the impact ejecta products.
Scientific evaluation
Quaternary geologists, paleoclimatologists, and planetary geologists specialising in meteorite and comet impacts have rejected Tollmann's bolide hypothesis. They reject this hypothesis because:
The evidence offered to support the hypothesis can more readily be explained by more mundane and less dramatic geologic processes
Many of the events alleged to be associated with this impact occurred at the wrong time (i.e., many of the events occurred hundreds to thousands of y
Document 4:::
Blood Falls is an outflow of an iron oxide–tainted plume of saltwater, flowing from the tongue of Taylor Glacier onto the ice-covered surface of West Lake Bonney in the Taylor Valley of the McMurdo Dry Valleys in Victoria Land, East Antarctica.
Iron-rich hypersaline water sporadically emerges from small fissures in the ice cascades. The saltwater source is a subglacial pool of unknown size overlain by about of ice several kilometers from its tiny outlet at Blood Falls.
The reddish deposit was found in 1911 by the Australian geologist Thomas Griffith Taylor, who first explored the valley that bears his name. The Antarctic pioneers first attributed the red color to red algae, but it was later proven to be due to iron oxides.
Geochemistry
Poorly soluble hydrous ferric oxides are deposited at the surface of ice after the ferrous ions present in the unfrozen saltwater are oxidized in contact with atmospheric oxygen. The more soluble ferrous ions initially are dissolved in old seawater trapped in an ancient pocket remaining from the Antarctic Ocean when a fjord was isolated by the glacier in its progression during the Miocene period, some 5 million years ago, when the sea level was higher than today.
Unlike most Antarctic glaciers, the Taylor Glacier is not frozen to the bedrock, probably because of the presence of salts concentrated by the crystallization of the ancient seawater imprisoned below it. Salt cryo-concentration occurred in the deep relict seawater when pure ice crystallized and expelled its dissolved salts as it cooled down because of the heat exchange of the captive liquid seawater with the enormous ice mass of the glacier. As a consequence, the trapped seawater was concentrated in brines with a salinity two to three times that of the mean ocean water. A second mechanism sometimes also explaining the formation of hypersaline brines is the water evaporation of surface lakes directly exposed to the very dry polar atmosphere in the McMurdo Dry Valleys. Th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Geologists found that the youngest rocks on the seafloor were where?
A. seabed floor
B. mid - ocean ridges
C. late - ocean ridges
D. early - ocean ridges
Answer:
|
|
sciq-1454
|
multiple_choice
|
What term simply means sensitive beyond normal levels of activation?
|
[
"isosensitivity",
"monosensitivity",
"hyposensitivity",
"hypersensitivity"
] |
D
|
Relavent Documents:
Document 0:::
In immunology, the term sensitization is used for the following concepts:
Immunization by inducing an adaptive response in the immune system. In this sense, sensitization is the term more often used for the induction of allergic responses.
To bind antibodies to cells such as erythrocytes in advance of performing an immunological test such as a complement-fixation test or a Coombs test. The antibodies are bound to the cells in their Fab regions in the preparation.
To bind antibodies or soluble antigens chemically or by adsorption to appropriate biological entities such as erythrocytes or particles made of gelatin or latex for passive aggregation tests.
Those particles themselves are biologically inactive except for serving as antigens against the primary antibodies or as carriers of the antigens. When antibodies are used in the preparation, they are bound to the erythrocytes or particles at their Fab regions. Thus the step that follows requires secondary antibodies against those primary antibodies; that is, the secondary antibodies must have binding specificity to the primary antibodies, including to their Fc regions.
Document 1:::
Hypersensitivity (also called hypersensitivity reaction or intolerance) is an abnormal physiological condition in which there is an undesirable and adverse immune response to antigen. It is an abnormality in the immune system that causes immune diseases including allergies and autoimmunity. It is caused by many types of particles and substances from the external environment or from within the body that are recognized by the immune cells as antigens. The immune reactions are usually referred to as an over-reaction of the immune system and they are often damaging and uncomfortable.
In 1963, Philip George Houthem Gell and Robin Coombs introduced a systematic classification of the different types of hypersensitivity based on the types of antigens and immune responses involved. According to this system, known as the Gell and Coombs classification, there are four types of hypersensitivity: type I, an IgE-mediated immediate reaction; type II, an antibody-mediated reaction mainly involving IgG or IgM; type III, an immune complex-mediated reaction involving IgG, the complement system and phagocytes; and type IV, a cytotoxic, cell-mediated, delayed hypersensitivity reaction involving T cells.
The first three types are considered immediate hypersensitivity reactions because they occur within 24 hours. The fourth type is considered a delayed hypersensitivity reaction because it usually occurs more than 12 hours after exposure to the allergen, with a maximal reaction time between 48 and 72 hours. Hypersensitivity is a common occurrence: it is estimated that about 15% of humans experience at least one type during their lives, and prevalence has increased since the latter half of the 20th century.
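The four-type scheme above reads naturally as a lookup table; a minimal sketch, with labels condensed from the preceding paragraphs:

# Gell and Coombs hypersensitivity types, condensed from the text above.
GELL_COOMBS = {
    "I": "IgE-mediated immediate reaction",
    "II": "antibody-mediated, mainly IgG or IgM",
    "III": "immune complex-mediated: IgG, complement, phagocytes",
    "IV": "delayed, cell-mediated, involving T cells",
}
print(GELL_COOMBS["IV"])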
Gell and Coombs classification
The Gell and Coombs classification of hypersensitivity is the most widely used, and distinguishes four types of immune response which result in bystander tissue damage.
Type I hypersensitivity
Etiology
Type I hypersensi
Document 2:::
Irritation, in biology and physiology, is a state of inflammation or painful reaction to allergy or cell-lining damage. A stimulus or agent which induces the state of irritation is an irritant. Irritants are typically thought of as chemical agents (for example phenol and capsaicin) but mechanical, thermal (heat), and radiative stimuli (for example ultraviolet light or ionising radiations) can also be irritants. Irritation also has non-clinical usages referring to bothersome physical or psychological pain or discomfort.
Irritation can also be induced by some allergic responses due to exposure to some allergens, for example contact dermatitis, irritation of mucosal membranes and pruritus. Mucosal membranes are the most common site of irritation because they contain secretory glands that release mucus, which attracts allergens due to its sticky nature.
Chronic irritation is a medical term signifying that afflictive health conditions have been present for a while. There are many disorders that can cause chronic irritation, the majority involve the skin, vagina, eyes and lungs.
Irritation in organisms
In higher organisms, an allergic response may be the cause of irritation. An allergen is defined distinctly from an irritant, however, as allergy requires a specific interaction with the immune system and is thus dependent on the (possibly unique) sensitivity of the organism involved while an irritant, classically, acts in a non-specific manner.
It is a form of stress, but conversely, if one is stressed by unrelated matters, mild imperfections can cause more irritation than usual: one is irritable; see also sensitivity (human).
In more basic organisms, the status of pain is the perception of being stimulated, which is not observable although it may be shared (see gate control theory of pain).
It is not proven that oysters can feel pain, but it is known that they react to irritants. When an irritating object becomes trapped within an oyster's shell, it deposits laye
Document 3:::
The adequate stimulus is a property of a sensory receptor that determines the type of energy to which the receptor responds with the initiation of sensory transduction. Sensory receptors are specialized to respond to certain types of stimuli. The adequate stimulus is the amount and type of energy required to stimulate a specific sensory organ.
Many of the sensory stimuli are categorized by the mechanics by which they function and by their purpose. Sensory receptors present within the body typically respond to a single type of stimulus, and a certain amount of that stimulus is required to trigger them. These receptors allow the brain to interpret signals from the body, letting a person respond to a stimulus once it reaches the minimum threshold needed to signal the brain. The sensory receptors activate the sensory transduction system, which in turn sends an electrical or chemical stimulus to a cell, and the cell then responds with electrical signals to the brain produced from action potentials. The minuscule signals that result from the stimuli entering the cells must be amplified and turned into a sufficient signal that will be sent to the brain.
A sensory receptor's adequate stimulus is determined by the signal transduction mechanisms and ion channels incorporated in the sensory receptor's plasma membrane. The adequate stimulus is often discussed in relation to sensory thresholds and absolute thresholds, which describe the smallest amount of a stimulus needed to activate a feeling within the sensory organ.
Categorizations of receptors
Receptors are categorized by the stimuli to which they respond. Adequate stimuli are also often categorized based on their purpose and location within the body. The following are the categorizations of receptors within the body:
Visual – These are found in the visual organs of species and are respon
Document 4:::
Food intolerance is a detrimental reaction, often delayed, to a food, beverage, food additive, or compound found in foods that produces symptoms in one or more body organs and systems, but generally refers to reactions other than food allergy. Food hypersensitivity is used to refer broadly to both food intolerances and food allergies.
Food allergies are immune reactions, typically an IgE reaction caused by the release of histamine, but also encompassing non-IgE immune responses. This mechanism means that allergies typically produce an immediate reaction (a few minutes to a few hours) to foods.
Food intolerances can be classified according to their mechanism. Intolerance can result from the absence of specific chemicals or enzymes needed to digest a food substance, as in hereditary fructose intolerance. It may be a result of an abnormality in the body's ability to absorb nutrients, as occurs in fructose malabsorption. Food intolerance reactions can occur to naturally occurring chemicals in foods, as in salicylate sensitivity. Drugs sourced from plants, such as aspirin, can also cause these kinds of reactions.
Definitions
Food hypersensitivity is used to refer broadly to both food intolerances and food allergies. There are a variety of earlier terms which are no longer in use such as "pseudo-allergy".
Food intolerance reactions can include pharmacologic, metabolic, and gastro-intestinal responses to foods or food compounds. Food intolerance does not include either psychological responses or foodborne illness.
A non-allergic food hypersensitivity is an abnormal physiological response. It can be difficult to determine the poorly tolerated substance as reactions can be delayed, dose-dependent, and a particular reaction-causing compound may be found in many foods.
Metabolic food reactions are due to inborn or acquired errors of metabolism of nutrients, such as in lactase deficiency, phenylketonuria and favism.
Pharmacological reactions are generally due to low-molecular-we
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What term simply means sensitive beyond normal levels of activation?
A. isosensitivity
B. monosensitivity
C. hyposensitivity
D. hypersensitivity
Answer:
|
|
sciq-9018
|
multiple_choice
|
The metacarpophalangeal joints in the finger are examples of what kind of joints?
|
[
"fibrous",
"saddle",
"hinge",
"condyloid"
] |
D
|
Relavent Documents:
Document 0:::
The metacarpophalangeal joints (MCP) are situated between the metacarpal bones and the proximal phalanges of the fingers. These joints are of the condyloid kind, formed by the reception of the rounded heads of the metacarpal bones into shallow cavities on the proximal ends of the proximal phalanges. Being condyloid, they allow the movements of flexion, extension, abduction, adduction and circumduction (see anatomical terms of motion) at the joint.
Structure
Ligaments
Each joint has:
palmar ligaments of metacarpophalangeal articulations
collateral ligaments of metacarpophalangeal articulations
Dorsal surfaces
The dorsal surfaces of these joints are covered by the expansions of the Extensor tendons, together with some loose areolar tissue which connects the deep surfaces of the tendons to the bones.
Function
The movements which occur in these joints are flexion, extension, adduction, abduction, and circumduction; the movements of abduction and adduction are very limited, and cannot be performed while the fingers form a fist.
Clinical significance
Arthritis of the MCP is a distinguishing feature of rheumatoid arthritis, as opposed to the distal interphalangeal joint in osteoarthritis.
Other animals
In many quadrupeds, particularly horses and other larger animals, the metacarpophalangeal joint is referred to as the "fetlock". This term is translated literally as "foot-lock". In fact, although the term fetlock does not specifically apply to other species' metacarpophalangeal joints (for instance, humans), the "second" or "mid-finger" knuckle of the human hand does anatomically correspond to the fetlock on larger quadrupeds. For lack of a better term, the shortened name may seem more practical.
Document 1:::
In human anatomy, the metacarpal bones or metacarpus, also known as the "palm bones", are the appendicular bones that form the intermediate part of the hand between the phalanges (fingers) and the carpal bones (wrist bones), which articulate with the forearm. The metacarpal bones are homologous to the metatarsal bones in the foot.
Structure
The metacarpals form a transverse arch to which the rigid row of distal carpal bones are fixed. The peripheral metacarpals (those of the thumb and little finger) form the sides of the cup of the palmar gutter and as they are brought together they deepen this concavity. The index metacarpal is the most firmly fixed, while the thumb metacarpal articulates with the trapezium and acts independently from the others. The middle metacarpals are tightly united to the carpus by intrinsic interlocking bone elements at their bases. The ring metacarpal is somewhat more mobile while the fifth metacarpal is semi-independent.
Each metacarpal bone consists of a body or shaft, and two extremities: the head at the distal or digital end (near the fingers), and the base at the proximal or carpal end (close to the wrist).
Body
The body (shaft) is prismoid in form, and curved, so as to be convex in the longitudinal direction behind, concave in front. It presents three surfaces: medial, lateral, and dorsal.
The medial and lateral surfaces are concave, for the attachment of the interosseus muscles, and separated from one another by a prominent anterior ridge.
The dorsal surface presents in its distal two-thirds a smooth, triangular, flattened area which is covered in by the tendons of the extensor muscles. This surface is bounded by two lines, which commence in small tubercles situated on either side of the digital extremity, and, passing upward, converge and meet some distance above the center of the bone and form a ridge which runs along the rest of the dorsal surface to the carpal extremity. This ridge separates two sloping surfaces for the a
Document 2:::
The intermetacarpal joints are in the hand formed between the metacarpal bones. The bases of the second, third, fourth and fifth metacarpal bones articulate with one another by small surfaces covered with cartilage. The metacarpal bones are connected together by dorsal, palmar, and interosseous ligaments.
The dorsal metacarpal ligaments (ligamenta metacarpalia dorsalia) and palmar metacarpal ligaments (ligamenta metacarpalia palmaria) pass transversely from one bone to another on the dorsal and palmar surfaces.
The interosseous metacarpal ligaments (ligamenta metacarpalia interossea) connect their contiguous surfaces, just distal to their collateral articular facets.
The synovial membrane for these joints is continuous with that of the carpometacarpal joints.
See also
Transverse metacarpal ligament
Document 3:::
The metatarsophalangeal joints (MTP joints), also informally known as toe knuckles, are the joints between the metatarsal bones of the foot and the proximal bones (proximal phalanges) of the toes. They are condyloid joints, meaning that an elliptical or rounded surface (of the metatarsal bones) comes close to a shallow cavity (of the proximal phalanges).
The ligaments are the plantar and two collateral.
Movements
The movements permitted in the metatarsophalangeal joints are flexion, extension, abduction, adduction and circumduction.
See also
Bunion
Hallux rigidus (stiff big toe)
Metatarsophalangeal joint sprain (turf toe)
Document 4:::
The collateral ligaments of metatarsophalangeal joints are strong, rounded cords, placed one on either side of each joint, and attached, by one end, to the posterior tubercle on the side of the head of the metatarsal bone, and, by the other, to the contiguous extremity of the phalanx.
The place of dorsal ligaments is supplied by the extensor tendons on the dorsal surfaces of the joints.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The metacarpophalangeal joints in the finger are examples of what kind of joints?
A. fibrous
B. saddle
C. hinge
D. condyloid
Answer:
|
|
scienceQA-1621
|
multiple_choice
|
Complete the sentence.
() an egg is fertilized, it can become a ().
|
[
"After . . . cone",
"After . . . seed",
"Before . . . cone",
"Before . . . seed"
] |
B
|
Fertilized eggs grow into seeds. An egg cannot become a seed until after it is fertilized.
A seed can grow into a new plant, which can grow cones. But a fertilized egg does not become a cone.
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
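To make the structure concrete, here is a minimal Python sketch of ours (the concept names are hypothetical) that checks the defining axioms of a knowledge space: the family of feasible states contains the empty set and the full domain Q, and is closed under union.

    # Minimal sketch of a knowledge space over a small domain Q.
    # All concept names are hypothetical illustrations.
    from itertools import combinations

    Q = frozenset({"counting", "addition", "multiplication"})

    # Feasible states: subsets of Q respecting the prerequisite
    # "addition comes before multiplication".
    states = {
        frozenset(),
        frozenset({"counting"}),
        frozenset({"counting", "addition"}),
        Q,
    }

    def is_knowledge_space(states, Q):
        """The family must contain the empty set and the full domain Q,
        and be closed under union (the defining axioms of the theory)."""
        if frozenset() not in states or Q not in states:
            return False
        return all(a | b in states for a, b in combinations(states, 2))

    print(is_knowledge_space(states, Q))  # -> True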
Document 2:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon the college entrance requirements of the schools to which the student was planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT Subject Tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological and molecular tests. A set of 60 questions was taken by all test takers for Biology, and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average score for Molecular was 630, while for Ecological it was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done in response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
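To illustrate the scoring rule just described, here is a small Python sketch of ours (the function name is hypothetical; this is not official College Board code):

    def raw_biology_score(correct, incorrect, blank):
        """Raw score under the described rule: +1 per correct answer,
        -1/4 per incorrect answer (five-choice questions), 0 for blanks;
        the Biology E/M test had 80 questions in total."""
        assert correct + incorrect + blank == 80
        return correct - 0.25 * incorrect

    # Example: 60 correct, 12 incorrect, 8 blank -> raw score 57.0
    print(raw_biology_score(60, 12, 8))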
The questions covered a broad range of topics in general biology. There were more specific questions relating, respectively, to ecological concepts (such as population studies and general ecology) on the E test and to molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 3:::
Progress tests are longitudinal, feedback-oriented educational assessment tools for evaluating the development and sustainability of cognitive knowledge during a learning process. A progress test is a written knowledge exam (usually involving multiple choice questions) that is usually administered to all students in the program at the same time and at regular intervals (usually twice to four times yearly) throughout the entire academic program. The test samples the complete knowledge domain expected of new graduates upon completion of their courses, regardless of the year level of the student. The differences between students' knowledge levels show in the test scores; the further a student has progressed in the curriculum, the higher the scores. As a result, these scores provide a longitudinal, repeated-measures, curriculum-independent assessment of the objectives (in knowledge) of the entire programme.
History
Since its inception in the late 1970s at both Maastricht University and the University of Missouri–Kansas City independently, the progress test of applied knowledge has been increasingly used in medical and health sciences programs across the globe. They are well established and increasingly used in medical education in both undergraduate and postgraduate medical education. They are used formatively and summatively.
Use in academic programs
The progress test is currently used by national progress test consortia in the United Kingdom, Italy, the Netherlands, and Germany (including Austria), and in individual schools in Africa, Saudi Arabia, South East Asia, the Caribbean, Australia, New Zealand, Sweden, Finland, the UK, and the USA. The National Board of Medical Examiners in the USA also provides progress testing in various countries. The feasibility of an international approach to progress testing has recently been acknowledged and was first demonstrated by Albano et al. in 1996, who compared test scores across German, Dutch and Italian medi
Document 4:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4): (figure omitted from this excerpt)
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Complete the sentence.
() an egg is fertilized, it can become a ().
A. After . . . cone
B. After . . . seed
C. Before . . . cone
D. Before . . . seed
Answer:
|
sciq-161
|
multiple_choice
|
What type of pressure is the pressure exerted by gas particles in earth’s atmosphere as those particles collide with objects?
|
[
"vertical pressure",
"atmospheric pressure",
"adjacent pressure",
"adjacent pressure"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Advanced Placement (AP) Physics C: Mechanics (also known as AP Mechanics) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a one-semester calculus-based university course in mechanics. The content of Physics C: Mechanics overlaps with that of AP Physics 1, but Physics 1 is algebra-based, while Physics C is calculus-based. Physics C: Mechanics may be combined with its electricity and magnetism counterpart to form a year-long course that prepares for both exams.
Course content
Intended to be equivalent to an introductory college course in mechanics for physics or engineering majors, the course modules are:
Kinematics
Newton's laws of motion
Work, energy and power
Systems of particles and linear momentum
Circular motion and rotation
Oscillations and gravitation.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a Calculus I class.
This course is often compared to AP Physics 1: Algebra Based for its similar course material involving kinematics, work, motion, forces, rotation, and oscillations. However, AP Physics 1: Algebra Based lacks concepts found in Calculus I, like derivatives or integrals.
This course may be combined with AP Physics C: Electricity and Magnetism to make a unified Physics C course that prepares for both exams.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Mechanics is separate from the AP examination for AP Physics C: Electricity and Magnetism. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday aftern
Document 2:::
There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework.
AP Physics 1 and 2
AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge.
AP Physics 1
AP Physics 1 covers Newtonian mechanics, including:
Unit 1: Kinematics
Unit 2: Dynamics
Unit 3: Circular Motion and Gravitation
Unit 4: Energy
Unit 5: Momentum
Unit 6: Simple Harmonic Motion
Unit 7: Torque and Rotational Motion
Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2.
AP Physics 2
AP Physics 2 covers the following topics:
Unit 1: Fluids
Unit 2: Thermodynamics
Unit 3: Electric Force, Field, and Potential
Unit 4: Electric Circuits
Unit 5: Magnetism and Electromagnetic Induction
Unit 6: Geometric and Physical Optics
Unit 7: Quantum, Atomic, and Nuclear Physics
AP Physics C
From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single
Document 3:::
The SAT Subject Test in Physics, Physics SAT II, or simply the Physics SAT, was a one-hour multiple choice test on physics administered by the College Board in the United States. A high school student generally chose to take the test to fulfill college entrance requirements for the schools at which the student was planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; until January 2005, they were known as SAT IIs; they are still well known by this name.
The material tested on the Physics SAT was supposed to be equivalent to that taught in a junior- or senior-level high school physics class. It required critical thinking and test-taking strategies, at which high school freshmen or sophomores may have been inexperienced. The Physics SAT tested more than what normal state requirements were; therefore, many students prepared for the Physics SAT using a preparatory book or by taking an AP course in physics.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Physics. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done in response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
The SAT Subject Test in Physics had 75 questions and consisted of two parts: Part A and Part B.
Part A:
First 12 or 13 questions
4 groups of two to four questions each
The questions within any one group all relate to a single situation.
Five possible answer choices are given before the question.
An answer choice can be used once, more than once, or not at all in each group.
Part B:
Last 62 or 63 questions
Each question has five possible answer choices with one correct answer.
Some questions may be in groups of two or three.
Topics
Scoring
The test had 75 multiple choice questions that were to be answered in one hour. All questions had f
Document 4:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of pressure is the pressure exerted by gas particles in earth’s atmosphere as those particles collide with objects?
A. vertical pressure
B. atmospheric pressure
C. adjacent pressure
D. adjacent pressure
Answer:
|
|
ai2_arc-737
|
multiple_choice
|
A group of Canada geese left a Florida lake in the spring. The geese arrived at a Maine lake 2,000 km away in 40 days. If the geese traveled at a constant rate, how far did the geese travel on the first day?
|
[
"5 km",
"20 km",
"40 km",
"50 km"
] |
D
|
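At a constant rate, the geese covered the same distance each day: 2,000 km ÷ 40 days = 50 km per day. So the geese traveled 50 km on the first day.
|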
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon the college entrance requirements of the schools to which the student was planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT Subject Tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological and molecular tests. A set of 60 questions was taken by all test takers for Biology, and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average score for Molecular was 630, while for Ecological it was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done in response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. There were more specific questions relating, respectively, to ecological concepts (such as population studies and general ecology) on the E test and to molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earning a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the students' chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college careers at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 3:::
Advanced Placement (AP) Calculus (also known as AP Calc, Calc AB / Calc BC or simply AB / BC) is a set of two distinct Advanced Placement calculus courses and exams offered by the American nonprofit organization College Board. AP Calculus AB covers basic introductions to limits, derivatives, and integrals. AP Calculus BC covers all AP Calculus AB topics plus additional topics (including integration by parts, Taylor series, parametric equations, vector calculus, and polar coordinate functions).
AP Calculus AB
AP Calculus AB is an Advanced Placement calculus course. It is traditionally taken after precalculus and is the first calculus course offered at most schools except for possibly a regular calculus class. The Pre-Advanced Placement pathway for math helps prepare students for further Advanced Placement classes and exams.
Purpose
According to the College Board:
Topic outline
The material includes the study and application of differentiation and integration, and graphical analysis including limits, asymptotes, and continuity. An AP Calculus AB course is typically equivalent to one semester of college calculus.
Analysis of graphs (predicting and explaining behavior)
Limits of functions (one and two sided)
Asymptotic and unbounded behavior
Continuity
Derivatives
Concept
At a point
As a function
Applications
Higher order derivatives
Techniques
Integrals
Interpretations
Properties
Applications
Techniques
Numerical approximations
Fundamental theorem of calculus
Antidifferentiation
L'Hôpital's rule
Separable differential equations
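As a worked illustration of the last topic in the outline above (our example, not College Board material), a separable differential equation is solved by collecting each variable on its own side and integrating:

\[
\frac{dy}{dx} = ky \;\Longrightarrow\; \int \frac{dy}{y} = \int k\,dx \;\Longrightarrow\; \ln|y| = kx + C \;\Longrightarrow\; y = A e^{kx},
\]

where A is a constant determined by an initial condition.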
AP Calculus BC
AP Calculus BC is equivalent to a full year regular college course, covering both Calculus I and II. After passing the exam, students may move on to Calculus III (Multivariable Calculus).
Purpose
According to the College Board,
Topic outline
AP Calculus BC includes all of the topics covered in AP Calculus AB, as well as the following:
Convergence tests for series
Taylor series
Parametric equations
Polar functions (inclu
Document 4:::
Progress tests are longitudinal, feedback-oriented educational assessment tools for evaluating the development and sustainability of cognitive knowledge during a learning process. A progress test is a written knowledge exam (usually involving multiple choice questions) that is usually administered to all students in the program at the same time and at regular intervals (usually twice to four times yearly) throughout the entire academic program. The test samples the complete knowledge domain expected of new graduates upon completion of their courses, regardless of the year level of the student. The differences between students' knowledge levels show in the test scores; the further a student has progressed in the curriculum, the higher the scores. As a result, these scores provide a longitudinal, repeated-measures, curriculum-independent assessment of the objectives (in knowledge) of the entire programme.
History
Since its inception in the late 1970s at both Maastricht University and the University of Missouri–Kansas City independently, the progress test of applied knowledge has been increasingly used in medical and health sciences programs across the globe. They are well established and increasingly used in medical education in both undergraduate and postgraduate medical education. They are used formatively and summatively.
Use in academic programs
The progress test is currently used by national progress test consortia in the United Kingdom, Italy, the Netherlands, and Germany (including Austria), and in individual schools in Africa, Saudi Arabia, South East Asia, the Caribbean, Australia, New Zealand, Sweden, Finland, the UK, and the USA. The National Board of Medical Examiners in the USA also provides progress testing in various countries. The feasibility of an international approach to progress testing has recently been acknowledged and was first demonstrated by Albano et al. in 1996, who compared test scores across German, Dutch and Italian medi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A group of Canada geese left a Florida lake in the spring. The geese arrived at a Maine lake 2,000 km away in 40 days. If the geese traveled at a constant rate, how far did the geese travel on the first day?
A. 5 km
B. 20 km
C. 40 km
D. 50 km
Answer:
|
|
sciq-1956
|
multiple_choice
|
Ingestive protists extend their cell wall and cell membrane around the food item, forming a what?
|
[
"food pocket",
"fuel vacuole",
"food vacuole",
"protective bubble"
] |
C
|
Relevant Documents:
Document 0:::
A hypha (plural: hyphae) is a long, branching, filamentous structure of a fungus, oomycete, or actinobacterium. In most fungi, hyphae are the main mode of vegetative growth, and are collectively called a mycelium.
Structure
A hypha consists of one or more cells surrounded by a tubular cell wall. In most fungi, hyphae are divided into cells by internal cross-walls called "septa" (singular septum). Septa are usually perforated by pores large enough for ribosomes, mitochondria, and sometimes nuclei to flow between cells. The major structural polymer in fungal cell walls is typically chitin, in contrast to plants and oomycetes that have cellulosic cell walls. Some fungi have aseptate hyphae, meaning their hyphae are not partitioned by septa.
Hyphae have an average diameter of 4–6 µm.
Growth
Hyphae grow at their tips. During tip growth, cell walls are extended by the external assembly and polymerization of cell wall components, and the internal production of new cell membrane. The Spitzenkörper is an intracellular organelle associated with tip growth. It is composed of an aggregation of membrane-bound vesicles containing cell wall components. The Spitzenkörper is part of the endomembrane system of fungi, holding and releasing vesicles it receives from the Golgi apparatus. These vesicles travel to the cell membrane via the cytoskeleton and release their contents (including various cysteine-rich proteins including cerato-platanins and hydrophobins) outside the cell by the process of exocytosis, where they can then be transported to where they are needed. Vesicle membranes contribute to growth of the cell membrane while their contents form new cell wall. The Spitzenkörper moves along the apex of the hyphal strand and generates apical growth and branching; the apical growth rate of the hyphal strand parallels and is regulated by the movement of the Spitzenkörper.
As a hypha extends, septa may be formed behind the growing tip to partition each hypha into individual cells.
Document 1:::
Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure.
General characteristics
There are two types of cells: prokaryotes and eukaryotes.
Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than those of the later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles.
Prokaryotes
Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement).
Eukaryotes
Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str
Document 2:::
A microbial cyst is a resting or dormant stage of a microorganism, usually a bacterium or a protist or rarely an invertebrate animal, that helps the organism to survive in unfavorable environmental conditions. It can be thought of as a state of suspended animation in which the metabolic processes of the cell are slowed and the cell ceases all activities like feeding and locomotion. Encystment, the formation of the cyst, also helps the microbe to disperse easily, from one host to another or to a more favorable environment. When the encysted microbe reaches an environment favorable to its growth and survival, the cyst wall breaks down by a process known as excystation. In excystment, the exact stimulus is unknown for most protists.
Unfavorable environmental conditions that are not conducive to the growth of the microbe, such as lack of nutrients or oxygen, extreme temperatures, lack of moisture, and the presence of toxic chemicals, trigger the formation of a cyst.
The main functions of cysts are to protect against adverse changes in the environment, such as nutrient deficiency, desiccation, adverse pH, and low levels of oxygen; to serve as sites for nuclear reorganization and cell division; and, in parasitic species, to act as the infectious stage between hosts.
Cyst formation across species
In bacteria
In bacteria (for instance, Azotobacter sp.), encystment occurs by changes in the cell wall; the cytoplasm contracts and the cell wall thickens. Bacterial cysts differ from endospores in the way they are formed and also the degree of resistance to unfavorable conditions. Endospores are much more resistant than cysts.
Bacteria do not always form a single cyst, and a variety of cyst-formation patterns is known. For example, Rhodospirillum centenum can change the number of cells per cyst, usually ranging from four to ten cells per cyst depending on the environment.
In protists
Protists, especially protozoan parasites, are often exposed to very harsh conditions at various stages in t
Document 3:::
Biology
When Cordyceps attacks a host, the mycelium invades and eventually replaces the host tissue, while the elongated fruit body (ascocarp) may be cylindrical, branched, or of complex shape. The ascocarp bears many small, flask-shaped perithecia containing asci. These, in turn, contain
Document 4:::
A polyp in zoology is one of two forms found in the phylum Cnidaria, the other being the medusa. Polyps are roughly cylindrical in shape and elongated at the axis of the vase-shaped body. In solitary polyps, the aboral (opposite to oral) end is attached to the substrate by means of a disc-like holdfast called a pedal disc, while in colonies of polyps it is connected to other polyps, either directly or indirectly. The oral end contains the mouth, and is surrounded by a circlet of tentacles.
Classes
In the class Anthozoa, comprising the sea anemones and corals, the individual is always a polyp; in the class Hydrozoa, however, the individual may be either a polyp or a medusa, with most species undergoing a life cycle with both a polyp stage and a medusa stage. In class Scyphozoa, the medusa stage is dominant, and the polyp stage may or may not be present, depending on the family. In those scyphozoans that have the larval planula metamorphose into a polyp, the polyp, also called a "scyphistoma," grows until it develops a stack of plate-like medusae that pinch off and swim away in a process known as strobilation. Once strobilation is complete, the polyp may die, or regenerate itself to repeat the process again later. With cubozoans, the planula settles onto a suitable surface and develops into a polyp. The cubozoan polyp then eventually metamorphoses directly into a medusa.
Anatomy
The body of the polyp may be roughly compared in structure to a sac, the wall of which is composed of two layers of cells. The outer layer is known technically as the ectoderm, the inner layer as the endoderm (or gastroderm). Between ectoderm and endoderm is a supporting layer of structureless gelatinous substance termed mesoglea, secreted by the cell layers of the body wall. The mesoglea can be thinner than the endoderm or ectoderm or comprise the bulk of the body as in larger jellyfish. The mesoglea can contain skeletal elements derived from cells migrated from ectoderm.
Th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Ingestive protists extend their cell wall and cell membrane around the food item, forming a what?
A. food pocket
B. fuel vacuole
C. food vacuole
D. protective bubble
Answer:
|
|
sciq-5593
|
multiple_choice
|
Chordates are defined by a set of four characteristics that are shared by these animals at some point during their?
|
[
"birth",
"response",
"development",
"death"
] |
C
|
Relevant Documents:
Document 0:::
An equivalence group is a set of unspecified cells that have the same developmental potential or ability to adopt various fates. Our current understanding suggests that equivalence groups are limited to cells of the same ancestry, also known as sibling cells. Often, cells of an equivalence group adopt different fates from one another.
Equivalence groups assume various potential fates in two general, non-mutually exclusive ways. One mechanism, induction, occurs when a signal originating from outside of the equivalence group specifies a subset of the naïve cells. Another mode, known as lateral inhibition, arises when a signal within an equivalence group causes one cell to adopt a dominant fate while others in the group are inhibited from doing so. In many examples of equivalence groups, both induction and lateral inhibition are used to define patterns of distinct cell types.
Cells of an equivalence group that do not receive a signal adopt a default fate. Alternatively, cells that receive a signal take on different fates. At a certain point, the fates of cells within an equivalence group become irreversibly determined, thus they lose their multipotent potential. The following provides examples of equivalence groups studied in nematodes and ascidians.
Vulva Precursor Cell Equivalence Group
Introduction
A classic example of an equivalence group is the vulva precursor cells (VPCs) of nematodes. In Caenorhabditis elegans, self-fertilized eggs exit the body through the vulva. This organ develops from a subset of cells of an equivalence group consisting of six VPCs, P3.p-P8.p, which lie ventrally along the anterior-posterior axis. In this example a single overlying somatic cell, the anchor cell, induces nearby VPCs to take on vulval fates 1° (P6.p) and 2° (P5.p and P7.p). VPCs that are not induced form the 3° lineage (P3.p, P4.p and P8.p), which makes epidermal cells that fuse to a large syncytial epidermis.
The six VPCs form an equivalence group beca
Document 1:::
Segmentation in biology is the division of some animal and plant body plans into a linear series of repetitive segments that may or may not be interconnected to each other. This article focuses on the segmentation of animal body plans, specifically using the examples of the taxa Arthropoda, Chordata, and Annelida. These three groups form segments by using a "growth zone" to direct and define the segments. While all three have a generally segmented body plan and use a growth zone, they use different mechanisms for generating this patterning. Even within these groups, different organisms have different mechanisms for segmenting the body. Segmentation of the body plan is important for allowing free movement and development of certain body parts. It also allows for regeneration in specific individuals.
Definition
Segmentation is a difficult process to satisfactorily define. Many taxa (for example the molluscs) have some form of serial repetition in their units but are not conventionally thought of as segmented. Segmented animals are those considered to have organs that were repeated, or to have a body composed of self-similar units, but usually it is the parts of an organism that are referred to as being segmented.
Embryology
Segmentation in animals typically falls into three types, characteristic of different arthropods, vertebrates, and annelids. Arthropods such as the fruit fly form segments from a field of equivalent cells based on transcription factor gradients. Vertebrates like the zebrafish use oscillating gene expression to define segments known as somites. Annelids such as the leech use smaller blast cells budded off from large teloblast cells to define segments.
Arthropods
Although Drosophila segmentation is not representative of the arthropod phylum in general, it is the most highly studied. Early screens to identify genes involved in cuticle development led to the discovery of a class of genes that was necessary for proper segmentation of the Drosophila
Document 2:::
In the field of developmental biology, regional differentiation is the process by which different areas are identified in the development of the early embryo. The process by which the cells become specified differs between organisms.
Cell fate determination
In terms of developmental commitment, a cell can either be specified or it can be determined. Specification is the first stage in differentiation. A cell that is specified can have its commitment reversed while the determined state is irreversible. There are two main types of specification: autonomous and conditional. A cell specified autonomously will develop into a specific fate based upon cytoplasmic determinants with no regard to the environment the cell is in. A cell specified conditionally will develop into a specific fate based upon other surrounding cells or morphogen gradients. Another type of specification is syncytial specification, characteristic of most insect classes.
Specification in sea urchins uses both autonomous and conditional mechanisms to determine the anterior/posterior axis. The anterior/posterior axis lies along the animal/vegetal axis set up during cleavage. The micromeres induce the nearby tissue to become endoderm while the animal cells are specified to become ectoderm. The animal cells are not determined because the micromeres can induce the animal cells to also take on mesodermal and endodermal fates. It was observed that β-catenin was present in the nuclei at the vegetal pole of the blastula. Through a series of experiments, one study confirmed the role of β-catenin in the cell-autonomous specification of vegetal cell fates and the micromeres' inducing ability. Treatments with lithium chloride sufficient to vegetalize the embryo resulted in increases in nuclearly localized β-catenin. Reduction of expression of β-catenin in the nucleus correlated with loss of vegetal cell fates. Transplants of micromeres lacking nuclear accumulation of β-catenin were unable to induce a second axis.
Document 3:::
Gerd B. Müller (born 1953) is an Austrian biologist who is emeritus professor at the University of Vienna where he was the head of the Department of Theoretical Biology in the Center for Organismal Systems Biology. His research interests focus on vertebrate limb development, evolutionary novelties, evo-devo theory, and the Extended Evolutionary Synthesis. He is also concerned with the development of 3D based imaging tools in developmental biology.
Biography
Müller received an M.D. in 1979 and a Ph.D. in zoology in 1985, both from the University of Vienna. He has been a sabbatical fellow at the Department of Developmental Biology, Dalhousie University, Canada, (1988) and a visiting scholar at the Museum of Comparative Zoology, Harvard University, and received his Habilitation in Anatomy and Embryology in 1989. He is a founding member of the Konrad Lorenz Institute for Evolution and Cognition Research, Klosterneuburg, Austria, of which he has been President since 1997. Müller is on the editorial boards of several scientific journals, including Biological Theory where he serves as an associate editor. He is editor-in-chief of the Vienna Series in Theoretical Biology, a book series devoted to theoretical developments in the biosciences, published by MIT Press.
Scientific contribution
Müller has published on developmental imaging, vertebrate limb development, the origins of phenotypic novelty, EvoDevo theory, and evolutionary theory.
With the cell and developmental biologist Stuart Newman, Müller co-edited the book Origination of Organismal Form (MIT Press, 2003). This book on evolutionary developmental biology is a collection of papers on generative mechanisms that were plausibly involved in the origination of disparate body forms during early periods of organismal life. Particular attention is given to epigenetic factors, such as physical determinants and environmental parameters, that may have led to the spontaneous emergence of bodyplans and organ forms during a
Document 4:::
Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology.
Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago.
Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Chordates are defined by a set of four characteristics that are shared by these animals at some point during their?
A. birth
B. response
C. development
D. death
Answer:
|
|
sciq-1903
|
multiple_choice
|
What is the term for cellular eating?
|
[
"ancylosis",
"Pinocytosis",
"consumption",
"phagocytosis"
] |
D
|
Relevant Documents:
Document 0:::
Relatively speaking, the brain consumes an immense amount of energy in comparison to the rest of the body. The mechanisms involved in the transfer of energy from foods to neurons are likely to be fundamental to the control of brain function. Human bodily processes, including the brain, all require both macronutrients, as well as micronutrients.
Insufficient intake of selected vitamins, or certain metabolic disorders, may affect cognitive processes by disrupting the nutrient-dependent processes within the body that are associated with the management of energy in neurons, which can subsequently affect synaptic plasticity, or the ability to encode new memories.
Macronutrients
The human brain requires nutrients obtained from the diet to develop and sustain its physical structure and cognitive functions. Additionally, the brain requires caloric energy predominately derived from the primary macronutrients to operate. The three primary macronutrients include carbohydrates, proteins, and fats. Each macronutrient can impact cognition through multiple mechanisms, including glucose and insulin metabolism, neurotransmitter actions, oxidative stress and inflammation, and the gut-brain axis. Inadequate macronutrient consumption or proportion could impair optimal cognitive functioning and have long-term health implications.
Carbohydrates
Through digestion, dietary carbohydrates are broken down and converted into glucose, which is the sole energy source for the brain. Optimal brain function relies on adequate carbohydrate consumption, as carbohydrates provide the quickest source of glucose for the brain. Glucose deficiencies such as hypoglycaemia reduce available energy for the brain and impair all cognitive processes and performance. Additionally, situations with high cognitive demand, such as learning a new task, increase brain glucose utilization, depleting blood glucose stores and initiating the need for supplementation.
Complex carbohydrates, especially those with high d
Document 1:::
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'.
Cells can acquire specialized functions and carry out various tasks such as replication, DNA repair, protein synthesis, and motility.
Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms.
The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology.
Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago.
Discovery
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i
Document 2:::
Holozoic nutrition (Greek: holo- whole; zoikos- of animals) is a type of heterotrophic nutrition that is characterized by the internalization (ingestion) and internal processing of liquid or solid food particles. Protozoa, such as amoebas, and most free-living animals, such as humans, exhibit this type of nutrition, in which food is taken into the body as a liquid or solid and then further broken down internally. Most animals exhibit holozoic nutrition.
In Holozoic nutrition, the energy and organic building blocks are obtained by ingesting and then digesting other organisms or pieces of other organisms, including blood and decaying organic matter. This contrasts with holophytic nutrition, in which energy and organic building blocks are obtained through photosynthesis or chemosynthesis, and with saprozoic nutrition, in which digestive enzymes are released externally and the resulting monomers (small organic molecules) are absorbed directly from the environment.
There are several stages of holozoic nutrition, which often occur in separate compartments within an organism (such as the stomach and intestines):
1. Ingestion: In animals, this is merely taking food in through the mouth. In protozoa, this most commonly occurs through phagocytosis.
2. Digestion: The physical breakdown of large, complex organic food particles and the enzymatic breakdown of complex organic compounds into small, simple molecules.
3. Absorption: The active and passive transport of the chemical products of digestion out of the food-containing compartment and into the body.
4. Assimilation: The chemical products are used for various metabolic processes.
Document 3:::
Every organism requires energy to be active. However, to obtain energy from its outside environment, cells must not only retrieve molecules from their surroundings but also break them down. This process is known as intracellular digestion. In its broadest sense, intracellular digestion is the breakdown of substances within the cytoplasm of a cell. In detail, a phagocyte's duty is to obtain food particles and digest them in a vacuole. For example, following phagocytosis, the ingested particle (or phagosome) fuses with a lysosome containing hydrolytic enzymes to form a phagolysosome; the pathogens or food particles within the phagosome are then digested by the lysosome's enzymes.
Intracellular digestion can also refer to the process in which animals that lack a digestive tract bring food items into the cell for the purposes of digestion for nutritional needs. This kind of intracellular digestion occurs in many unicellular protozoans, in Pycnogonida, in some molluscs, Cnidaria and Porifera. There is another type of digestion, called extracellular digestion. In amphioxus, digestion is both extracellular and intracellular.
Function
Intracellular digestion is divided into heterophagic digestion and autophagic digestion. Both types take place in the lysosome, and each has a very specific function. Heterophagic intracellular digestion has the important role of breaking down all molecules that are brought into a cell by endocytosis. The degraded molecules need to be delivered to the cytoplasm, which is not possible unless they are first hydrolyzed in the lysosome. Autophagic intracellular digestion takes place within the cell itself, meaning it digests the cell's own internal molecules.
Autophagy
Generally, autophagy includes three small branches, which are macroautophagy, microautophagy, and chaperone-mediated autophagy.
Occurrence
Most organisms that use intracellular digestion belong to Kingdom Protista, such as amoeba and paramecium.
Amoeba
Amoeba u
Document 4:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, the double-stranded macromolecule that carries the hereditary information of the cell, is found in all living cells; each cell carries one or more chromosomes with a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for cellular eating?
A. ancylosis
B. Pinocytosis
C. consumption
D. phagocytosis
Answer:
|
|
sciq-7866
|
multiple_choice
|
If the electromagnetic repulsion between protons is greater than the strong nuclear force of attraction between them, what do they become?
|
[
"destroyed",
"slow , or radioactive",
"unstable, or radioactive",
"unstable , or experimental"
] |
C
|
Relavent Documents:
Document 0:::
The nuclear force (or nucleon–nucleon interaction, residual strong force, or, historically, strong nuclear force) is a force that acts between hadrons, most commonly observed between protons and neutrons of atoms. Neutrons and protons, both nucleons, are affected by the nuclear force almost identically. Since protons have charge +1 e, they experience an electric force that tends to push them apart, but at short range the attractive nuclear force is strong enough to overcome the electrostatic force. The nuclear force binds nucleons into atomic nuclei.
The nuclear force is powerfully attractive between nucleons at distances of about 0.8 femtometre (fm, or 0.8×10⁻¹⁵ metre), but it rapidly decreases to insignificance at distances beyond about 2.5 fm. At distances less than 0.7 fm, the nuclear force becomes repulsive. This repulsion is responsible for the size of nuclei, since nucleons can come no closer than the force allows. (The size of an atom, measured in angstroms (Å, or 10⁻¹⁰ m), is five orders of magnitude larger). The nuclear force is not simple, though, as it depends on the nucleon spins, has a tensor component, and may depend on the relative momentum of the nucleons.
The nuclear force has an essential role in storing energy that is used in nuclear power and nuclear weapons. Work (energy) is required to bring charged protons together against their electric repulsion. This energy is stored when the protons and neutrons are bound together by the nuclear force to form a nucleus. The mass of a nucleus is less than the sum total of the individual masses of the protons and neutrons. The difference in masses is known as the mass defect, which can be expressed as an energy equivalent. Energy is released when a heavy nucleus breaks apart into two or more lighter nuclei. This energy is the internucleon potential energy that is released when the nuclear force no longer holds the charged nuclear fragments together.
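To make the mass defect bookkeeping concrete, here is a minimal Python sketch (an added illustration, not part of the source text; the masses are standard approximate values in atomic mass units) that computes the binding energy of helium-4 from the masses of its constituents:

```python
# Mass defect and binding energy of helium-4 (illustrative values).
M_PROTON = 1.007276   # u
M_NEUTRON = 1.008665  # u
M_HE4 = 4.001506      # u, nuclear mass of helium-4
U_TO_MEV = 931.494    # energy equivalent of 1 u, in MeV

def binding_energy(protons, neutrons, nuclear_mass_u):
    """Return (mass defect in u, binding energy in MeV)."""
    defect = protons * M_PROTON + neutrons * M_NEUTRON - nuclear_mass_u
    return defect, defect * U_TO_MEV

defect, be = binding_energy(2, 2, M_HE4)
print(f"mass defect = {defect:.6f} u, binding energy = {be:.2f} MeV")
# ~28.3 MeV total, i.e. about 7.07 MeV per nucleon
```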
A quantitative description of the nuclear force relies
Document 1:::
The EMC effect is the surprising observation that the cross section for deep inelastic scattering from an atomic nucleus is different from that of the same number of free protons and neutrons (collectively referred to as nucleons). From this observation, it can be inferred that the quark momentum distributions in nucleons bound inside nuclei are different from those of free nucleons. This effect was first observed in 1983 at CERN by the European Muon Collaboration, hence the name "EMC effect". It was unexpected, since the average binding energy of protons and neutrons inside nuclei is insignificant when compared to the energy transferred in deep inelastic scattering reactions that probe quark distributions. While over 1000 scientific papers have been written on the topic and numerous hypotheses have been proposed, no definitive explanation for the cause of the effect has been confirmed. Determining the origin of the EMC effect is one of the major unsolved problems in the field of nuclear physics.
Background
Protons and neutrons, collectively referred to as nucleons, are the constituents of atomic nuclei, and nuclear matter such as that in neutron stars. Protons and neutrons themselves are composite particles made up of quarks and gluons, a discovery made at SLAC in the late 1960s using deep inelastic scattering (DIS) experiments (1990 Nobel Prize).
In the DIS reaction, a probe (typically an accelerated electron) scatters from an individual quark inside a nucleon. By measuring the cross section of the DIS process, the distribution of quarks inside the nucleon can be determined. These distributions are effectively functions of a single variable, known as Bjorken-x, which is a measure of the fraction of the nucleon's momentum carried by the quark struck by the electron.
Experiments using DIS from protons by electrons and other probes have allowed physicists to measure the proton's quark distribution over a wide range of Bjorken-x, i.e. the probability of finding a quark with momentum fraction x.
Document 2:::
In nuclear physics and chemistry, the Q value for a reaction is the amount of energy absorbed or released during the nuclear reaction. The value relates to the enthalpy of a chemical reaction or the energy of radioactive decay products. It can be determined from the masses of reactants and products:

Q = (m_reactants − m_products)c²

where the masses are in atomic mass units and m_reactants and m_products are the sums of the reactant and product masses, respectively. Q values affect reaction rates. In general, the larger the positive Q value for the reaction, the faster the reaction proceeds, and the more likely the reaction is to "favor" the products.
Definition
The conservation of energy between the initial and final states of a nuclear process enables the general definition of Q based on mass–energy equivalence. For any radioactive particle decay, the kinetic energy difference will be given by:

Q = K_final − K_initial = (m_initial − m_final)c²

where K denotes the kinetic energy of the mass m.

A reaction with a positive Q value is exothermic, i.e. has a net release of energy, since the kinetic energy of the final state is greater than the kinetic energy of the initial state.

A reaction with a negative Q value is endothermic, i.e. requires a net energy input, since the kinetic energy of the final state is less than the kinetic energy of the initial state. Observe that a chemical reaction is exothermic when it has a negative enthalpy of reaction; in contrast, an exothermic nuclear reaction has a positive Q value.

The Q value can also be expressed in terms of the mass excess Δ of the nuclear species as:

Q = Δ_initial − Δ_final

Proof: The mass of a nucleus can be written as M = A·u + Δ/c², where A is the mass number (the sum of the numbers of protons and neutrons) and u·c² ≈ 931.494 MeV. Note that the count of nucleons is conserved in a nuclear reaction. Hence A_initial = A_final, the A·u terms cancel, and Q = Δ_initial − Δ_final.
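As an added, hedged illustration (not part of the original article), the mass-excess form of Q lends itself to a few lines of Python; the Δ values below are approximate reference-table numbers for the alpha decay U-238 → Th-234 + He-4:

```python
# Q value of a nuclear reaction from tabulated mass excesses (MeV).
DELTA = {"U-238": 47.309, "Th-234": 40.614, "He-4": 2.425}  # approximate

def q_value(reactants, products):
    # Valid because nucleon number A is conserved, so the A*u terms cancel.
    return sum(DELTA[s] for s in reactants) - sum(DELTA[s] for s in products)

q = q_value(["U-238"], ["Th-234", "He-4"])
print(f"Q = {q:.3f} MeV")  # positive, so the decay is exothermic
```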
Applications
Chemical Q values are measured by calorimetry. Exothermic chemical reactions tend to be more spontaneous and can emit light or heat, resulting in runaway feedback (i.e., explosions).
Q values are also featured in particle physics. For example,
Document 3:::
Radiation chemistry is a subdivision of nuclear chemistry which studies the chemical effects of ionizing radiation on matter. This is quite different from radiochemistry, as no radioactivity needs to be present in the material which is being chemically changed by the radiation. An example is the conversion of water into hydrogen gas and hydrogen peroxide.
Radiation interactions with matter
As ionizing radiation moves through matter its energy is deposited through interactions with the electrons of the absorber. The result of an interaction between the radiation and the absorbing species is removal of an electron from an atom or molecular bond to form radicals and excited species. The radical species then proceed to react with each other or with other molecules in their vicinity. It is the reactions of the radical species that are responsible for the changes observed following irradiation of a chemical system.
Charged radiation species (α and β particles) interact through Coulombic forces between the charges of the electrons in the absorbing medium and the charged radiation particle. These interactions occur continuously along the path of the incident particle until the kinetic energy of the particle is sufficiently depleted. Uncharged species (γ photons, x-rays) undergo a single event per photon, totally consuming the energy of the photon and leading to the ejection of an electron from a single atom. Electrons with sufficient energy proceed to interact with the absorbing medium identically to β radiation.
An important factor that distinguishes different radiation types from one another is the linear energy transfer (LET), which is the rate at which the radiation loses energy with distance traveled through the absorber. Low LET species are usually low mass, either photons or electron mass species (β particles, positrons) and interact sparsely along their path through the absorber, leading to isolated regions of reactive radical species. High LET species are usuall
Document 4:::
In nuclear and materials physics, stopping power is the retarding force acting on charged particles, typically alpha and beta particles, due to interaction with matter, resulting in loss of particle kinetic energy.
Stopping power is also interpreted as the rate at which a material absorbs the kinetic energy of a charged particle. Its application is important in a wide range of thermodynamic areas such as radiation protection, ion implantation and nuclear medicine.
Definition and Bragg curve
Both charged and uncharged particles lose energy while passing through matter. Positive ions are considered in most cases below.
The stopping power depends on the type and energy of the radiation and on the properties of the material it passes. Since the production of an ion pair (usually a positive ion and a (negative) electron) requires a fixed amount of energy (for example, 33.97 eV in dry air), the number of ionizations per path length is proportional to the stopping power. The stopping power of the material is numerically equal to the loss of energy E per unit path length x:

S(E) = −dE/dx

The minus sign makes S positive.
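Given a stopping-power curve S(E), the distance a particle travels before stopping follows by integrating dE/S(E). The Python sketch below is purely illustrative: the function toy_stopping_power and all of its numbers are invented for demonstration, not real material data:

```python
# Estimating particle range R = integral of dE / S(E) from a toy curve.
def toy_stopping_power(energy_mev):
    """Crude placeholder with the S ~ 1/E behaviour of the Bethe regime."""
    return 120.0 / max(energy_mev, 0.05)  # MeV per cm (made-up scale)

def estimate_range(e0_mev, steps=10_000):
    de = e0_mev / steps
    # Midpoint rule: each energy slice dE costs dE / S(E) of path length.
    return sum(de / toy_stopping_power((i + 0.5) * de) for i in range(steps))

print(f"range = {estimate_range(5.0):.3f} cm")  # toy number, toy units
```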
The force usually increases toward the end of range and reaches a maximum, the Bragg peak, shortly before the energy drops to zero. The curve that describes the force as function of the material depth is called the Bragg curve. This is of great practical importance for radiation therapy.
The equation above defines the linear stopping power, which in the international system is expressed in N but is usually indicated in other units like MeV/mm or similar. If a substance is compared in gaseous and solid form, then the linear stopping powers of the two states are very different just because of the different density. One therefore often divides the force by the density of the material to obtain the mass stopping power, which in the international system is expressed in m⁴/s² but is usually found in units like MeV/(mg/cm²) or similar. The mass stopping power then depends
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
If the electromagnetic repulsion between protons is greater than the strong nuclear force of attraction between them, what do they become?
A. destroyed
B. slow, or radioactive
C. unstable, or radioactive
D. unstable, or experimental
Answer:
|
|
ai2_arc-959
|
multiple_choice
|
What do scientists mean when they refer to a population?
|
[
"all the organisms in an ecosystem",
"all the species that share similar anatomical features",
"all the animals that acquire resources through similar methods",
"all the interbreeding members of a certain species in an ecosystem"
] |
D
|
Relavent Documents:
Document 0:::
The term population biology has been used with different meanings.
In 1971 Edward O. Wilson et al. used the term in the sense of applying mathematical models to population genetics, community ecology, and population dynamics. Alan Hastings used the term in 1997 as the title of his book on the mathematics used in population dynamics. The name was also used for a course given at UC Davis in the late 2010s, whose description characterizes it as an interdisciplinary field combining the areas of ecology and evolutionary biology. The course includes mathematics, statistics, ecology, genetics, and systematics. Numerous types of organisms are studied.
The field is served by the journal Theoretical Population Biology.
See also
Document 1:::
Molecular ecology is a field of evolutionary biology that is concerned with applying molecular population genetics, molecular phylogenetics, and more recently genomics to traditional ecological questions (e.g., species diagnosis, conservation and assessment of biodiversity, species-area relationships, and many questions in behavioral ecology). It is virtually synonymous with the field of "Ecological Genetics" as pioneered by Theodosius Dobzhansky, E. B. Ford, Godfrey M. Hewitt, and others. These fields are united in their attempt to study genetic-based questions "out in the field" as opposed to the laboratory. Molecular ecology is related to the field of conservation genetics.
Methods frequently include using microsatellites to determine gene flow and hybridization between populations. The development of molecular ecology is also closely related to the use of DNA microarrays, which allows for the simultaneous analysis of the expression of thousands of different genes. Quantitative PCR may also be used to analyze gene expression as a result of changes in environmental conditions or different responses by differently adapted individuals.
Molecular ecology uses molecular genetic data to answer ecological question related to biogeography, genomics, conservation genetics, and behavioral ecology. Studies mostly use data based on deoxyribonucleic acid sequences (DNA). This approach has been enhanced over a number of years to allow researchers to sequence thousands of genes from a small amount of starting DNA. Allele sizes are another way researchers are able to compare individuals and populations which allows them to quantify the genetic diversity within a population and the genetic similarities among populations.
Bacterial diversity
Molecular ecological techniques are used to study in situ questions of bacterial diversity. Many microorganisms are not easily obtainable as cultured strains in the laboratory, which would allow for identification and characterization. I
Document 2:::
Population ecology is a sub-field of ecology that deals with the dynamics of species populations and how these populations interact with the environment, such as birth and death rates, and by immigration and emigration.
The discipline is important in conservation biology, especially in the development of population viability analysis which makes it possible to predict the long-term probability of a species persisting in a given patch of habitat. Although population ecology is a subfield of biology, it provides interesting problems for mathematicians and statisticians who work in population dynamics.
History
In the 1940s ecology was divided into autecology—the study of individual species in relation to the environment—and synecology—the study of groups of species in relation to the environment. The term autecology (from Ancient Greek: αὐτο, aúto, "self"; οίκος, oíkos, "household"; and λόγος, lógos, "knowledge"), refers to roughly the same field of study as concepts such as life cycles and behaviour as adaptations to the environment by individual organisms. Eugene Odum, writing in 1953, considered that synecology should be divided into population ecology, community ecology and ecosystem ecology, renaming autecology as 'species ecology' (Odum regarded "autecology" as an archaic term), thus that there were four subdivisions of ecology.
Terminology
A population is defined as a group of interacting organisms of the same species. Populations are often quantified by their demographic structure. The total number of individuals in a population is defined as the population size, and how densely these individuals are distributed is defined as the population density. A population also has a geographic range, whose limits are set by the environmental conditions the species can tolerate (such as temperature).
Population size can be influenced by the per capita population growth rate (the rate at which the population size changes per individual in the population). Births, deaths, emigration, and immigration rates
Document 3:::
Microbial population biology is the application of the principles of population biology to microorganisms.
Distinguishing from other biological disciplines
Microbial population biology, in practice, is the application of population ecology and population genetics toward understanding the ecology and evolution of bacteria, archaebacteria, microscopic fungi (such as yeasts), additional microscopic eukaryotes (e.g., "protozoa" and algae), and viruses.
Microbial population biology also encompasses the evolution and ecology of community interactions (community ecology) between microorganisms, including microbial coevolution and predator-prey interactions. In addition, microbial population biology considers microbial interactions with more macroscopic organisms (e.g., host-parasite interactions), though strictly this should be more from the perspective of the microscopic rather than the macroscopic organism. A good deal of microbial population biology may be described also as microbial evolutionary ecology. On the other hand, typically microbial population biologists (unlike microbial ecologists) are less concerned with questions of the role of microorganisms in ecosystem ecology, which is the study of nutrient cycling and energy movement between biotic as well as abiotic components of ecosystems.
Microbial population biology can include aspects of molecular evolution or phylogenetics. Strictly, however, these emphases should be employed toward understanding issues of microbial evolution and ecology rather than as a means of understanding more universal truths applicable to both microscopic and macroscopic organisms. The microorganisms in such endeavors consequently should be recognized as organisms rather than simply as molecular or evolutionary reductionist model systems. Thus, the study of RNA in vitro evolution is not microbial population biology, nor is the in silico generation of phylogenies of otherwise non-microbial sequences, even if aspects of either may
Document 4:::
The Institute for Biodiversity and Ecosystem Dynamics (IBED) is one of the ten research institutes of the Faculty of Science of the Universiteit van Amsterdam. IBED employs more than 100 researchers, with PhD students and Postdocs forming a majority, and 30 supporting staff. The total annual budget is around 10 m€, of which more than 40 per cent comes from external grants and contracts. The main output consist of publications in peer reviewed journals and books (on average 220 per year). Each year around 15 PhD students defend their thesis and obtain their degree from the Universiteit van Amsterdam. The institute is managed by a general director appointed by the Dean of the Faculty for a period of five years, assisted by a business manager.
Mission statement
The mission of the Institute for Biodiversity and Ecosystem Dynamics is to increase our insight into the functioning and biodiversity of ecosystems in all their complexity. Knowledge of the interactions between living organisms and processes in their physical and chemical environment is essential for a better understanding of the dynamics of ecosystems at different temporal and spatial scales.
Organization of IBED Research
IBED research is organized in the following three themes:
Theme I: Biodiversity and Evolution
The main question of Theme I research is how patterns in biodiversity can be explained from underlying processes: speciation and extinction, dispersal and the (dis)appearance of geographical barriers, reproductive isolation and hybridisation of taxa. Modern reconstructions of the history of life on earth rely heavily on analyses of DNA data that contain the footprints of the past. Research related to human-made effects on biodiversity includes the identification of endangered biodiversity hotspots affected by global change, potential risks of an escape of transgenes from crops to wild species, and the consequences of habitat fragmentation for the viability and genetic diversity of populations and
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do scientists mean when they refer to a population?
A. all the organisms in an ecosystem
B. all the species that share similar anatomical features
C. all the animals that acquire resources through similar methods
D. all the interbreeding members of a certain species in an ecosystem
Answer:
|
|
sciq-5184
|
multiple_choice
|
What is a compound in which all of the atoms are connected to one another by covalent bonds?
|
[
"covalent bond element",
"covalent mixture",
"covalent network solid",
"compound metal"
] |
C
|
Relavent Documents:
Document 0:::
Steudel R 2020, Chemistry of the Non-metals: Syntheses - Structures - Bonding - Applications, in collaboration with D Scheschkewitz, Berlin, Walter de Gruyter, . ▲
An updated translation of the 5th German edition of 2013, incorporating the literature up to Spring 2019. Twenty-three nonmetals, including B, Si, Ge, As, Se, Te, and At but not Sb (nor Po). The nonmetals are identified on the basis of their electrical conductivity at absolute zero putatively being close to zero, rather than finite as in the case of metals. That does not work for As, however, which has the electronic structure of a semimetal (like Sb).
Halka M & Nordstrom B 2010, "Nonmetals", Facts on File, New York,
A reading level 9+ book covering H, C, N, O, P, S, Se. Complementary books by the same authors examine (a) the post-transition metals (Al, Ga, In, Tl, Sn, Pb and Bi) and metalloids (B, Si, Ge, As, Sb, Te and Po); and (b) the halogens and noble gases.
Woolins JD 1988, Non-Metal Rings, Cages and Clusters, John Wiley & Sons, Chichester, .
A more advanced text that covers H; B; C, Si, Ge; N, P, As, Sb; O, S, Se and Te.
Steudel R 1977, Chemistry of the Non-metals: With an Introduction to Atomic Structure and Chemical Bonding, English edition by FC Nachod & JJ Zuckerman, Berlin, Walter de Gruyter, . ▲
Twenty-four nonmetals, including B, Si, Ge, As, Se, Te, Po and At.
Powell P & Timms PL 1974, The Chemistry of the Non-metals, Chapman & Hall, London, . ▲
Twenty-two nonmetals including B, Si, Ge, As and Te. Tin and antimony are shown as being intermediate between metals and nonmetals; they are later shown as either metals or nonmetals. Astatine is counted as a metal.
Document 1:::
A chemical bonding model is a theoretical model used to explain atomic bonding structure, molecular geometry, properties, and reactivity of physical matter. This can refer to:
VSEPR theory, a model of molecular geometry.
Valence bond theory, which describes molecular electronic structure with localized bonds and lone pairs.
Molecular orbital theory, which describes molecular electronic structure with delocalized molecular orbitals.
Crystal field theory, an electrostatic model for transition metal complexes.
Ligand field theory, the application of molecular orbital theory to transition metal complexes.
Chemical bonding
Document 2:::
Molecular binding is an attractive interaction between two molecules that results in a stable association in which the molecules are in close proximity to each other. It is formed when atoms or molecules bind together by sharing of electrons. It often, but not always, involves some chemical bonding.
In some cases, the associations can be quite strong—for example, the protein streptavidin and the vitamin biotin have a dissociation constant (reflecting the ratio between bound and free biotin) on the order of 10⁻¹⁴—and so the reactions are effectively irreversible. The result of molecular binding is sometimes the formation of a molecular complex in which the attractive forces holding the components together are generally non-covalent, and thus are normally energetically weaker than covalent bonds.
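For a rough sense of what such dissociation constants mean, here is a small Python sketch (an added illustration, assuming simple 1:1 binding at equilibrium, where the bound fraction is [L] / (Kd + [L])):

```python
# Fraction of receptor bound at equilibrium for a 1:1 binding model.
def fraction_bound(ligand_conc_m, kd_m):
    return ligand_conc_m / (kd_m + ligand_conc_m)

# Streptavidin-biotin (Kd ~ 1e-14 M, as cited above) versus a more
# typical non-covalent complex (Kd ~ 1e-6 M), both at 1 nM free ligand:
for kd in (1e-14, 1e-6):
    print(f"Kd = {kd:.0e} M -> fraction bound = {fraction_bound(1e-9, kd):.6f}")
# The tight complex is essentially fully bound; the weak one barely is.
```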
Molecular binding occurs in biological complexes (e.g., between pairs or sets of proteins, or between a protein and a small molecule ligand it binds) and also in abiologic chemical systems, e.g. as in cases of coordination polymers and coordination networks such as metal-organic frameworks.
Types
Molecular binding can be classified into the following types:
Non-covalent – no chemical bonds are formed between the two interacting molecules hence the association is fully reversible
Reversible covalent – a chemical bond is formed; however, the free energy difference separating the noncovalently bonded reactants from the bonded product is near equilibrium, and the activation barrier is relatively low, such that the reverse reaction, which cleaves the chemical bond, occurs easily
Irreversible covalent – a chemical bond is formed in which the product is thermodynamically much more stable than the reactants such that the reverse reaction does not take place.
Bound molecules are sometimes called a "molecular complex"—the term generally refers to non-covalent associations. Non-covalent interactions can effectively become irreversible; for example, tight binding inhibitors of enzymes
Document 3:::
An intramolecular force (or primary forces) is any force that binds together the atoms making up a molecule or compound, not to be confused with intermolecular forces, which are the forces present between molecules. The subtle difference in the name comes from the Latin roots of English with inter meaning between or among and intra meaning inside. Chemical bonds are considered to be intramolecular forces which are often stronger than intermolecular forces present between non-bonding atoms or molecules.
Types
The classical model identifies three main types of chemical bonds — ionic, covalent, and metallic — distinguished by the degree of charge separation between participating atoms. The characteristics of the bond formed can be predicted by the properties of constituent atoms, namely electronegativity. They differ in the magnitude of their bond enthalpies, a measure of bond strength, and thus affect the physical and chemical properties of compounds in different ways. The percentage of ionic character is directly proportional to the difference in electronegativity of the bonded atoms.
Ionic bond
An ionic bond can be approximated as complete transfer of one or more valence electrons of atoms participating in bond formation, resulting in a positive ion and a negative ion bound together by electrostatic forces. Electrons in an ionic bond tend to be mostly found around one of the two constituent atoms due to the large electronegativity difference between the two atoms, generally more than 1.9 (a greater difference in electronegativity results in a stronger bond); this is often described as one atom giving electrons to the other. This type of bond is generally formed between a metal and a nonmetal, such as sodium and chlorine in NaCl. Sodium would give an electron to chlorine, forming a positively charged sodium ion and a negatively charged chloride ion.
Covalent bond
In a true covalent bond, the electrons are shared evenly between the two atoms of the bond; there is little or no charge separa
Document 4:::
A bonding electron is an electron involved in chemical bonding. This can refer to:
Chemical bond, a lasting attraction between atoms, ions or molecules
Covalent bond or molecular bond, a sharing of electron pairs between atoms
Bonding molecular orbital, an attraction between the atomic orbitals of atoms in a molecule
Chemical bonding
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is a compound in which all of the atoms are connected to one another by covalent bonds?
A. covalent bond element
B. covalent mixture
C. covalent network solid
D. compound metal
Answer:
|
|
sciq-1508
|
multiple_choice
|
What do some amphibians have as juveniles but not as adults living on land?
|
[
"microscopic line system",
"vertical line system",
"kinetic line system",
"lateral line system"
] |
D
|
Relavent Documents:
Document 0:::
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices".
This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions.
Topic outline
The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area:
The course is based on and tests six skills, called scientific practices which include:
In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.
Exam
Students are allowed to use a four-function, scientific, or graphing calculator.
The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score.
Score distribution
Commonly used textbooks
Biology, AP Edition by Sylvia Mader (2012, hardcover )
Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, )
Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson )
See also
Glossary of biology
A.P Bio (TV Show)
Document 1:::
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals.
Education
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
Bachelor degree
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
Pre-veterinary emphasis
Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th
Document 2:::
Roshd Biological Education is a quarterly science educational magazine covering recent developments in biology and biology education for a biology teacher Persian -speaking audience. Founded in 1985, it is published by The Teaching Aids Publication Bureau, Organization for Educational Planning and Research, Ministry of Education, Iran. Roshd Biological Education has an editorial board composed of Iranian biologists, experts in biology education, science journalists and biology teachers.
It is read by both biology teachers and students, as a way of launching innovations and new trends in biology education, and helping biology teachers to teach biology in better and more effective ways.
Magazine layout
As of Autumn 2012, the magazine is laid out as follows:
Editorial—often offering a point of view from the editor-in-chief on educational and/or biological topics.
Explore—New research methods and results on biology and/or education.
World—Reports and explorations of biology education worldwide.
In Brief—Summaries of research news and discoveries.
Trends—showing how new technology is altering the way we live our lives.
Point of View—Offering personal commentaries on contemporary topics.
Essay or Interview—often with a pioneering biological and/or educational researcher or an influential science education leader.
Muslim Biologists—Short histories of Muslim Biologists.
Environment—An article on Iranian environment and its problems.
News and Reports—Offering short news and reports events on biology education.
In Brief—Short articles explaining interesting facts.
Questions and Answers—Questions about biology concepts and their answers.
Book and periodical Reviews—About new publication on biology and/or education.
Reactions—Letter to the editors.
Editorial staff
Mohammad Karamudini, editor in chief
History
Roshd Biological Education started in 1985 together with many other magazines in other science and art. The first editor was Dr. Nouri-Dalooi, th
Document 3:::
Myomeres are blocks of skeletal muscle tissue arranged in sequence, commonly found in aquatic chordates. Myomeres are separated from adjacent myomeres by connective fascia (myosepta) and most easily seen in larval fishes or in the olm. Myomere counts are sometimes used for identifying specimens, since their number corresponds to the number of vertebrae in the adults. Location varies, with some species containing these only near the tails, while some have them located near the scapular or pelvic girdles. Depending on the species, myomeres could be arranged in an epaxial or hypaxial manner. Hypaxial refers to ventral muscles and related structures while epaxial refers to more dorsal muscles. The horizontal septum divides these two regions in vertebrates from cyclostomes to gnathostomes. In terrestrial chordates, the myomeres become fused as well as indistinct, due to the disappearance of myosepta.
Shape
The shape of myomeres varies by species: myomeres are commonly zig-zag, "V"-shaped (lancelets), "W"-shaped (fishes), or straight (tetrapods) muscle fibers. Generally, cyclostome myomeres are arranged in vertical strips, while those of jawed fishes are folded in a complex manner owing to the evolution of swimming capability. Specifically, the myomeres of elasmobranchs and eels are "W"-shaped. By contrast, the myomeres of tetrapods run vertically and do not display complex folding. Mudpuppies are another species with simply arranged myomeres. Myomeres overlap each other in succession, meaning that activation of one myomere also triggers activation of its neighbors.
Myomeres are made up of myoglobin-rich dark muscle as well as white muscle. Dark muscle, generally, functions as slow-twitch muscle fibers while white muscle is composed of fast-twitch fibers.
Function
Specifically, three types of myomeres in fish-like chordates include amphioxine (lancelet), cyclostomine (jawless fish), and gnathostomine (jawed fish). A common function shared by all of these is that they function to flex the body lateral
Document 4:::
Vertebrate zoology is the biological discipline that consists of the study of Vertebrate animals, i.e., animals with a backbone, such as fish, amphibians, reptiles, birds and mammals. Many natural history museums have departments named Vertebrate Zoology. In some cases whole museums bear this name, e.g. the Museum of Vertebrate Zoology at the University of California, Berkeley.
Subdivisions
This subdivision of zoology has many further subdivisions, including:
Ichthyology - the study of fishes.
Mammalogy - the study of mammals.
Chiropterology - the study of bats.
Primatology - the study of primates.
Ornithology - the study of birds.
Herpetology - the study of reptiles.
Batrachology - the study of amphibians.
These divisions are sometimes further divided into more specific specialties.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do some amphibians have as juveniles but not as adults living on land?
A. microscopic line system
B. vertical line system
C. kinetic line system
D. lateral line system
Answer:
|
|
sciq-6551
|
multiple_choice
|
Vertebrates also require relatively large quantities of calcium and phosphorus for building and maintaining what?
|
[
"brain cells",
"bone",
"metabolism",
"blood"
] |
B
|
Relavent Documents:
Document 0:::
Animal nutrition focuses on the dietary nutrients needs of animals, primarily those in agriculture and food production, but also in zoos, aquariums, and wildlife management.
Constituents of diet
Macronutrients (excluding fiber and water) provide structural material (amino acids from which proteins are built, and lipids from which cell membranes and some signaling molecules are built) and energy. Some of the structural material can be used to generate energy internally, though the net energy depends on such factors as absorption and digestive effort, which vary substantially from instance to instance. Vitamins, minerals, fiber, and water do not provide energy, but are required for other reasons. A third class of dietary material, fiber (i.e., non-digestible material such as cellulose), also seems to be required, for both mechanical and biochemical reasons, though the exact reasons remain unclear.
Molecules of carbohydrates and fats consist of carbon, hydrogen, and oxygen atoms. Carbohydrates range from simple monosaccharides (glucose, fructose, galactose) to complex polysaccharides (starch). Fats are triglycerides, made of assorted fatty acid monomers bound to glycerol backbone. Some fatty acids, but not all, are essential in the diet: they cannot be synthesized in the body. Protein molecules contain nitrogen atoms in addition to carbon, oxygen, and hydrogen. The fundamental components of protein are nitrogen-containing amino acids. Essential amino acids cannot be made by the animal. Some of the amino acids are convertible (with the expenditure of energy) to glucose and can be used for energy production just as ordinary glucose. By breaking down existing protein, some glucose can be produced internally; the remaining amino acids are discarded, primarily as urea in urine. This occurs normally only during prolonged starvation.
Other dietary substances found in plant foods (phytochemicals, polyphenols) are not identified as essential nutrients but appear to impact healt
Document 1:::
There is much to be discovered about the evolution of the brain and the principles that govern it. While much has been discovered, not everything currently known is well understood. The evolution of the brain has appeared to exhibit diverging adaptations within taxonomic classes such as Mammalia and more vastly diverse adaptations across other taxonomic classes.
Brain-to-body size scales allometrically: as body size changes, so do other physiological, anatomical, and biochemical constructs connecting the brain to the body. Small-bodied mammals have relatively large brains compared to their bodies, whereas large mammals (such as whales) have smaller brain-to-body ratios. If brain weight is plotted against body weight for primates, the regression line of the sample points can indicate the brain power of a primate species. Lemurs, for example, fall below this line, which means that for a primate of equivalent size, we would expect a larger brain. Humans lie well above the line, indicating that humans are more encephalized than lemurs; in fact, humans are more encephalized than all other primates. This means that human brains have exhibited a larger evolutionary increase in complexity relative to size. Some of these evolutionary changes have been found to be linked to multiple genetic factors, such as proteins and other organelles.
Early history of brain development
One approach to understanding overall brain evolution is to use a paleoarchaeological timeline to trace the necessity for ever increasing complexity in structures that allow for chemical and electrical signaling. Because brains and other soft tissues do not fossilize as readily as mineralized tissues, scientists often look to other structures as evidence in the fossil record to get an understanding of brain evolution. This, however, leads to a dilemma as the emergence of organisms with more complex nervous systems with protective bone or other protective tissues that can then
Document 2:::
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research.
Americas
Human Biology major at Stanford University, Palo Alto (since 1970)
Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government.
Human and Social Biology (Caribbean)
Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certificate (CSEC), which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on the structure and functioning (anatomy, physiology, biochemistry) of the human body and their relevance to human health, with Caribbean-specific experience. The syllabus is organized under five main sections: living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, and the impact of human activities on the environment.
Human Biology Program at University of Toronto
The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia
BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002)
BSc (honours) Human Biology at AIIMS (New
Document 3:::
The European Calcium Society is a non-profit society that aims to develop relationships between different generations of scientists in Europe working in the field of calcium signaling and the proteins involved in the Calcium Toolkit.
Origin
The First European Symposium took place in 1989 and covered calcium binding proteins in normal and transformed cells. The symposium resulted from a 30-month gestation.
The symposium filled a gap given the lack of European fora in which young European researchers could participate (the International Symposium was held in Asilomar, CA in 1986; in Nagoya in 1988; in Banff, Canada; etc.).
A European Union grant called Stimulation Action was awarded to Roland Pochet in November 1986. The symposium grew out of long discussions in 1988 at Mont Sainte-Odile between Pochet and Jacques Haiech, who pointed out the importance of European researchers in calcium-binding proteins (Hamoir, Liège, 1955; Pechère, Montpellier, 1965; Drabikowski, Warsaw, 1970), and out of the strong support received from Claus Heizmann.
History
1997 was important because the "European Calcium Society" was registered under E.U. guidelines; the E.U. had earlier rejected a proposal to finance the fourth symposium because of the society's lack of formal structure. In 1997 the group also created its first ECS Web site, logo, and newsletter, and a set of statutes published in the "Moniteur belge" as an "Arrêté Royal du 22 septembre 1997" signed by King Albert II.
1998-2005
1998-2005 was a consolidation period. Since 2000, ECS has been selected as an EU High-level Scientific Conference allowing it to offer grants to young European researchers. The board was enlarged to include Volker Gerke and Steve Moss. ECS provided posters, prizes and recently special grants for young researchers.
Youth emphasis
Since its creation, 30 to 35% of the participants at ECS symposia were young researchers (below 35 years old). Encouraging young researchers to participate has always been one of the main objectives.
Publication
Since 1992 Heizmann
Document 4:::
Relatively speaking, the brain consumes an immense amount of energy in comparison to the rest of the body. The mechanisms involved in the transfer of energy from foods to neurons are likely to be fundamental to the control of brain function. Human bodily processes, including the brain, all require both macronutrients, as well as micronutrients.
Insufficient intake of selected vitamins, or certain metabolic disorders, may affect cognitive processes by disrupting the nutrient-dependent processes within the body that are associated with the management of energy in neurons, which can subsequently affect synaptic plasticity, or the ability to encode new memories.
Macronutrients
The human brain requires nutrients obtained from the diet to develop and sustain its physical structure and cognitive functions. Additionally, the brain requires caloric energy predominately derived from the primary macronutrients to operate. The three primary macronutrients include carbohydrates, proteins, and fats. Each macronutrient can impact cognition through multiple mechanisms, including glucose and insulin metabolism, neurotransmitter actions, oxidative stress and inflammation, and the gut-brain axis. Inadequate macronutrient consumption or proportion could impair optimal cognitive functioning and have long-term health implications.
Carbohydrates
Through digestion, dietary carbohydrates are broken down and converted into glucose, which is the sole energy source for the brain. Optimal brain function relies on adequate carbohydrate consumption, as carbohydrates provide the quickest source of glucose for the brain. Glucose deficiencies such as hypoglycaemia reduce available energy for the brain and impair all cognitive processes and performance. Additionally, situations with high cognitive demand, such as learning a new task, increase brain glucose utilization, depleting blood glucose stores and initiating the need for supplementation.
Complex carbohydrates, especially those with high d
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Vertebrates also require relatively large quantities of calcium and phosphorus for building and maintaining what?
A. brain cells
B. bone
C. metabolism
D. blood
Answer:
|
|
sciq-8665
|
multiple_choice
|
What are broken down during digestion to provide the amino acids needed for protein synthesis?
|
[
"dietary proteins",
"carbohydrates",
"metabolytes",
"sugars"
] |
A
|
Relavent Documents:
Document 0:::
An essential amino acid, or indispensable amino acid, is an amino acid that cannot be synthesized from scratch by the organism fast enough to supply its demand, and must therefore come from the diet. Of the 21 amino acids common to all life forms, the nine amino acids humans cannot synthesize are valine, isoleucine, leucine, methionine, phenylalanine, tryptophan, threonine, histidine, and lysine.
Six other amino acids are considered conditionally essential in the human diet, meaning their synthesis can be limited under special pathophysiological conditions, such as prematurity in the infant or individuals in severe catabolic distress. These six are arginine, cysteine, glycine, glutamine, proline, and tyrosine. Six amino acids are non-essential (dispensable) in humans, meaning they can be synthesized in sufficient quantities in the body. These six are alanine, aspartic acid, asparagine, glutamic acid, serine, and selenocysteine (considered the 21st amino acid). Pyrrolysine (considered the 22nd amino acid), which is proteinogenic only in certain microorganisms, is not used by and therefore non-essential for most organisms, including humans.
The limiting amino acid is the essential amino acid that is furthest from meeting nutritional requirements. This concept is important when determining the selection, number, and amount of foods to consume: even when total protein and all other essential amino acid requirements are satisfied, if the limiting amino acid is not satisfied, the meal is considered nutritionally limited by that amino acid.
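The arithmetic behind identifying a limiting amino acid is simple enough to sketch in Python; the numbers below are illustrative placeholders (loosely cereal-like), not authoritative requirement values:

```python
# Find the limiting amino acid: the one with the lowest ratio of content
# in the food protein to the reference requirement (the amino acid score).
food_mg_per_g = {"lysine": 26, "methionine": 35, "threonine": 30,
                 "tryptophan": 11, "leucine": 75}      # illustrative
reference_mg_per_g = {"lysine": 45, "methionine": 22, "threonine": 23,
                      "tryptophan": 6, "leucine": 59}  # illustrative

scores = {aa: food_mg_per_g[aa] / reference_mg_per_g[aa]
          for aa in reference_mg_per_g}
limiting = min(scores, key=scores.get)
print(limiting, round(scores[limiting], 2))  # -> lysine 0.58
```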
Essentiality in humans
Of the twenty amino acids common to all life forms (not counting selenocysteine), humans cannot synthesize nine: histidine, isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan and valine. Additionally, the amino acids arginine, cysteine, glutamine, glycine, proline and tyrosine are considered conditionally essential, which means that specific populations who do not synthesize it i
Document 1:::
In molecular biology, protein catabolism is the breakdown of proteins into smaller peptides and ultimately into amino acids. Protein catabolism is a key function of digestion process. Protein catabolism often begins with pepsin, which converts proteins into polypeptides. These polypeptides are then further degraded. In humans, the pancreatic proteases include trypsin, chymotrypsin, and other enzymes. In the intestine, the small peptides are broken down into amino acids that can be absorbed into the bloodstream. These absorbed amino acids can then undergo amino acid catabolism, where they are utilized as an energy source or as precursors to new proteins.
The amino acids produced by catabolism may be directly recycled to form new proteins, converted into different amino acids, or can undergo amino acid catabolism to be converted to other compounds via the Krebs cycle.
Interface with other metabolic and salvage pathways
Protein catabolism produces amino acids that are used to form bacterial proteins or oxidized to meet the energy needs of the cell. The amino acids that are produced by protein catabolism can then be further catabolized in amino acid catabolism. Among the several degradative processes for amino acids are deamination (removal of an amino group), transamination (transfer of an amino group), decarboxylation (removal of a carboxyl group), and dehydrogenation (removal of hydrogen). Degradation of amino acids can function as part of a salvage pathway, whereby parts of degraded amino acids are used to create new amino acids, or as part of a metabolic pathway whereby the amino acid is broken down to release or recapture chemical energy. For example, the chemical energy that is released by oxidation in a dehydrogenation reaction can be used to reduce NAD+ to NADH, which can then be fed directly into the Krebs/Citric Acid (TCA) Cycle.
Protein degradation
Protein degradation differs from protein catabolism. Proteins are produced and destroyed routinely as par
Document 2:::
Animal nutrition focuses on the dietary nutrients needs of animals, primarily those in agriculture and food production, but also in zoos, aquariums, and wildlife management.
Constituents of diet
Macronutrients (excluding fiber and water) provide structural material (amino acids from which proteins are built, and lipids from which cell membranes and some signaling molecules are built) and energy. Some of the structural material can be used to generate energy internally, though the net energy depends on such factors as absorption and digestive effort, which vary substantially from instance to instance. Vitamins, minerals, fiber, and water do not provide energy, but are required for other reasons. A third class of dietary material, fiber (i.e., non-digestible material such as cellulose), also seems to be required, for both mechanical and biochemical reasons, though the exact reasons remain unclear.
Molecules of carbohydrates and fats consist of carbon, hydrogen, and oxygen atoms. Carbohydrates range from simple monosaccharides (glucose, fructose, galactose) to complex polysaccharides (starch). Fats are triglycerides, made of assorted fatty acid monomers bound to glycerol backbone. Some fatty acids, but not all, are essential in the diet: they cannot be synthesized in the body. Protein molecules contain nitrogen atoms in addition to carbon, oxygen, and hydrogen. The fundamental components of protein are nitrogen-containing amino acids. Essential amino acids cannot be made by the animal. Some of the amino acids are convertible (with the expenditure of energy) to glucose and can be used for energy production just as ordinary glucose. By breaking down existing protein, some glucose can be produced internally; the remaining amino acids are discarded, primarily as urea in urine. This occurs normally only during prolonged starvation.
Other dietary substances found in plant foods (phytochemicals, polyphenols) are not identified as essential nutrients but appear to impact healt
Document 3:::
This is a list of articles that describe particular biomolecules or types of biomolecules.
A
For substances with an A- or α- prefix such as
α-amylase, please see the parent page (in this case Amylase).
A23187 (Calcimycin, Calcium Ionophore)
Abamectine
Abietic acid
Acetic acid
Acetylcholine
Actin
Actinomycin D
Adenine
Adenosine
Adenosine diphosphate (ADP)
Adenosine monophosphate (AMP)
Adenosine triphosphate (ATP)
Adenylate cyclase
Adiponectin
Adonitol
Adrenaline, epinephrine
Adrenocorticotropic hormone (ACTH)
Aequorin
Aflatoxin
Agar
Alamethicin
Alanine
Albumins
Aldosterone
Aleurone
Alpha-amanitin
Alpha-MSH (Melaninocyte stimulating hormone)
Allantoin
Allethrin
α-Amanatin, see Alpha-amanitin
Amino acid
Amylase (also see α-amylase)
Anabolic steroid
Anandamide (ANA)
Androgen
Anethole
Angiotensinogen
Anisomycin
Antidiuretic hormone (ADH)
Anti-Müllerian hormone (AMH)
Arabinose
Arginine
Argonaute
Ascomycin
Ascorbic acid (vitamin C)
Asparagine
Aspartic acid
Asymmetric dimethylarginine
ATP synthase
Atrial-natriuretic peptide (ANP)
Auxin
Avidin
Azadirachtin A – C35H44O16
B
Bacteriocin
Beauvericin
beta-Hydroxy beta-methylbutyric acid
beta-Hydroxybutyric acid
Bicuculline
Bilirubin
Biopolymer
Biotin (Vitamin H)
Brefeldin A
Brassinolide
Brucine
Butyric acid
C
Document 4:::
Protein digestibility refers to how well a given protein is digested. Along with the amino acid score, protein digestibility determines the values for PDCAAS and DIAAS.
See also
Biological value
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are broken down during digestion to provide the amino acids needed for protein synthesis?
A. dietary proteins
B. carbohydrates
C. metabolytes
D. sugars
Answer:
|
|
sciq-10034
|
multiple_choice
|
When resources become limiting, populations follow a logistic growth curve in which the size will level off at a point called what?
|
[
"containing capacity",
"full capacity",
"carrying capacity",
"believed capacity"
] |
C
|
Relavent Documents:
Document 0:::
The Hubbert curve is an approximation of the production rate of a resource over time. It is a symmetric logistic distribution curve, often confused with the "normal" gaussian function. It first appeared in "Nuclear Energy and the Fossil Fuels," geologist M. King Hubbert's 1956 presentation to the American Petroleum Institute, as an idealized symmetric curve, during his tenure at the Shell Oil Company. It has gained a high degree of popularity in the scientific community for predicting the depletion of various natural resources. The curve is the main component of Hubbert peak theory, which has led to the rise of peak oil concerns. Basing his calculations on the peak of oil well discovery in 1948, Hubbert used his model in 1956 to create a curve which predicted that oil production in the contiguous United States would peak around 1970.
Shape
The prototypical Hubbert curve is a probability density function of a logistic distribution curve. It is not a gaussian function (which is used to plot normal distributions), but the two have a similar appearance. The density of a Hubbert curve approaches zero more slowly than a gaussian function:

x = e^(−t) / (1 + e^(−t))²
The graph of a Hubbert curve consists of three key elements:
a gradual rise from zero resource production that then increases quickly
a "Hubbert peak", representing the maximum production level
a drop from the peak that is then followed by a steep production decline.
The actual shape of a graph of real world production trends is determined by various factors, such as development of enhanced production techniques, availability of competing resources, and government regulations on production or consumption. Because of such factors, real world Hubbert curves are often not symmetrical.
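For readers who want to experiment, the idealized symmetric curve is just the derivative of a logistic, scaled so that it integrates to the ultimately recoverable quantity. The Python sketch below is an added illustration with invented parameter values, not Hubbert's actual fit:

```python
import math

# Idealized Hubbert production rate: derivative of a logistic, peaking
# at t_peak and integrating (over all t) to q_max.
def hubbert_rate(t, q_max=200.0, growth=0.15, t_peak=1970.0):
    e = math.exp(-growth * (t - t_peak))
    return q_max * growth * e / (1.0 + e) ** 2

for year in (1950, 1970, 1990):
    print(year, round(hubbert_rate(year), 3))
# Output is maximal at t_peak and symmetric about it.
```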
Application
Peak oil
Using the curve, Hubbert modeled the rate of petroleum production for several regions, determined by the rate of new oil well discovery, and extrapolated a world production curve. The relative steepness of decline in this proje
Document 1:::
A logistic function or logistic curve is a common S-shaped curve (sigmoid curve) with the equation

$$f(x) = \frac{L}{1 + e^{-k(x - x_0)}}$$

where $L$ is the supremum of the values of the function, $k$ is the logistic growth rate (the steepness of the curve), and $x_0$ is the $x$ value of the function's midpoint.
For values of $x$ in the domain of real numbers from $-\infty$ to $+\infty$, the familiar S-curve is obtained, with the graph of $f$ approaching $L$ as $x$ approaches $+\infty$ and approaching zero as $x$ approaches $-\infty$.
The logistic function finds applications in a range of fields, including biology (especially ecology), biomathematics, chemistry, demography, economics, geoscience, mathematical psychology, probability, sociology, political science, linguistics, statistics, and artificial neural networks. A generalization of the logistic function is the hyperbolastic function of type I.
The standard logistic function, where $L = 1$, $k = 1$, $x_0 = 0$, is sometimes simply called the sigmoid. It is also sometimes called the expit, being the inverse of the logit.
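A minimal sketch of the function as defined above; the parameter defaults give the standard logistic (sigmoid):

```python
import math

def logistic(x: float, L: float = 1.0, k: float = 1.0, x0: float = 0.0) -> float:
    """General logistic function f(x) = L / (1 + exp(-k * (x - x0)))."""
    return L / (1.0 + math.exp(-k * (x - x0)))

print(logistic(0.0))   # 0.5: the midpoint value L/2
print(logistic(6.0))   # ~0.9975: approaching the supremum L = 1
print(logistic(-6.0))  # ~0.0025: approaching zero
```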
History
The logistic function was introduced in a series of three papers by Pierre François Verhulst between 1838 and 1847, who devised it as a model of population growth by adjusting the exponential growth model, under the guidance of Adolphe Quetelet. Verhulst first devised the function in the mid 1830s, publishing a brief note in 1838, then presented an expanded analysis and named the function in 1844 (published 1845); the third paper adjusted the correction term in his model of Belgian population growth.
The initial stage of growth is approximately exponential (geometric); then, as saturation begins, the growth slows to linear (arithmetic), and at maturity, growth stops.
Verhulst did not explain the choice of the term "logistic" (French: logistique), but it is presumably in contrast to the logarithmic curve, and by analogy with arithmetic and geometric. His growth model is preceded by a discussion of arithmetic growth and geometric growth (whose curve he calls a logarithmic curve, instead of the modern term exponential curve), and thus "logistic growth" is presumably named by analogy, logistic being from Ancient Greek λογιστικός (logistikós), a traditional division of Greek mathematics.
The term is unrela
Document 2:::
Bounded growth occurs when a mathematical function increases at a continually decreasing rate, so that asymptotically it approaches a fixed value. This contrasts with exponential growth, which increases at an accelerating rate and therefore approaches infinity in the limit.
An example of bounded growth is the logistic function.
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 4:::
The Limits to Growth (LTG) is a 1972 report that discussed the possibility of exponential economic and population growth with finite supply of resources, studied by computer simulation. The study used the World3 computer model to simulate the consequence of interactions between the Earth and human systems. The model was based on the work of Jay Forrester of MIT, as described in his book World Dynamics.
Commissioned by the Club of Rome, the findings of the study were first presented at international gatherings in Moscow and Rio de Janeiro in the summer of 1971. The report's authors are Donella H. Meadows, Dennis L. Meadows, Jørgen Randers, and William W. Behrens III, representing a team of 17 researchers.
The report's findings suggest that in the absence of significant alterations in resource utilization, it is highly likely that there would be an abrupt and unmanageable decrease in both population and industrial capacity. Despite facing severe criticism and scrutiny upon its initial release, subsequent research aimed at verifying its predictions consistently supports the notion that there have been inadequate modifications made since 1972 to substantially alter its essence.
Since its publication, some 30 million copies of the book in 30 languages have been purchased. It continues to generate debate and has been the subject of several subsequent publications.
Beyond the Limits and The Limits to Growth: The 30-Year Update were published in 1992 and 2004 respectively. In 2012, a 40-year forecast from Jørgen Randers, one of the book's original authors, was published as 2052: A Global Forecast for the Next Forty Years, and in 2022 two of the original Limits to Growth authors, Dennis Meadows and Jørgen Randers, joined 19 other contributors to produce Limits and Beyond.
Purpose
In commissioning the MIT team to undertake the project that resulted in LTG, the Club of Rome had three objectives:
Gain insights into the limits of our world system and the constraints it put
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
When resources become limiting, populations follow a logistic growth curve in which the size will level off at a point called what?
A. containing capacity
B. full capacity
C. carrying capacity
D. believed capacity
Answer:
|
|
sciq-5837
|
multiple_choice
|
How many groups of leaves does poison ivy typically have?
|
[
"three",
"ten",
"six",
"four"
] |
A
|
Relevant Documents:
Document 0:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam with no computer-based version. ETS administered it three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required the score as part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99th percentile) and 320 (1st percentile) respectively. The mean score for all test takers from July 2009 to July 2012 was 526 with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 1:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools to which the student planned to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological and molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average score for Molecular was 630, while for Ecological it was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
Poison ivy is a type of allergenic plant in the genus Toxicodendron native to Asia and North America. Formerly considered a single species, Toxicodendron radicans, poison ivies are now generally treated as a complex of three separate species: Toxicodendron radicans, Toxicodendron rydbergii, and Toxicodendron orientale. They are well known for causing urushiol-induced contact dermatitis, an itchy, irritating, and sometimes painful rash, in most people who touch them. The rash is caused by urushiol, a clear liquid compound in the plant's sap. They are variable in appearance and habit, and despite their common name, they are not "true" ivies (Hedera), but rather members of the cashew and pistachio family (Anacardiaceae). T. radicans is commonly eaten by many animals, and the seeds are consumed by birds, but poison ivy is most often thought of as an unwelcome weed.
Species
Three species of poison ivy are generally recognised; they are sometimes considered subspecies of Toxicodendron radicans:
Toxicodendron orientale: found in East Asia.
Toxicodendron radicans: found throughout eastern Canada and the United States, Mexico and Central America, Bermuda and the Bahamas.
Toxicodendron rydbergii: found throughout Canada and much of the United States except the southeast.
Description
Poison ivies can grow as small plants, shrubs, or climbing vines. They are commonly characterized by clusters of leaves, each containing three leaflets, hence the common expression "leaves of three, let it be". These leaves can vary from an elliptic to an egg shape and will have either smooth, lobed, or toothed margins. Additionally, the leaf clusters are alternate on the stem. Clusters of small, greenish flowers bloom from May to July and produce white berries, a few millimeters in diameter, in the fall.
Health effects
Urushiol-induced contact dermatitis is the allergic reaction caused by poison ivy. In extreme cases, a reaction can progress to anaphylaxis. Around 15 to 25 percent of people ha
Document 4:::
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices".
This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as reflected in its exam grade distributions.
Topic outline
The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area:
The course is based on and tests six skills, called scientific practices which include:
In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.
Exam
Students are allowed to use a four-function, scientific, or graphing calculator.
The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score.
Score distribution
Commonly used textbooks
Biology, AP Edition by Sylvia Mader (2012, hardcover )
Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, )
Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson )
See also
Glossary of biology
A.P Bio (TV Show)
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How many groups of leaves does poison ivy typically have?
A. three
B. ten
C. six
D. four
Answer:
|
|
sciq-8151
|
multiple_choice
|
What is the process in which a liquid boils and changes to a gas?
|
[
"vaporization",
"sublimation",
"freezing",
"melting"
] |
A
|
Relevant Documents:
Document 0:::
Boiling is the rapid phase transition from liquid to gas or vapor; the reverse of boiling is condensation. Boiling occurs when a liquid is heated to its boiling point, so that the vapour pressure of the liquid is equal to the pressure exerted on the liquid by the surrounding atmosphere. Boiling and evaporation are the two main forms of liquid vapourization.
There are two main types of boiling: nucleate boiling where small bubbles of vapour form at discrete points, and critical heat flux boiling where the boiling surface is heated above a certain critical temperature and a film of vapour forms on the surface. Transition boiling is an intermediate, unstable form of boiling with elements of both types. The boiling point of water is 100 °C or 212 °F but is lower with the decreased atmospheric pressure found at higher altitudes.
Boiling water is used as a method of making it potable by killing microbes and viruses that may be present. The sensitivity of different micro-organisms to heat varies, but if water is held at 100 °C (212 °F) for one minute, most micro-organisms and viruses are inactivated. Ten minutes at a temperature of 70 °C (158 °F) is also sufficient to inactivate most bacteria.
Boiling water is also used in several cooking methods including boiling, steaming, and poaching.
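The altitude dependence of the boiling point mentioned above can be estimated numerically. The sketch below uses the Antoine equation for water, a standard vapor-pressure correlation not taken from this excerpt; the constants are the commonly tabulated set, valid roughly between 1 and 100 °C:

```python
import math

# Antoine equation: log10(P) = A - B / (C + T), with P in mmHg, T in degC.
# Solving for T gives the boiling temperature at ambient pressure P.
A, B, C = 8.07131, 1730.63, 233.426  # commonly tabulated constants for water

def boiling_point_c(pressure_mmhg: float) -> float:
    """Estimated boiling point of water (degC) at the given pressure."""
    return B / (A - math.log10(pressure_mmhg)) - C

print(f"{boiling_point_c(760):.1f} degC at sea level (760 mmHg)")
print(f"{boiling_point_c(600):.1f} degC at ~600 mmHg (roughly 2000 m altitude)")
```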
Types
Free convection
The lowest heat flux seen in boiling is only sufficient to cause natural convection, where the warmer fluid rises due to its slightly lower density. This condition occurs only when the superheat is very low, meaning that the hot surface near the fluid is nearly the same temperature as the boiling point.
Nucleate
Nucleate boiling is characterised by the growth of bubbles or pops on a heated surface (heterogeneous nucleation), which rise from discrete points on a surface whose temperature is only slightly above the temperature of the liquid. In general, the number of nucleation sites is increased by an increasing surface temperature.
An irregular surface of the boiling
Document 1:::
Vaporization (or vaporisation) of an element or compound is a phase transition from the liquid phase to vapor. There are two types of vaporization: Evaporation and boiling. Evaporation is a surface phenomenon, where as boiling is a bulk phenomenon.
Evaporation is a phase transition from the liquid phase to vapor (a state of substance below critical temperature) that occurs at temperatures below the boiling temperature at a given pressure. Evaporation occurs on the surface. Evaporation only occurs when the partial pressure of vapor of a substance is less than the equilibrium vapor pressure. For example, due to constantly decreasing pressures, vapor pumped out of a solution will eventually leave behind a cryogenic liquid.
Boiling is also a phase transition from the liquid phase to gas phase, but boiling is the formation of vapor as bubbles of vapor below the surface of the liquid. Boiling occurs when the equilibrium vapor pressure of the substance is greater than or equal to the atmospheric pressure. The temperature at which boiling occurs is the boiling temperature, or boiling point. The boiling point varies with the pressure of the environment.
Sublimation is a direct phase transition from the solid phase to the gas phase, skipping the intermediate liquid phase. Because it does not involve the liquid phase, it is not a form of vaporization.
The term vaporization has also been used in a colloquial or hyperbolic way to refer to the physical destruction of an object that is exposed to intense heat or explosive force, where the object is actually blasted into small pieces rather than literally converted to gaseous form. Examples of this usage include the "vaporization" of the uninhabited Marshall Island of Elugelab in the 1952 Ivy Mike thermonuclear test. Many other examples can be found throughout the various MythBusters episodes that have involved explosives, chief among them being Cement Mix-Up, where they "vaporized" a cement truck with ANFO.
At the moment o
Document 2:::
In materials science, liquefaction is a process that generates a liquid from a solid or a gas or that generates a non-liquid phase which behaves in accordance with fluid dynamics.
It occurs both naturally and artificially. As an example of the latter, a "major commercial application of liquefaction is the liquefaction of air to allow separation of the constituents, such as oxygen, nitrogen, and the noble gases." Another is the conversion of solid coal into a liquid form usable as a substitute for liquid fuels.
Geology
In geology, soil liquefaction refers to the process by which water-saturated, unconsolidated sediments are transformed into a substance that acts like a liquid, often in an earthquake. Soil liquefaction was blamed for building collapses in the city of Palu, Indonesia in October 2018.
In a related phenomenon, liquefaction of bulk materials in cargo ships may cause a dangerous shift in the load.
Physics and chemistry
In physics and chemistry, the phase transitions from solid and gas to liquid (melting and condensation, respectively) may be referred to as liquefaction. The melting point (sometimes called liquefaction point) is the temperature and pressure at which a solid becomes a liquid. In commercial and industrial situations, the process of condensing a gas to liquid is sometimes referred to as liquefaction of gases.
Coal
Coal liquefaction is the production of liquid fuels from coal using a variety of industrial processes.
Dissolution
Liquefaction is also used in commercial and industrial settings to refer to mechanical dissolution of a solid by mixing, grinding or blending with a liquid.
Food preparation
In kitchen or laboratory settings, solids may be chopped into smaller parts sometimes in combination with a liquid, for example in food preparation or laboratory use. This may be done with a blender, or liquidiser in British English.
Irradiation
Liquefaction of silica and silicate glasses occurs on electron beam irradiation of nanos
Document 3:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 4:::
A boiling chip, boiling stone, porous bit, or anti-bumping granule is a tiny, unevenly shaped piece of substance added to liquids to make them boil more calmly. Boiling chips are frequently employed in distillation and heating. When a liquid becomes superheated, a speck of dust or a stirring rod can cause violent flash boiling. Boiling chips provide nucleation sites so the liquid boils smoothly without becoming superheated or bumping.
Use
Boiling chips should not be added to liquid that is already near its boiling point, as this could also induce flash boiling.
The structure of a boiling chip traps liquid while in use, so chips cannot be re-used in laboratory setups. They also do not work well under vacuum; if a solution is boiled under vacuum, it is best to stir it constantly instead.
Materials
Boiling chips are typically made of a porous material, such as alumina, silicon carbide, calcium carbonate, calcium sulfate, porcelain or carbon, and often have a nonreactive coating of PTFE. This ensures that the boiling chips will provide effective nucleation sites, yet are chemically inert. In less demanding situations, like school laboratories, pieces of broken porcelainware or glassware are often used.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the process in which a liquid boils and changes to a gas?
A. vaporization
B. sublimation
C. freezing
D. melting
Answer:
|
|
sciq-4052
|
multiple_choice
|
Name the two types of nucleic acids.
|
[
"dna (deoxyribonucleic acid) and rna (ribonucleic acid)",
"dna ( trigraph acid ) and rna ( ribonucleic acid )",
"isoleucin and leucine",
"lysine and methionine"
] |
A
|
Relevant Documents:
Document 0:::
A nucleic acid sequence is a succession of bases within the nucleotides forming alleles within a DNA (using GACT) or RNA (GACU) molecule. This succession is denoted by a series of a set of five different letters that indicate the order of the nucleotides. By convention, sequences are usually presented from the 5' end to the 3' end. For DNA, with its double helix, there are two possible directions for the notated sequence; of these two, the sense strand is used. Because nucleic acids are normally linear (unbranched) polymers, specifying the sequence is equivalent to defining the covalent structure of the entire molecule. For this reason, the nucleic acid sequence is also termed the primary structure.
The sequence represents biological information. Biological deoxyribonucleic acid represents the information which directs the functions of an organism.
Nucleic acids also have a secondary structure and tertiary structure. Primary structure is sometimes mistakenly referred to as "primary sequence". However there is no parallel concept of secondary or tertiary sequence.
Nucleotides
Nucleic acids consist of a chain of linked units called nucleotides. Each nucleotide consists of three subunits: a phosphate group and a sugar (ribose in the case of RNA, deoxyribose in DNA) make up the backbone of the nucleic acid strand, and attached to the sugar is one of a set of nucleobases. The nucleobases are important in base pairing of strands to form higher-level secondary and tertiary structures such as the famed double helix.
The possible letters are A, C, G, and T, representing the four nucleotide bases of a DNA strand – adenine, cytosine, guanine, thymine – covalently linked to a phosphodiester backbone. In the typical case, the sequences are printed abutting one another without gaps, as in the sequence AAAGTCTGAC, read left to right in the 5' to 3' direction. With regards to transcription, a sequence is on the coding strand if it has the same order as the transcribed RNA.
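The 5'-to-3' conventions above are easy to mechanize; a minimal sketch follows (the helper names are invented, and the example sequence is the one quoted above):

```python
# Transcription notation: the RNA transcript keeps the coding strand's
# order, with thymine (T) replaced by uracil (U).
DNA_TO_RNA = str.maketrans("GACT", "GACU")
PAIRING = str.maketrans("GACT", "CTGA")  # G-C and A-T base pairs

def transcript_of(coding_strand: str) -> str:
    """RNA with the same 5'->3' order as the DNA coding strand."""
    return coding_strand.upper().translate(DNA_TO_RNA)

def reverse_complement(strand: str) -> str:
    """The paired strand, read 5'->3' (hence the reversal)."""
    return strand.upper().translate(PAIRING)[::-1]

seq = "AAAGTCTGAC"               # written 5'->3' by convention
print(transcript_of(seq))        # AAAGUCUGAC
print(reverse_complement(seq))   # GTCAGACTTT
```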
Document 1:::
Experimental approaches of determining the structure of nucleic acids, such as RNA and DNA, can be largely classified into biophysical and biochemical methods. Biophysical methods use the fundamental physical properties of molecules for structure determination, including X-ray crystallography, NMR and cryo-EM. Biochemical methods exploit the chemical properties of nucleic acids using specific reagents and conditions to assay the structure of nucleic acids. Such methods may involve chemical probing with specific reagents, or rely on native or analogue chemistry. Different experimental approaches have unique merits and are suitable for different experimental purposes.
Biophysical methods
X-ray crystallography
X-ray crystallography is not common for nucleic acids alone, since neither DNA nor RNA readily form crystals. This is due to the greater degree of intrinsic disorder and dynamism in nucleic acid structures and the negatively charged (deoxy)ribose-phosphate backbones, which repel each other in close proximity. Therefore, crystallized nucleic acids tend to be complexed with a protein of interest to provide structural order and neutralize the negative charge.
Nuclear magnetic resonance spectroscopy (NMR)
Nucleic acid NMR is the use of NMR spectroscopy to obtain information about the structure and dynamics of nucleic acid molecules, such as DNA or RNA. As of 2003, nearly half of all known RNA structures had been determined by NMR spectroscopy.
Nucleic acid NMR uses similar techniques as protein NMR, but has several differences. Nucleic acids have a smaller percentage of hydrogen atoms, which are the atoms usually observed in NMR, and because nucleic acid double helices are stiff and roughly linear, they do not fold back on themselves to give "long-range" correlations. The types of NMR usually done with nucleic acids are 1H or proton NMR, 13C NMR, 15N NMR, and 31P NMR. Two-dimensional NMR methods are almost always used, such as correlation spectroscopy (COSY
Document 2:::
Nucleic acid analogues are compounds which are analogous (structurally similar) to naturally occurring RNA and DNA, used in medicine and in molecular biology research.
Nucleic acids are chains of nucleotides, which are composed of three parts: a phosphate backbone, a pentose sugar, either ribose or deoxyribose, and one of four nucleobases.
An analogue may have any of these altered. Typically the analogue nucleobases confer, among other things, different base pairing and base stacking properties. Examples include universal bases, which can pair with all four canonical bases, and phosphate-sugar backbone analogues such as PNA, which affect the properties of the chain (PNA can even form a triple helix).
Nucleic acid analogues are also called Xeno Nucleic Acid and represent one of the main pillars of xenobiology, the design of new-to-nature forms of life based on alternative biochemistries.
Artificial nucleic acids include peptide nucleic acid (PNA), Morpholino and locked nucleic acid (LNA), as well as glycol nucleic acid (GNA), threose nucleic acid (TNA) and hexitol nucleic acids (HNA). Each of these is distinguished from naturally occurring DNA or RNA by changes to the backbone of the molecule.
In May 2014, researchers announced that they had successfully introduced two new artificial nucleotides into bacterial DNA, and by including individual artificial nucleotides in the culture media, were able to passage the bacteria 24 times; they did not create mRNA or proteins able to use the artificial nucleotides. The artificial nucleotides featured 2 fused aromatic rings.
Medicine
Several nucleoside analogues are used as antiviral or anticancer agents. The viral polymerase incorporates these compounds with non-canonical bases. These compounds are activated in the cells by being converted into nucleotides, they are administered as nucleosides since charged nucleotides cannot easily cross cell membranes.
Molecular biology
Nucleic acid analogues are used in molecular b
Document 3:::
"Desoxyribonucleic acid" and "desoxyribonucleate" are archaic terms for DNA, deoxyribonucleic acid, and its salts, respectively. The terms are used in this sense in various classic papers in genetics, such as Avery, MacLeod, and McCarty (1944).
Document 4:::
Biomolecular structure is the intricate folded, three-dimensional shape that is formed by a molecule of protein, DNA, or RNA, and that is important to its function. The structure of these molecules may be considered at any of several length scales ranging from the level of individual atoms to the relationships among entire protein subunits. This useful distinction among scales is often expressed as a decomposition of molecular structure into four levels: primary, secondary, tertiary, and quaternary. The scaffold for this multiscale organization of the molecule arises at the secondary level, where the fundamental structural elements are the molecule's various hydrogen bonds. This leads to several recognizable domains of protein structure and nucleic acid structure, including such secondary-structure features as alpha helixes and beta sheets for proteins, and hairpin loops, bulges, and internal loops for nucleic acids.
The terms primary, secondary, tertiary, and quaternary structure were introduced by Kaj Ulrik Linderstrøm-Lang in his 1951 Lane Medical Lectures at Stanford University.
Primary structure
The primary structure of a biopolymer is the exact specification of its atomic composition and the chemical bonds connecting those atoms (including stereochemistry). For a typical unbranched, un-crosslinked biopolymer (such as a molecule of a typical intracellular protein, or of DNA or RNA), the primary structure is equivalent to specifying the sequence of its monomeric subunits, such as amino acids or nucleotides.
The primary structure of a protein is reported starting from the amino N-terminus to the carboxyl C-terminus, while the primary structure of DNA or RNA molecule is known as the nucleic acid sequence reported from the 5' end to the 3' end.
The nucleic acid sequence refers to the exact sequence of nucleotides that comprise the whole molecule. Often, the primary structure encodes sequence motifs that are of functional importance. Some examples of such motif
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Name the two types of nucleic acids.
A. dna (deoxyribonucleic acid) and rna (ribonucleic acid)
B. dna ( trigraph acid ) and rna ( ribonucleic acid )
C. isoleucin and leucine
D. lysine and methionine
Answer:
|
|
scienceQA-3647
|
multiple_choice
|
What do these two changes have in common?
dust settling out of the air
a puddle freezing into ice on a cold night
|
[
"Both are caused by cooling.",
"Both are only physical changes.",
"Both are chemical changes.",
"Both are caused by heating."
] |
B
|
Step 1: Think about each change.
Dust settling out of the air is a physical change. As the dust settles, or falls, it might land on furniture or the ground. This separates dust particles from the air, but does not form a different type of matter.
A puddle freezing into ice on a cold night is a change of state. So, it is a physical change. Liquid water freezes and becomes solid, but it is still made of water. A different type of matter is not formed.
Step 2: Look at each answer choice.
Both are only physical changes.
Both changes are physical changes. No new matter is created.
Both are chemical changes.
Both changes are physical changes. They are not chemical changes.
Both are caused by heating.
Neither change is caused by heating.
Both are caused by cooling.
A puddle freezing is caused by cooling. But dust settling out of the air is not.
|
Relevant Documents:
Document 0:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
The Stefan flow, occasionally called Stefan's flow, is a transport phenomenon concerning the movement of a chemical species by a flowing fluid (typically in the gas phase) that is induced to flow by the production or removal of the species at an interface. Any process that adds the species of interest to or removes it from the flowing fluid may cause the Stefan flow, but the most common processes include evaporation, condensation, chemical reaction, sublimation, ablation, adsorption, absorption, and desorption. It was named after the Slovenian physicist, mathematician, and poet Josef Stefan for his early work on calculating evaporation rates.
The Stefan flow is distinct from diffusion as described by Fick's law, but diffusion almost always also occurs in multi-species systems that are experiencing the Stefan flow. In systems undergoing one of the species addition or removal processes mentioned previously, the addition or removal generates a mean flow in the flowing fluid as the fluid next to the interface is displaced by the production or removal of additional fluid by the processes occurring at the interface. The transport of the species by this mean flow is the Stefan flow. When concentration gradients of the species are also present, diffusion transports the species relative to the mean flow. The total transport rate of the species is then given by a summation of the Stefan flow and diffusive contributions.
An example of the Stefan flow occurs when a droplet of liquid evaporates in air. In this case, the vapor/air mixture surrounding the droplet is the flowing fluid, and liquid/vapor boundary of the droplet is the interface. As heat is absorbed by the droplet from the environment, some of the liquid evaporates into vapor at the surface of the droplet, and flows away from the droplet as it is displaced by additional vapor evaporating from the droplet. This process causes the flowing medium to move away from the droplet at some mean speed that is dependent on
Document 3:::
In physics, a dynamical system is said to be mixing if the phase space of the system becomes strongly intertwined, according to at least one of several mathematical definitions. For example, a measure-preserving transformation T is said to be strong mixing if

$$\lim_{n \to \infty} \mu\left(T^{n}A \cap B\right) = \mu(A)\,\mu(B)$$

whenever A and B are any measurable sets and μ is the associated measure. Other definitions are possible, including weak mixing and topological mixing.
The mathematical definition of mixing is meant to capture the notion of physical mixing. A canonical example is the Cuba libre: suppose one is adding rum (the set A) to a glass of cola. After stirring the glass, the bottom half of the glass (the set B) will contain rum, and it will be in equal proportion as it is elsewhere in the glass. The mixing is uniform: no matter which region B one looks at, some of A will be in that region. A far more detailed, but still informal description of mixing can be found in the article on mixing (mathematics).
Every mixing transformation is ergodic, but there are ergodic transformations which are not mixing.
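A numerical illustration, not from the excerpt: the doubling map T(x) = 2x mod 1 on [0, 1) is strong mixing, and for interval sets the joint measure can be estimated by Monte Carlo. For this non-invertible map the condition is checked in its pre-image form, μ(T⁻ⁿA ∩ B) → μ(A)μ(B):

```python
import random

def T(x: float) -> float:
    """Doubling map on [0, 1), a standard strong-mixing transformation."""
    return (2.0 * x) % 1.0

def joint_measure(n: int, A, B, samples: int = 200_000) -> float:
    """Monte Carlo estimate of mu(T^-n A intersected with B) for intervals."""
    hits = 0
    for _ in range(samples):
        x = random.random()
        y = x
        for _ in range(n):   # y = T^n(x)
            y = T(y)
        if B[0] <= x < B[1] and A[0] <= y < A[1]:
            hits += 1
    return hits / samples

A, B = (0.0, 0.3), (0.5, 0.9)         # mu(A) * mu(B) = 0.3 * 0.4 = 0.12
for n in (0, 1, 5, 10):
    print(n, joint_measure(n, A, B))  # tends toward ~0.12 as n grows
```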
Physical mixing
The mixing of gases or liquids is a complex physical process, governed by a convective diffusion equation that may involve non-Fickian diffusion as in spinodal decomposition. The convective portion of the governing equation contains fluid motion terms that are governed by the Navier–Stokes equations. When fluid properties such as viscosity depend on composition, the governing equations may be coupled. There may also be temperature effects. It is not clear that fluid mixing processes are mixing in the mathematical sense.
Small rigid objects (such as rocks) are sometimes mixed in a rotating drum or tumbler. The 1969 Selective Service draft lottery was carried out by mixing plastic capsules which contained a slip of paper (marked with a day of the year).
See also
Miscibility
Document 4:::
In chemistry, coalescence is a process in which two phase domains of the same composition come together and form a larger phase domain. In other words, it is the process by which two or more separate masses of miscible substances seem to "pull" each other together should they make the slightest contact.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
dust settling out of the air
a puddle freezing into ice on a cold night
A. Both are caused by cooling.
B. Both are only physical changes.
C. Both are chemical changes.
D. Both are caused by heating.
Answer:
|
sciq-10989
|
multiple_choice
|
What type of polarization does a negative object create?
|
[
"simple polarization",
"negative polarization",
"opposite polarization",
"common polarization"
] |
C
|
Relevant Documents:
Document 0:::
The Stokes parameters are a set of values that describe the polarization state of electromagnetic radiation. They were defined by George Gabriel Stokes in 1852 (S. Chandrasekhar, Radiative Transfer, Dover Publications, New York, 1960, p. 25) as a mathematically convenient alternative to the more common description of incoherent or partially polarized radiation in terms of its total intensity (I), (fractional) degree of polarization (p), and the shape parameters of the polarization ellipse. The effect of an optical system on the polarization of light can be determined by constructing the Stokes vector for the input light and applying Mueller calculus, to obtain the Stokes vector of the light leaving the system. The original Stokes paper was discovered independently by Francis Perrin in 1942 and by Subrahmanyan Chandrasekhar in 1947 (Chandrasekhar, S., "The transfer of radiation in stellar atmospheres," Bulletin of the American Mathematical Society, 53(7), 641–711), who named them the Stokes parameters.
Definitions
The relationship of the Stokes parameters $S_0, S_1, S_2, S_3$ to intensity and polarization ellipse parameters is shown in the equations below:

$$S_0 = I, \qquad S_1 = I p \cos 2\psi \cos 2\chi, \qquad S_2 = I p \sin 2\psi \cos 2\chi, \qquad S_3 = I p \sin 2\chi$$

Here $Ip$, $2\psi$ and $2\chi$ are the spherical coordinates of the three-dimensional vector of Cartesian coordinates $(S_1, S_2, S_3)$. $I$ is the total intensity of the beam, and $p$ is the degree of polarization, constrained by $0 \le p \le 1$. The factor of two before $\psi$ represents the fact that any polarization ellipse is indistinguishable from one rotated by 180°, while the factor of two before $\chi$ indicates that an ellipse is indistinguishable from one with the semi-axis lengths swapped accompanied by a 90° rotation. The phase information of the polarized light is not recorded in the Stokes parameters. The four Stokes parameters are sometimes denoted I, Q, U and V, respectively.
Given the Stokes parameters, one can solve for the spherical coordinates with the following equations:

$$I = S_0, \qquad p = \frac{\sqrt{S_1^2 + S_2^2 + S_3^2}}{S_0}, \qquad 2\psi = \arctan\frac{S_2}{S_1}, \qquad 2\chi = \arctan\frac{S_3}{\sqrt{S_1^2 + S_2^2}}$$
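A minimal sketch of these two conversions (the function names are invented):

```python
import math

def stokes_from_ellipse(I: float, p: float, psi: float, chi: float):
    """(S0, S1, S2, S3) from intensity I, degree of polarization p,
    and the ellipse angles psi and chi (in radians)."""
    return (I,
            I * p * math.cos(2 * psi) * math.cos(2 * chi),
            I * p * math.sin(2 * psi) * math.cos(2 * chi),
            I * p * math.sin(2 * chi))

def ellipse_from_stokes(s0: float, s1: float, s2: float, s3: float):
    """Invert the relations above to recover (I, p, psi, chi)."""
    p = math.sqrt(s1 ** 2 + s2 ** 2 + s3 ** 2) / s0
    psi = 0.5 * math.atan2(s2, s1)
    chi = 0.5 * math.atan2(s3, math.hypot(s1, s2))
    return s0, p, psi, chi

# Round trip for fully polarized light at psi = 30 deg, chi = 10 deg:
s = stokes_from_ellipse(1.0, 1.0, math.radians(30), math.radians(10))
print(ellipse_from_stokes(*s))  # ~(1.0, 1.0, 0.5236, 0.1745)
```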
Stokes vectors
The Stokes parameters are oft
Document 1:::
In geometry, a polar point group is a point group in which there is more than one point that every symmetry operation leaves unmoved. The unmoved points will constitute a line, a plane, or all of space.
While the simplest point group, C1, leaves all points invariant, most polar point groups will move some, but not all points. To describe the points which are unmoved by the symmetry operations of the point group, we draw a straight line joining two unmoved points. This line is called a polar direction. The electric polarization must be parallel to a polar direction. In polar point groups of high symmetry, the polar direction can be a unique axis of rotation, but if the symmetry operations do not allow any rotation at all, such as mirror symmetry, there can be an infinite number of such axes: in that case the only restriction on the polar direction is that it must be parallel to any mirror planes.
A point group with more than one axis of rotation or with a mirror plane perpendicular to an axis of rotation cannot be polar.
Polar crystallographic point group
Of the 32 crystallographic point groups, 10 are polar: 1, 2, m, mm2, 3, 3m, 4, 4mm, 6, and 6mm.
The space groups associated with a polar point group do not have a discrete set of possible origin points that are unambiguously determined by symmetry elements.
When materials having a polar point group crystal structure are heated or cooled, they may temporarily generate a voltage; this effect is called pyroelectricity.
Molecular crystals which have symmetry described by one of the polar space groups, such as sucrose, may exhibit triboluminescence.
Document 2:::
A Polaroid synthetic plastic sheet is a brand name product trademarked and produced by the Polaroid Corporation used as a polarizer or polarizing filter. The term “Polaroid” entered the common vocabulary with the early 1960s introduction of patented film and cameras manufactured by the corporation that produced “instant photos”.
Patent
The original material, patented in 1929 and further developed in 1932 by Edwin H. Land, consists of many microscopic crystals of iodoquinine sulphate (herapathite) embedded in a transparent nitrocellulose polymer film. The needle-like crystals are aligned during the manufacture of the film by stretching or by applying electric or magnetic fields. With the crystals aligned, the sheet is dichroic: it tends to absorb light which is polarized parallel to the direction of crystal alignment but to transmit light which is polarized perpendicular to it.
The resultant electric field of an electromagnetic wave (such as light) determines its polarization. If the wave interacts with a line of crystals as in a sheet of polaroid, any varying electric field in the direction parallel to the line of the crystals will cause a current to flow along this line. The electrons moving in this current will collide with other particles and re-emit the light backwards and forwards. This will cancel the incident wave causing little or no transmission through the sheet.
The component of the electric field perpendicular to the line of crystals, however, can cause only small movements in the electrons as they cannot move very much from side to side. This means there will be little change in the perpendicular component of the field leading to transmission of the part of the light wave polarized perpendicular to the crystals only, hence allowing the material to be used as a light polarizer.
This material, known as J-sheet, was later replaced by the improved H-sheet Polaroid, invented in 1938 by Land. H-sheet is a polyvinyl alcohol (PVA) polymer impregnated with i
Document 3:::
A polarimeter is a scientific instrument used to measure the angle of rotation caused by passing polarized light through an optically active substance.
Some chemical substances are optically active, and plane-polarized light will rotate either to the left (counter-clockwise) or right (clockwise) when passed through these substances. The amount by which the light is rotated is known as the angle of rotation. The direction (clockwise or counterclockwise) and magnitude of the rotation reveals information about the sample's chiral properties such as the relative concentration of enantiomers present in the sample.
History
Polarization by reflection was discovered in 1808 by Étienne-Louis Malus (1775–1812).
Measuring principle
The ratio, the purity, and the concentration of two enantiomers can be measured via polarimetry. Enantiomers are characterized by their ability to rotate the plane of linear polarized light. Therefore, those compounds are called optically active and their property is referred to as optical rotation. Light sources such as an incandescent bulb, a tungsten-halogen lamp, or the sun emit electromagnetic waves at the frequency of visible light. Their electric field oscillates in all possible planes relative to their direction of propagation. In contrast to that, the waves of linear-polarized light oscillate in parallel planes.
If light encounters a polarizer, only the part of the light that oscillates in the defined plane of the polarizer may pass through. That plane is called the plane of polarization. The plane of polarization is turned by optically active compounds. According to the direction in which the light is rotated, the enantiomer is referred to as dextro-rotatory or levo-rotatory.
The optical activity of enantiomers is additive. If different enantiomers exist together in one solution, their optical activity adds up. That is why racemates are optically inactive, as they nullify their clockwise and counter clockwise optical activities. The o
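Because the optical activities of enantiomers are additive, an observed rotation can be converted into an enantiomer ratio. A sketch under the standard enantiomeric-excess convention (not stated in the excerpt; all numbers are illustrative):

```python
def enantiomeric_excess(observed_rotation: float, pure_rotation: float) -> float:
    """ee (%) from the observed specific rotation and that of the pure
    enantiomer, measured under identical conditions."""
    return 100.0 * observed_rotation / pure_rotation

def enantiomer_fractions(ee_percent: float):
    """(major, minor) mole fractions; a racemate (ee = 0) gives 50/50."""
    major = (100.0 + ee_percent) / 200.0
    return major, 1.0 - major

# Illustrative: the sample rotates +33.0 deg where the pure dextrorotatory
# enantiomer would rotate +66.0 deg under the same conditions.
ee = enantiomeric_excess(33.0, 66.0)
print(ee, enantiomer_fractions(ee))  # 50.0 (0.75, 0.25)
```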
Document 4:::
Polarization is an important phenomenon in astronomy.
Stars
The polarization of starlight was first observed by the astronomers William Hiltner and John S. Hall in 1949. Subsequently, Jesse Greenstein and Leverett Davis, Jr. developed theories allowing the use of polarization data to trace interstellar magnetic fields.
Though the integrated thermal radiation of stars is not usually appreciably polarized at source, scattering by interstellar dust can impose polarization on starlight over long distances. Net polarization at the source can occur if the photosphere itself is asymmetric, due to limb polarization. Plane polarization of starlight generated at the star itself is observed for Ap stars (peculiar A type stars).
Sun
Both circular and linear polarization of sunlight has been measured. Circular polarization is mainly due to transmission and absorption effects in strongly magnetic regions of the Sun's surface. Another mechanism that gives rise to circular polarization is the so-called "alignment-to-orientation mechanism". Continuum light is linearly polarized at different locations across the face of the Sun (limb polarization) though taken as a whole, this polarization cancels. Linear polarization in spectral lines is usually created by anisotropic scattering of photons on atoms and ions which can themselves be polarized by this interaction. The linearly polarized spectrum of the Sun is often called the second solar spectrum. Atomic polarization can be modified in weak magnetic fields by the Hanle effect. As a result, polarization of the scattered photons is also modified providing a diagnostics tool for understanding stellar magnetic fields.
Other sources
Polarization is also present in radiation from coherent astronomical sources due to the Zeeman effect (e.g. hydroxyl or methanol masers).
The large radio lobes in active galaxies and pulsar radio radiation (which may, it is speculated, sometimes be coherent) also show polarization.
Apart from providing in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of polarization does a negative object create?
A. simple polarization
B. negative polarization
C. opposite polarization
D. common polarization
Answer:
|
|
sciq-8725
|
multiple_choice
|
What process allows particles too large to move along the stream bed?
|
[
"channelization",
"diffusion",
"impaction",
"saltation"
] |
D
|
Relevant Documents:
Document 0:::
Large woody debris (LWD) are the logs, sticks, branches, and other wood that falls into streams and rivers. This debris can influence the flow and the shape of the stream channel. Large woody debris, grains, and the shape of the bed of the stream are the three main providers of flow resistance, and are thus, a major influence on the shape of the stream channel. Some stream channels have less LWD than they would naturally because of removal by watershed managers for flood control and aesthetic reasons.
The study of woody debris is important for its forestry management implications. Plantation thinning can reduce the potential for recruitment of LWD into proximal streams. The presence of large woody debris is important in the formation of pools which serve as salmon habitat in the Pacific Northwest. Entrainment of the large woody debris in a stream can also cause erosion and scouring around and under the LWD. The amount of scouring and erosion is determined by the ratio of the diameter of the piece to the depth of the stream, and the embedding and orientation of the piece.
Influence on stream flow around bends
Large woody debris slow the flow through a bend in the stream, while accelerating flow in the constricted area downstream of the obstruction.
See also
Beaver dam
Coarse woody debris
Driftwood
Log jam
Stream restoration
Document 1:::
Dispersive mass transfer, in fluid dynamics, is the spreading of mass from highly concentrated areas to less concentrated areas. It is one form of mass transfer.
Dispersive mass flux is analogous to diffusion, and it can also be described using Fick's first law:

J = −E ∂c/∂x

where c is mass concentration of the species being dispersed, E is the dispersion coefficient, and x is the position in the direction of the concentration gradient. Dispersion can be differentiated from diffusion in that it is caused by non-ideal flow patterns (i.e. deviations from plug flow) and is a macroscopic phenomenon, whereas diffusion is caused by random molecular motions (i.e. Brownian motion) and is a microscopic phenomenon. Dispersion is often more significant than diffusion in convection-diffusion problems. The dispersion coefficient is frequently modeled as the product of the fluid velocity, U, and some characteristic length scale, α:

E = αU
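To make the relation concrete, here is a minimal numerical sketch that evaluates the dispersive flux J = −E ∂c/∂x with E = αU on a one-dimensional grid. The dispersivity, velocity, and Gaussian concentration profile are illustrative assumptions, not values from the text:

```python
import numpy as np

# Illustrative parameters (assumed, not from the source text).
alpha = 0.5        # dispersivity, m
U = 0.01           # mean fluid velocity, m/s
E = alpha * U      # dispersion coefficient E = alpha * U, m^2/s

x = np.linspace(0.0, 10.0, 101)      # position along the flow direction, m
c = np.exp(-(x - 5.0) ** 2)          # assumed concentration profile, kg/m^3

# Fick's-first-law form of the dispersive mass flux: J = -E * dc/dx.
J = -E * np.gradient(c, x)           # kg/(m^2 s)
print(f"peak flux magnitude: {np.abs(J).max():.3e} kg/(m^2 s)")
```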
Document 2:::
The Stefan flow, occasionally called Stefan's flow, is a transport phenomenon concerning the movement of a chemical species by a flowing fluid (typically in the gas phase) that is induced to flow by the production or removal of the species at an interface. Any process that adds the species of interest to or removes it from the flowing fluid may cause the Stefan flow, but the most common processes include evaporation, condensation, chemical reaction, sublimation, ablation, adsorption, absorption, and desorption. It was named after the Slovenian physicist, mathematician, and poet Josef Stefan for his early work on calculating evaporation rates.
The Stefan flow is distinct from diffusion as described by Fick's law, but diffusion almost always also occurs in multi-species systems that are experiencing the Stefan flow. In systems undergoing one of the species addition or removal processes mentioned previously, the addition or removal generates a mean flow in the flowing fluid as the fluid next to the interface is displaced by the production or removal of additional fluid by the processes occurring at the interface. The transport of the species by this mean flow is the Stefan flow. When concentration gradients of the species are also present, diffusion transports the species relative to the mean flow. The total transport rate of the species is then given by a summation of the Stefan flow and diffusive contributions.
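As a rough quantitative illustration of the combined transport just described, the classic Stefan-tube result for steady evaporation of a species A through a stagnant gas column of length L is N_A = (c·D_AB/L)·ln((1 − x_A,L)/(1 − x_A,0)). The sketch below evaluates it; the diffusivity, column length, and mole fractions are hypothetical water-into-air numbers chosen for illustration:

```python
import math

def stefan_tube_flux(c_total, D_AB, L, xA0, xAL):
    """Steady evaporation flux through a stagnant gas column (classic
    Stefan-tube result): N_A = (c * D_AB / L) * ln((1 - xA_L) / (1 - xA_0))."""
    return c_total * D_AB / L * math.log((1.0 - xAL) / (1.0 - xA0))

# Hypothetical scenario: water evaporating into air at ~25 C and 1 atm.
R, T, p = 8.314, 298.0, 101325.0
c_total = p / (R * T)                   # total molar concentration, mol/m^3

N_A = stefan_tube_flux(c_total,
                       D_AB=2.5e-5,     # assumed diffusivity of vapor in air, m^2/s
                       L=0.1,           # assumed column length, m
                       xA0=0.03,        # assumed vapor mole fraction at the liquid surface
                       xAL=0.0)         # dry gas at the top of the column
print(f"evaporation flux: {N_A:.3e} mol/(m^2 s)")
```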
An example of the Stefan flow occurs when a droplet of liquid evaporates in air. In this case, the vapor/air mixture surrounding the droplet is the flowing fluid, and liquid/vapor boundary of the droplet is the interface. As heat is absorbed by the droplet from the environment, some of the liquid evaporates into vapor at the surface of the droplet, and flows away from the droplet as it is displaced by additional vapor evaporating from the droplet. This process causes the flowing medium to move away from the droplet at some mean speed that is dependent on
Document 3:::
Colloid-facilitated transport designates a transport process by which colloidal particles serve as transport vector
of diverse contaminants in the surface water (sea water, lakes, rivers, fresh water bodies) and in underground water circulating in fissured rocks
(limestone, sandstone, granite, ...). The transport of colloidal particles in surface soils and in the ground can also occur, depending on the soil structure, soil compaction, and the particles size, but the importance of colloidal transport was only given sufficient attention during the 1980 years.
Radionuclides, heavy metals, and organic pollutants, easily sorb onto colloids suspended in water and that can easily act as contaminant carrier.
Various types of colloids are recognised: inorganic colloids (clay particles, silicates, iron oxy-hydroxides, ...), organic colloids (humic and fulvic substances). When heavy metals or radionuclides form their own pure colloids, the term "Eigencolloid" is used to designate pure phases, e.g., Tc(OH)4, Th(OH)4, U(OH)4, Am(OH)3. Colloids have been suspected for the long range transport of plutonium on the Nevada Nuclear Test Site. They have been the subject of detailed studies for many years. However, the mobility of inorganic colloids is very low in compacted bentonites and in deep clay formations
because of the process of ultrafiltration occurring in dense clay membrane.
The question is less clear for small organic colloids often mixed in porewater with truly dissolved organic molecules.
See also
Colloid
Dispersion
DLVO theory (from Derjaguin, Landau, Verwey and Overbeek)
Double layer (electrode)
Double layer (interfacial)
Double layer forces
Gouy-Chapman model
Eigencolloid
Electrical double layer (EDL)
Flocculation
Hydrosol
Interface
Interface and colloid science
Nanoparticle
Peptization (the inverse of flocculation)
Sol (colloid)
Sol-gel
Streaming potential
Suspension
Zeta potential
Document 4:::
The Knudsen paradox has been observed in experiments of channel flow with varying channel width or, equivalently, different pressures. If the normalized mass flux through the channel is plotted over the Knudsen number based on the channel width, a distinct minimum is observed around Kn ≈ 1. This is paradoxical behaviour because, based on the Navier–Stokes equations, one would expect the mass flux to decrease with increasing Knudsen number. The minimum can be understood intuitively by considering the two extreme cases of very small and very large Knudsen number. For very small Kn the viscosity vanishes and a fully developed steady state channel flow shows infinite flux. On the other hand, the particles stop interacting for large Knudsen numbers. Because of the constant acceleration due to the external force, the steady state again will show infinite flux.
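The Knudsen number itself is Kn = λ/L, with the hard-sphere mean free path λ = k_B·T/(√2·π·d²·p). The sketch below, using hypothetical nitrogen-like values, shows how lowering the pressure in a fixed 1 µm channel sweeps the flow through the Kn ≈ 1 regime where the flux minimum is observed:

```python
import math

def mean_free_path(T, p, d):
    """Hard-sphere mean free path: lambda = k_B*T / (sqrt(2) * pi * d^2 * p)."""
    k_B = 1.380649e-23          # Boltzmann constant, J/K
    return k_B * T / (math.sqrt(2) * math.pi * d ** 2 * p)

# Assumed values: nitrogen-like molecular diameter, room temperature, 1 um channel.
T, d, L = 300.0, 3.7e-10, 1e-6
for p in (1e5, 1e4, 1e3):       # decreasing pressure, Pa
    Kn = mean_free_path(T, p, d) / L
    print(f"p = {p:8.0e} Pa -> Kn = {Kn:.3g}")
```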
See also
Vlasov equation
Fokker–Planck equation
Navier–Stokes equations
Vlasov–Poisson equation
Lattice Boltzmann methods
List of paradoxes
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What process allows particles that are too large to be carried in suspension to move along the stream bed?
A. channelization
B. diffusion
C. impaction
D. saltation
Answer:
|
|
sciq-10710
|
multiple_choice
|
What type of science is the application of science to answer questions related to the law?
|
[
"ecology",
"forensic",
"physics",
"biologic"
] |
B
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 1:::
Forensic science, also known as criminalistics, is the application of science to criminal and civil laws. During criminal investigation in particular, it is governed by the legal standards of admissible evidence and criminal procedure. It is a broad field utilizing numerous practices such as the analysis of DNA, fingerprints, bloodstain patterns, firearms, ballistics, toxicology, and fire debris analysis.
Forensic scientists collect, preserve, and analyze scientific evidence during the course of an investigation. While some forensic scientists travel to the scene of the crime to collect the evidence themselves, others occupy a laboratory role, performing analysis on objects brought to them by other individuals. Others are involved in analysis of financial, banking, or other numerical data for use in financial crime investigation, and can be employed as consultants from private firms, academia, or as government employees.
In addition to their laboratory role, forensic scientists testify as expert witnesses in both criminal and civil cases and can work for either the prosecution or the defense. While any field could technically be forensic, certain sections have developed over time to encompass the majority of forensically related cases.
Etymology
The term forensic stems from the Latin word, forēnsis (3rd declension, adjective), meaning "of a forum, place of assembly". The history of the term originates in Roman times, when a criminal charge meant presenting the case before a group of public individuals in the forum. Both the person accused of the crime and the accuser would give speeches based on their sides of the story. The case would be decided in favor of the individual with the best argument and delivery. This origin is the source of the two modern usages of the word forensic—as a form of legal evidence; and as a category of public presentation.
In modern use, the term forensics is often used in place of "forensic science."
The word "science", is derived fr
Document 2:::
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals.
Education
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
Bachelor degree
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
Pre-veterinary emphasis
Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th
Document 3:::
The mathematical sciences are a group of areas of study that includes, in addition to mathematics, those academic disciplines that are primarily mathematical in nature but may not be universally considered subfields of mathematics proper.
Statistics, for example, is mathematical in its methods but grew out of bureaucratic and scientific observations, which merged with inverse probability and then grew through applications in some areas of physics, biometrics, and the social sciences to become its own separate, though closely allied, field. Theoretical astronomy, theoretical physics, theoretical and applied mechanics, continuum mechanics, mathematical chemistry, actuarial science, computer science, computational science, data science, operations research, quantitative biology, control theory, econometrics, geophysics and mathematical geosciences are likewise other fields often considered part of the mathematical sciences.
Some institutions offer degrees in mathematical sciences (e.g. the United States Military Academy, Stanford University, and University of Khartoum) or applied mathematical sciences (for example, the University of Rhode Island).
Document 4:::
Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers.
There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by organizations such as the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (Humanities, Arts, and Social Sciences), rebranded in 2020 as SHAPE (Social Sciences, Humanities and the Arts for People and the Economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM.
Terminology
History
Previously referred to as SMET by the NSF, in the early 1990s the acronym STEM was used by a variety of educators, including Charles E. Vela, the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE). Moreover, the CAHSEE started a summer program for talented under-represented students in the Washington, D.C., area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering edu
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of science is the application of science to answer questions related to the law?
A. ecology
B. forensic
C. physics
D. biologic
Answer:
|
|
sciq-2256
|
multiple_choice
|
What is the term for the spacing of individuals within a population?
|
[
"dispersion",
"population density",
"suspension",
"equilibrium"
] |
A
|
Relevant Documents:
Document 0:::
The term population biology has been used with different meanings.
In 1971 Edward O. Wilson et al. used the term in the sense of applying mathematical models to population genetics, community ecology, and population dynamics. Alan Hastings used the term in 1997 as the title of his book on the mathematics used in population dynamics. The name was also used for a course given at UC Davis in the late 2010s, which describes it as an interdisciplinary field combining the areas of ecology and evolutionary biology. The course includes mathematics, statistics, ecology, genetics, and systematics. Numerous types of organisms are studied.
The journal Theoretical Population Biology is published.
Document 1:::
Outline of demography contains human demography and population related important concepts and high-level aggregated lists compiled in the useful categories.
The subheadings have been grouped by the following 4 categories:
Meta (lit. "highest" level) units, such as the universal important concepts related to demographics and places.
Macro (lit. "high" level) units where the "whole world" is the smallest unit of measurement, such as the aggregated summary demographics at global level. For example, United Nations.
Meso (lit. "middle" or "intermediate" level) units, where the smallest unit of measurement covers more than one nation and more than one continent but not all the nations or continents. Examples include summary lists at the continental level, e.g. Eurasia, Latin America, or the Middle East, which cover two or more continents, as well as intercontinental organisations, e.g. the Commonwealth of Nations or the organisation of Arab states.
Micro (lit. "lower" or "smaller" level) units, where the country is the smallest unit of measurement, such as globally aggregated lists by individual countries.
Meta or important concepts
Global human population
World population
Demographics of the world
Fertility and intelligence
Human geography
Geographic mobility
Globalization
Human migration
List of lists on linguistics
Impact of human population
Human impact on the environment
Biological dispersal
Carrying capacity
Doomsday argument
Environmental migrant
Human overpopulation
Malthusian catastrophe
List of countries by carbon dioxide emissions
List of countries by carbon dioxide emissions per capita
List of countries by greenhouse gas emissions
List of countries by greenhouse gas emissions per capita
Overconsumption
Overexploitation
Population eco
Document 2:::
Spatial variability occurs when a quantity that is measured at different spatial locations exhibits values that differ across the locations. Spatial variability can be assessed using spatial descriptive statistics such as the range.
Let us suppose that the regionalized variable z(x) is perfectly known at any point x within the field under study. Then the uncertainty about z(x) is reduced to zero, whereas its spatial variability still exists. Uncertainty is closely related to the amount of spatial variability, but it is also strongly dependent upon sampling.
Geostatistical analyses have been strictly performed to study the spatial variability of pesticide sorption and degradation in the field. Webster and Oliver provided a description of geostatistical techniques. Describing uncertainty using geostatistics is not an activity exempt from uncertainty itself as variogram uncertainty may be large and spatial interpolation may be undertaken using different techniques.
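A standard geostatistical tool for quantifying such spatial variability is the empirical semivariogram, γ(h) = (1/2N(h)) Σ (z(x_i) − z(x_j))² over the N(h) point pairs separated by roughly the lag h. Below is a minimal sketch on synthetic one-dimensional data; the transect, noise level, and lag choices are illustrative assumptions, not the methods of the cited pesticide studies:

```python
import numpy as np

def empirical_semivariogram(x, z, lags, tol):
    """gamma(h) = 1/(2 N(h)) * sum (z_i - z_j)^2 over pairs with |x_i - x_j| ~ h."""
    gamma = []
    d = np.abs(x[:, None] - x[None, :])          # pairwise separation distances
    dz2 = (z[:, None] - z[None, :]) ** 2         # pairwise squared differences
    for h in lags:
        mask = np.triu(np.abs(d - h) < tol, k=1) # pairs near lag h, counted once
        pairs = dz2[mask]
        gamma.append(0.5 * pairs.mean() if pairs.size else np.nan)
    return np.array(gamma)

# Synthetic 1-D transect with spatially correlated values (hypothetical data).
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 100, 300))
z = np.sin(x / 10.0) + rng.normal(0, 0.2, x.size)
print(empirical_semivariogram(x, z, lags=[2, 5, 10, 20], tol=1.0))
```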
Document 3:::
Demography, also known as demographics, is the statistical study of populations, especially human beings.
Demographic analysis examines and measures the dimensions and dynamics of populations; it can cover whole societies or groups defined by criteria such as education, nationality, religion, and ethnicity. Educational institutions usually treat demography as a field of sociology, though there are a number of independent demography departments. These methods have primarily been developed to study human populations, but are extended to a variety of areas where researchers want to know how populations of social actors can change across time through processes of birth, death, and migration. In the context of human biological populations, demographic analysis uses administrative records to develop an independent estimate of the population. Demographic analysis estimates are often considered a reliable standard for judging the accuracy of the census information gathered at any time. In the labor force, demographic analysis is used to estimate sizes and flows of populations of workers; in population ecology the focus is on the birth, death, migration and immigration of individuals in a population of living organisms, alternatively, in social human sciences could involve movement of firms and institutional forms. Demographic analysis is used in a wide variety of contexts. For example, it is often used in business plans, to describe the population connected to the geographic location of the business. Demographic analysis is usually abbreviated as DA. For the 2010 U.S. Census, The U.S. Census Bureau has expanded its DA categories. Also as part of the 2010 U.S. Census, DA now also includes comparative analysis between independent housing estimates, and census address lists at different key time points.
Patient demographics form the core of the data for any medical institution, such as patient and emergency contact information and patient medical record data. They allo
Document 4:::
Population density (in agriculture: standing stock or plant density) is a measurement of population per unit land area. It is mostly applied to humans, but sometimes to other living organisms too. It is a key geographical term.
Biological population densities
Population density is population divided by total land area, sometimes including seas and oceans, as appropriate.
Low densities may cause an extinction vortex and further reduce fertility. This is called the Allee effect after the scientist who identified it. Examples of the causes of reduced fertility in low population densities are:
Increased problems with locating sexual mates
Increased inbreeding
Human densities
Population density is the number of people per unit of area, usually transcribed as "per square kilometer" or square mile, which may include or exclude, for example, areas of water or glaciers. Commonly this is calculated for a county, city, country, another territory or the entire world.
The world's population is around 8,000,000,000 and the Earth's total area (including land and water) is about 510,000,000 km2. Therefore, from this very crude type of calculation, the worldwide human population density is approximately 8,000,000,000 ÷ 510,000,000 ≈ 16 people per km2. However, if only the Earth's land area of about 149,000,000 km2 is taken into account, then human population density rises to about 54 people per km2. This includes all continental and island land area, including Antarctica. However, if Antarctica is excluded, then population density rises to nearly 60 people per km2.
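The crude calculation above is easy to reproduce; the sketch below redoes the arithmetic with the approximate, rounded areas used in the text:

```python
WORLD_POPULATION = 8.0e9      # people
TOTAL_AREA_KM2 = 510e6        # land + water, approximate
LAND_AREA_KM2 = 149e6         # land only, approximate
ANTARCTICA_KM2 = 14.2e6       # approximate

print(f"incl. oceans    : {WORLD_POPULATION / TOTAL_AREA_KM2:5.1f} people/km^2")
print(f"land only       : {WORLD_POPULATION / LAND_AREA_KM2:5.1f} people/km^2")
print(f"excl. Antarctica: {WORLD_POPULATION / (LAND_AREA_KM2 - ANTARCTICA_KM2):5.1f} people/km^2")
```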
The European Commission's Joint Research Centre (JRC) has developed a suite of (open and free) data and tools named the Global Human Settlement Layer (GHSL) to improve the science for policy support to the European Commission Directorate Generals and Services and as support to the United Nations system.
Several of the most densely populated territories in the world are city-states, microstates and urban dependencies. In fact, 95% of the world's population is concentrated on just 10% of the world's land.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for the spacing of individuals within a population?
A. dispersion
B. population density
C. suspension
D. equilibrium
Answer:
|
|
sciq-9678
|
multiple_choice
|
Parasites usually harm their hosts (or they wouldn't be parasites), but what do they usually stop short of doing?
|
[
"reproducing with hosts",
"mutating their hosts",
"benefiting their hosts",
"killing their host"
] |
D
|
Relevant Documents:
Document 0:::
Parasitism is a close relationship between species, where one organism, the parasite, lives on or inside another organism, the host, causing it some harm, and is adapted structurally to this way of life. The entomologist E. O. Wilson characterised parasites as "predators that eat prey in units of less than one". Parasites include single-celled protozoans such as the agents of malaria, sleeping sickness, and amoebic dysentery; animals such as hookworms, lice, mosquitoes, and vampire bats; fungi such as honey fungus and the agents of ringworm; and plants such as mistletoe, dodder, and the broomrapes.
There are six major parasitic strategies of exploitation of animal hosts, namely parasitic castration, directly transmitted parasitism (by contact), trophically transmitted parasitism (by being eaten), vector-transmitted parasitism, parasitoidism, and micropredation. One major axis of classification concerns invasiveness: an endoparasite lives inside the host's body; an ectoparasite lives outside, on the host's surface.
Like predation, parasitism is a type of consumer–resource interaction, but unlike predators, parasites, with the exception of parasitoids, are typically much smaller than their hosts, do not kill them, and often live in or on their hosts for an extended period. Parasites of animals are highly specialised, and reproduce at a faster rate than their hosts. Classic examples include interactions between vertebrate hosts and tapeworms, flukes, the malaria-causing Plasmodium species, and fleas.
Parasites reduce host fitness by general or specialised pathology, from parasitic castration to modification of host behaviour. Parasites increase their own fitness by exploiting hosts for resources necessary for their survival, in particular by feeding on them and by using intermediate (secondary) hosts to assist in their transmission from one definitive (primary) host to another. Although parasitism is often unambiguous, it is part of a spectrum of interactions between
Document 1:::
A large proportion of living species on Earth live a parasitic way of life. Parasites have traditionally been seen as targets of eradication efforts, and they have often been overlooked in conservation efforts. In the case of parasites living in the wild – and thus harmless to humans and domesticated animals – this view is changing. The conservation biology of parasites is an emerging and interdisciplinary field that recognizes the integral role parasites play in ecosystems. Parasites are intricately woven into the fabric of ecological communities, with diverse species occupying a range of ecological niches and displaying complex relationships with their hosts.
The rationale for parasite conservation extends beyond their intrinsic value and ecological roles. Parasites offer potential benefits to human health and well-being. Many parasites produce bioactive compounds with pharmaceutical properties, which can be utilized in drug discovery and development. Understanding and conserving parasite biodiversity not only contributes to the preservation of ecosystems but also holds promise for medical advancements and novel therapeutic interventions.
Parasite role in ecosystems
Ranging from microscopic pathogens to larger organisms such as worms and arthropods, parasites exhibit remarkable diversity in their life cycles, transmission strategies, and host relationships. They can be found in virtually every ecosystem on Earth, including terrestrial, freshwater, and marine environments. Parasites often rely on one or multiple host species to complete their life cycle, and their presence can have profound effects on host populations, communities, and even entire ecosystems. One of the fundamental aspects of parasite ecology is their role as a trophic level within the food web. Parasites can occupy various positions within the trophic hierarchy, acting as predators, consumers, or even decomposers. They regulate host populations by influencing host behavior, growth, and reproduc
Document 2:::
In biology and medicine, a host is a larger organism that harbours a smaller organism; whether a parasitic, a mutualistic, or a commensalist guest (symbiont). The guest is typically provided with nourishment and shelter. Examples include animals playing host to parasitic worms (e.g. nematodes), cells harbouring pathogenic (disease-causing) viruses, or a bean plant hosting mutualistic (helpful) nitrogen-fixing bacteria. More specifically in botany, a host plant supplies food resources to micropredators, which have an evolutionarily stable relationship with their hosts similar to ectoparasitism. The host range is the collection of hosts that an organism can use as a partner.
Symbiosis
Symbiosis spans a wide variety of possible relationships between organisms, differing in their permanence and their effects on the two parties. If one of the partners in an association is much larger than the other, it is generally known as the host. In parasitism, the parasite benefits at the host's expense. In commensalism, the two live together without harming each other, while in mutualism, both parties benefit.
Most parasites are only parasitic for part of their life cycle. By comparing parasites with their closest free-living relatives, parasitism has been shown to have evolved on at least 233 separate occasions. Some organisms live in close association with a host and only become parasitic when environmental conditions deteriorate.
A parasite may have a long-term relationship with its host, as is the case with all endoparasites. The guest seeks out the host and obtains food or another service from it, but does not usually kill it. In contrast, a parasitoid spends a large part of its life within or on a single host, ultimately causing the host's death, with some of the strategies involved verging on predation. Generally, the host is kept alive until the parasitoid is fully grown and ready to pass on to its next life stage. A guest's relationship with its host may be intermitten
Document 3:::
In experimental physics, and particularly in high energy and nuclear physics, a parasite experiment or parasitic experiment is an experiment performed using a big particle accelerator or other large facility, without interfering with the scheduled experiments of that facility. This allows the experimenters to proceed without the usual competitive time scheduling procedure. These experiments may be instrument tests or experiments whose scientific interest has not been clearly established.
Document 4:::
Archaeoparasitology, a multi-disciplinary field within paleopathology, is the study of parasites in archaeological contexts. It includes studies of the protozoan and metazoan parasites of humans in the past, as well as parasites which may have affected past human societies, such as those infesting domesticated animals.
Reinhard suggested that the term "archaeoparasitology" be applied to "... all parasitological remains excavated from archaeological contexts ... derived from human activity" and that "the term 'paleoparasitology' be applied to studies of nonhuman, paleontological material." (p. 233) Paleoparasitology includes all studies of ancient parasites outside of archaeological contexts, such as those found in amber, and even dinosaur parasites.
The first archaeoparasitology report described calcified eggs of Bilharzia haematobia (now Schistosoma haematobium) from the kidneys of an ancient Egyptian mummy. Since then, many fundamental archaeological questions have been answered by integrating our knowledge of the hosts, life cycles and basic biology of parasites, with the archaeological, anthropological and historical contexts in which they are found.
Parasitology basics
Parasites are organisms which live in close association with another organism, called the host, in which the parasite benefits from the association, to the detriment of the host. Many other kinds of associations may exist between two closely allied organisms, such as commensalism or mutualism.
Endoparasites (such as protozoans and helminths), tend to be found inside the host, while ectoparasites (such as ticks, lice and fleas) live on the outside of the host body. Parasite life cycles often require that different developmental stages pass sequentially through multiple host species in order to successfully mature and reproduce. Some parasites are very host-specific, meaning that only one or a few species of hosts are capable of perpetuating their life cycle. Others are not host-spec
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Parasites usually harm their hosts (or they wouldn't be parasites), but what do they usually stop short of doing?
A. reproducing with hosts
B. mutating their hosts
C. benefiting their hosts
D. killing their host
Answer:
|
|
ai2_arc-742
|
multiple_choice
|
In humans, the gene for a free earlobe (E) is dominant over the gene for an attached earlobe (e). If one parent has a free earlobe (Ee) and the other parent has an attached earlobe (ee), what is the probability that their offspring will have an attached earlobe?
|
[
"0%",
"25%",
"50%",
"100%"
] |
C
|
Relevant Documents:
Document 0:::
In statistical genetics, inclusive composite interval mapping (ICIM) has been proposed as an approach to QTL (quantitative trait locus) mapping for populations derived from bi-parental crosses. QTL mapping is based on genetic linkage map and phenotypic data to attempt to locate individual genetic factors on chromosomes and to estimate their genetic effects.
Additive and dominance QTL mapping
Two genetic assumptions used in ICIM are (1) the genotypic value of an individual is the summation of effects from all genes affecting the trait of interest; and (2) linked QTL are separated by at least one blank marker interval. Under the two assumptions, they proved that additive effect of the QTL located in a marker interval can be completely absorbed by the regression coefficients of the two flanking markers, while the QTL dominance effect causes marker dominance effects, as well as additive by additive and dominance by dominance interactions between the two flanking markers. By including two multiplication variables between flanking markers, the additive and dominance effects of one QTL can be completely absorbed. As a consequence, an inclusive linear model of phenotype regressing on all genetic markers (and marker multiplications) can be used to fit the positions and additive (and dominance) effects of all QTL in the genome. A two-step strategy was adopted in ICIM for additive and dominance QTL mapping. In the first step, stepwise regression was applied to identify the most significant marker variables in the linear model. In the second step, one-dimensional scanning or interval mapping was conducted for detecting QTL and estimating its additive and dominance effects, based on the phenotypic values adjusted by the regression model in the first step.
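The two-step strategy lends itself to a compact illustration. The following toy sketch is a deliberately simplified caricature, not the published ICIM algorithm: the genotypes and phenotypes are synthetic, a greedy forward selection stands in for the stepwise regression of step one, and step two runs a single-marker LOD-style scan on a phenotype adjusted for the selected markers, excluding those flanking the tested position:

```python
import numpy as np

# Synthetic data (assumed for illustration): n individuals, m markers coded -1/+1,
# with true additive QTL at markers 10 and 30.
rng = np.random.default_rng(0)
n, m = 200, 50
X = rng.choice([-1.0, 1.0], size=(n, m))
y = 1.5 * X[:, 10] - 0.8 * X[:, 30] + rng.normal(0, 1, n)

# Step 1: greedy forward selection of markers (a stand-in for stepwise regression).
selected = []
residual = y - y.mean()
for _ in range(5):                          # keep at most 5 markers in this toy
    best = int(np.argmax(np.abs(X.T @ residual)))
    if best in selected:
        break
    selected.append(best)
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in selected])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    residual = y - Xs @ beta

# Step 2: scan every position using a phenotype adjusted for the selected
# markers except those flanking the tested position.
lod = np.zeros(m)
for j in range(m):
    keep = [k for k in selected if abs(k - j) > 1]
    Xk = np.column_stack([np.ones(n)] + [X[:, k] for k in keep])
    beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
    y_adj = y - Xk @ beta
    r = np.corrcoef(X[:, j], y_adj)[0, 1]
    lod[j] = -0.5 * n * np.log10(max(1.0 - r ** 2, 1e-12))  # LOD-like score

print("selected markers:", sorted(selected))
print("scan peak at marker:", int(np.argmax(lod)))
```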
Genetic and statistical properties in additive QTL mapping
Computer simulations were used to study the asymptotic properties of ICIM in additive QTL mapping. The test statistic LOD score linearly increases as the increase in
Document 1:::
In human biology, handedness is an individual's preferential use of one hand, known as the dominant hand, due to it being stronger, faster or more dextrous. The other hand, comparatively often the weaker, less dextrous or simply less subjectively preferred, is called the non-dominant hand. In a study from 1975 on 7,688 children in US grades 1-6, left handers comprised 9.6% of the sample, with 10.5% of male children and 8.7% of female children being left-handed. Overall, around 90% of people are right-handed. Handedness is often defined by one's writing hand, as it is fairly common for people to prefer to do a particular task with a particular hand. There are people with true ambidexterity (equal preference of either hand), but it is rare—most people prefer using one hand for most purposes.
Most of the current research suggests that left-handedness has an epigenetic marker—a combination of genetics, biology and the environment.
Because the vast majority of the population is right-handed, many devices are designed for use by right-handed people, making their use by left-handed people more difficult. In many countries, left-handed people are or were required to write with their right hands. However, left-handed people have an advantage in sports that involves aiming at a target in an area of an opponent's control, as their opponents are more accustomed to the right-handed majority. As a result, they are over-represented in baseball, tennis, fencing, cricket, boxing, and mixed martial arts.
Types
Right-handedness is the most common type. Right-handed people are more skillful with their right hands. Studies suggest that approximately 90% of people are right-handed.
Left-handedness is less common. Studies suggest that approximately 10% of people are left-handed.
Ambidexterity refers to having equal ability in both hands. Those who learn it still tend to favor their originally dominant hand. This is uncommon, with about a 1% prevalence.
Mixed-handedness or cross-do
Document 2:::
Principles of Genetics is a genetics textbook authored by D. Peter Snustad and Michael J. Simmons, an emeritus professor of biology, and published by John Wiley & Sons, Inc.
The 6th edition of the book was published in 2012.
Description
The book is sectioned into four parts. The first part, Genetics and the Scientific Method, briefly reviews the history of genetics and the various methods used in genetic study. The second part focuses on Mendelian inheritance, the third part deals with molecular genetics, and the last section deals with quantitative genetics and evolutionary genetics.
Review
The book has been reviewed and rated highly by several editors and geneticists.
Document 3:::
The ACE model is a statistical model commonly used to analyze the results of twin and adoption studies. This classic behaviour genetic model aims to partition the phenotypic variance into three categories: additive genetic variance (A), common (or shared) environmental factors (C), and specific (or nonshared) environmental factors plus measurement error (E). It is widely used in genetic epidemiology and behavioural genetics. The basic ACE model relies on several assumptions, including the absence of assortative mating, that there is no genetic dominance or epistasis, that all genetic effects are additive, and the absence of gene-environment interactions. In order to address these limitations, several variants of the ACE model have been developed, including an ACE-β model, which emphasizes the identification of causal effects, and the ACDE model, which accounts for the effects of genetic dominance.
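A back-of-the-envelope version of this variance partition uses Falconer's formulas on monozygotic and dizygotic twin correlations: A = 2(r_MZ − r_DZ), C = 2r_DZ − r_MZ, E = 1 − r_MZ. The sketch below applies them to hypothetical correlations; note that real ACE analyses fit structural equation models by maximum likelihood rather than using these closed-form formulas:

```python
def falconer_ace(r_mz, r_dz):
    """Crude ACE variance components from twin correlations (Falconer's
    formulas): A = 2*(r_MZ - r_DZ), C = 2*r_DZ - r_MZ, E = 1 - r_MZ."""
    A = 2.0 * (r_mz - r_dz)
    C = 2.0 * r_dz - r_mz
    E = 1.0 - r_mz
    return A, C, E

# Hypothetical twin correlations for an IQ-like trait.
A, C, E = falconer_ace(r_mz=0.85, r_dz=0.55)
print(f"A = {A:.2f}, C = {C:.2f}, E = {E:.2f}")  # A = 0.60, C = 0.25, E = 0.15
```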
See also
ADE model
Document 4:::
Research on the heritability of IQ inquires into the degree of variation in IQ within a population that is due to genetic variation between individuals in that population. There has been significant controversy in the academic community about the heritability of IQ since research on the issue began in the late nineteenth century. Intelligence in the normal range is a polygenic trait, meaning that it is influenced by more than one gene, and in the case of intelligence at least 500 genes. Further, explaining the similarity in IQ of closely related persons requires careful study because environmental factors may be correlated with genetic factors.
Early twin studies of adult individuals have found a heritability of IQ between 57% and 73%, with some recent studies showing heritability for IQ as high as 80%. IQ goes from being weakly correlated with genetics for children, to being strongly correlated with genetics for late teens and adults. The heritability of IQ increases with the child's age and reaches a plateau at 14-16 years old, continuing at that level well into adulthood. However, poor prenatal environment, malnutrition and disease are known to have lifelong deleterious effects.
Although IQ differences between individuals have been shown to have a large hereditary component, it does not follow that disparities in IQ between groups have a genetic basis. The scientific consensus is that genetics does not explain average differences in IQ test performance between racial groups.
Heritability and caveats
Heritability is a statistic used in the fields of breeding and genetics that estimates the degree of variation in a phenotypic trait in a population that is due to genetic variation between individuals in that population. The concept of heritability can be expressed in the form of the following question: "What is the proportion of the variation in a given trait within a population that is not explained by the environment or random chance?"
Estimates of heritabi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In humans, the gene for a free earlobe (E) is dominant over the gene for an attached earlobe (e). If one parent has a free earlobe (Ee) and the other parent has an attached earlobe (ee), what is the probability that their offspring will have an attached earlobe?
A. 0%
B. 25%
C. 50%
D. 100%
Answer:
|
|
sciq-8974
|
multiple_choice
|
What cellular process is controlled by allosteric enzymes at key points in glycolysis and the citric acid cycle?
|
[
"cellular respiration",
"Metabolism",
"photosynthesis",
"mitosis"
] |
A
|
Relevant Documents:
Document 0:::
Cellular respiration is the process by which biological fuels are oxidized in the presence of an inorganic electron acceptor, such as oxygen, to drive the bulk production of adenosine triphosphate (ATP), which contains energy. Cellular respiration may be described as a set of metabolic reactions and processes that take place in the cells of organisms to convert chemical energy from nutrients into ATP, and then release waste products.
Cellular respiration is a vital process that happens in the cells of living organisms, including humans, plants, and animals. It's how cells produce energy to power all the activities necessary for life.
The reactions involved in respiration are catabolic reactions, which break large molecules into smaller ones, producing large amounts of energy (ATP). Respiration is one of the key ways a cell releases chemical energy to fuel cellular activity. The overall reaction occurs in a series of biochemical steps, some of which are redox reactions. Although cellular respiration is technically a combustion reaction, it is an unusual one because of the slow, controlled release of energy from the series of reactions.
Nutrients that are commonly used by animal and plant cells in respiration include sugar, amino acids and fatty acids, and the most common oxidizing agent is molecular oxygen (O2). The chemical energy stored in ATP (the bond of its third phosphate group to the rest of the molecule can be broken allowing more stable products to form, thereby releasing energy for use by the cell) can then be used to drive processes requiring energy, including biosynthesis, locomotion or transportation of molecules across cell membranes.
Aerobic respiration
Aerobic respiration requires oxygen (O2) in order to create ATP. Although carbohydrates, fats and proteins are consumed as reactants, aerobic respiration is the preferred method of pyruvate production in glycolysis, and requires pyruvate to the mitochondria in order to be fully oxidized by the c
Document 1:::
Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs, as well as organism structure and function. Biochemistry is closely related to molecular biology, which is the study of the molecular mechanisms of biological phenomena.
Much of biochemistry deals with the structures, bonding, functions, and interactions of biological macromolecules, such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers, with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles a
Document 2:::
Cellular waste products are formed as a by-product of cellular respiration, a series of processes and reactions that generate energy for the cell in the form of ATP. Examples of cellular respiration pathways that create cellular waste products are aerobic respiration and anaerobic respiration.
Each pathway generates different waste products.
Aerobic respiration
When in the presence of oxygen, cells use aerobic respiration to obtain energy from glucose molecules.
Simplified Theoretical Reaction: C6H12O6 (aq) + 6O2 (g) → 6CO2 (g) + 6H2O (l) + ~ 30ATP
Cells undergoing aerobic respiration produce 6 molecules of carbon dioxide, 6 molecules of water, and up to 30 molecules of ATP (adenosine triphosphate), which is directly used to produce energy, from each molecule of glucose in the presence of surplus oxygen.
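The simplified reaction above is mass-balanced, which is easy to verify programmatically. The sketch below counts atoms on each side of C6H12O6 + 6 O2 → 6 CO2 + 6 H2O, using a minimal parser for parenthesis-free formulas written for this illustration:

```python
import re
from collections import Counter

def atoms(formula):
    # Count atoms in a simple, parenthesis-free formula such as "C6H12O6".
    counts = Counter()
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[elem] += int(num) if num else 1
    return counts

def side(terms):
    # Sum atom counts over (stoichiometric coefficient, formula) pairs.
    total = Counter()
    for coeff, formula in terms:
        for elem, n in atoms(formula).items():
            total[elem] += coeff * n
    return total

reactants = side([(1, "C6H12O6"), (6, "O2")])
products = side([(6, "CO2"), (6, "H2O")])
print(dict(reactants))          # {'C': 6, 'H': 12, 'O': 18}
print(reactants == products)    # True: the equation is mass-balanced
```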
In aerobic respiration, oxygen serves as the recipient of electrons from the electron transport chain. Aerobic respiration is thus very efficient because oxygen is a strong oxidant.
Aerobic respiration proceeds in a series of steps, which also increases efficiency - since glucose is broken down gradually and ATP is produced as needed, less energy is wasted as heat. This strategy results in the waste products H2O and CO2 being formed in different amounts at different phases of respiration. CO2 is formed in Pyruvate decarboxylation, H2O is formed in oxidative phosphorylation, and both are formed in the citric acid cycle.
The simple nature of the final products also indicates the efficiency of this method of respiration. All of the energy stored in the carbon-carbon bonds of glucose is released, leaving CO2 and H2O. Although there is energy stored in the bonds of these molecules, this energy is not easily accessible by the cell. All usable energy is efficiently extracted.
Anaerobic respiration
Anaerobic respiration is done by aerobic organisms when there is not sufficient oxygen in a cell to undergo aerobic respiration as well as by cells called anaerobes that
Document 3:::
In enzymology, the committed step (also known as the first committed step) is an effectively irreversible enzymatic reaction that occurs at a branch point during the biosynthesis of some molecules.
As the name implies, after this step, the molecules are "committed" to the pathway and will ultimately end up in the pathway's final product. The first committed step should not be confused with the rate-determining step, which is the slowest step in a reaction or pathway. However, it is sometimes the case that the first committed step is in fact the rate-determining step as well.
Regulation
Metabolic pathways require tight regulation so that the proper compounds get produced in the proper amounts. Often, the first committed step is regulated by processes such as feedback inhibition and activation. Such regulation ensures that pathway intermediates do not accumulate, a situation that can be wasteful or even harmful to the cell.
Examples of enzymes that catalyze the first committed steps of metabolic pathways
Phosphofructokinase 1 catalyzes the first committed step of glycolysis.
LpxC catalyzes the first committed step of lipid A biosynthesis.
8-amino-7-oxononanoate synthase catalyzes the first committed step in plant biotin synthesis.
MurA catalyzes the first committed step of peptidoglycan biosynthesis.
Aspartate transcarbamoylase catalyzes the committed step in the pyrimidine biosynthetic pathway in E. coli.
3-deoxy-D-arabino-heptulosonate 7-phosphate synthase catalyses the first committed step of the shikimate pathway, which is responsible for the synthesis of the aromatic amino acids tyrosine, tryptophan and phenylalanine in plants, bacteria, fungi and some lower eukaryotes.
Citrate synthase catalyzes the addition of acetyl-CoA to oxaloacetate and is the first committed step of the Citric Acid Cycle.
Acetyl-CoA carboxylase catalyzes the irreversible carboxylation of acetyl-CoA to malonyl-CoA in the first committed step of fatty acid biosynthesis.
Glucose-6-phosphate dehy
Document 4:::
Amylolytic process or amylolysis is the conversion of starch into sugar by the action of acids or enzymes such as amylase.
Starch begins to accumulate inside the leaves of plants during periods of light, when starch can be produced by photosynthesis. This ability to make starch disappears in the dark, when there is insufficient light to drive the reaction forward. The conversion of starch into sugar is carried out by the enzyme amylase.
Different pathways of amylase & location of amylase activity
The process by which amylase breaks down starch for sugar consumption is not the same in all organisms that use amylase to break down stored starch; different amylase pathways are involved in starch degradation. Starch degradation into sugar by amylase was most commonly thought to take place in the chloroplast, but that has been proven wrong. One example is the spinach plant, in which the chloroplast contains both alpha- and beta-amylase (different versions of amylase involved in the breakdown of starch, differing in their substrate specificity). In spinach leaves, the extrachloroplastic region shows the highest level of amylase degradation of starch. The difference between chloroplastic and extrachloroplastic starch degradation lies in which amylase pathway is preferred, either beta- or alpha-amylase. Spinach leaves prefer alpha-amylase, while plants such as wheat, barley and peas prefer beta-amylase.
Usage
The amylolytic process is used in the brewing of alcohol from grains. Since grains contain starches but little to no simple sugars, the sugar needed to produce alcohol is derived from starch via the amylolytic process. In beer brewing, this is done through malting. In sake brewing, the mold Aspergillus oryzae provides amylolysis, and in Tapai, Saccharomyces cerevisiae. The amylolytic process can also be used to allow
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What cellular process is controlled by allosteric enzymes at key points in glycolysis and the citric acid cycle?
A. cellular respiration
B. Metabolism
C. photosynthesis
D. mitosis
Answer:
|
|
sciq-3408
|
multiple_choice
|
Mirrors and lenses are used in optical instruments to reflect and refract what?
|
[
"electricity",
"gravity",
"mass",
"light"
] |
D
|
Relevant Documents:
Document 0:::
Optics is the branch of physics which involves the behavior and properties of light, including its interactions with matter and the construction of instruments that use or detect it. Optics usually describes the behavior of visible, ultraviolet, and infrared light. Because light is an electromagnetic wave, other forms of electromagnetic radiation such as X-rays, microwaves, and radio waves exhibit similar properties.
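Refraction, one of the central behaviours optics describes, follows Snell's law n1·sin θ1 = n2·sin θ2. The minimal sketch below computes the refraction angle and detects total internal reflection; the refractive indices for air and glass are illustrative:

```python
import math

def refract(theta_i_deg, n1, n2):
    """Snell's law: n1*sin(theta_i) = n2*sin(theta_t). Returns the refraction
    angle in degrees, or None when total internal reflection occurs."""
    s = n1 / n2 * math.sin(math.radians(theta_i_deg))
    if abs(s) > 1.0:
        return None                  # total internal reflection
    return math.degrees(math.asin(s))

print(refract(30.0, 1.0, 1.5))   # air -> glass: ~19.5 degrees
print(refract(45.0, 1.5, 1.0))   # glass -> air: None (total internal reflection)
```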
Document 1:::
The calculation of glass properties (glass modeling) is used to predict glass properties of interest or glass behavior under certain conditions (e.g., during production) without experimental investigation, based on past data and experience, with the intention to save time, material, financial, and environmental resources, or to gain scientific insight. It was first practised at the end of the 19th century by A. Winkelmann and O. Schott. The combination of several glass models together with other relevant functions can be used for optimization and six sigma procedures. In the form of statistical analysis glass modeling can aid with accreditation of new data, experimental procedures, and measurement institutions (glass laboratories).
History
Historically, the calculation of glass properties is directly related to the founding of glass science. At the end of the 19th century the physicist Ernst Abbe developed equations that allow calculating the design of optimized optical microscopes in Jena, Germany, stimulated by co-operation with the optical workshop of Carl Zeiss. Before Ernst Abbe's time the building of microscopes was mainly a work of art and experienced craftsmanship, resulting in very expensive optical microscopes with variable quality. Now Ernst Abbe knew exactly how to construct an excellent microscope, but unfortunately, the required lenses and prisms with specific ratios of refractive index and dispersion did not exist. Ernst Abbe was not able to find answers to his needs from glass artists and engineers; glass making was not based on science at this time.
In 1879 the young glass engineer Otto Schott sent Abbe glass samples with a special composition (lithium silicate glass) that he had prepared himself and that he hoped to show special optical properties. Following measurements by Ernst Abbe, Schott's glass samples did not have the desired properties, and they were also not as homogeneous as desired. Nevertheless, Ernst Abbe invited Otto Schott to work
Document 2:::
In optics, an image-forming optical system is a system capable of being used for imaging. The diameter of the aperture of the main objective is a common criterion for comparison among optical systems, such as large telescopes.
The two traditional optical systems are mirror-systems (catoptrics) and lens-systems (dioptrics). However, in the late twentieth century, optical fiber was introduced as a technology for transmitting images over long distances. Catoptrics and dioptrics have a focal point that concentrates light onto a specific point, while optical fiber allows the transfer of an image from one plane to another without the need for an optical focus.
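For both lens-systems and mirror-systems, the location of the image relative to the focus is governed by the thin-lens (and mirror) equation 1/f = 1/d_o + 1/d_i. A minimal sketch with illustrative numbers, a 10 cm focal length and an object at 30 cm:

```python
def thin_lens_image(f, d_o):
    """Thin-lens/mirror equation 1/f = 1/d_o + 1/d_i. Returns the image
    distance and the lateral magnification m = -d_i/d_o. An object exactly
    at the focal point (d_o == f) would put the image at infinity."""
    d_i = 1.0 / (1.0 / f - 1.0 / d_o)
    return d_i, -d_i / d_o

d_i, m = thin_lens_image(f=0.10, d_o=0.30)
print(f"image at {d_i:.3f} m, magnification {m:.2f}")  # image at 0.150 m, -0.50
```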
Isaac Newton is reported to have designed what he called a catadioptrical phantasmagoria, which can be interpreted to mean an elaborate structure of both mirrors and lenses.
Catoptrics and optical fiber have no chromatic aberration, while dioptrics need to have this error corrected. Newton believed that such correction was impossible, because he thought the path of the light depended only on its color. In 1757 John Dollond was able to create an achromatised dioptric, which was the forerunner of the lenses used in all popular photographic equipment today.
Lower-energy X-rays are the highest-energy electromagnetic radiation that can be formed into an image, using a Wolter telescope. There are three types of Wolter telescopes. Near infrared is typically the longest wavelength that is handled optically, such as in some large telescopes.
Document 3:::
Optical engineering is the field of science and engineering encompassing the physical phenomena and technologies associated with the generation, transmission, manipulation, detection, and utilization of light. Optical engineers use optics to solve problems and to design and build devices that make light do something useful. They design and operate optical equipment that uses the properties of light using physics and chemistry, such as lenses, microscopes, telescopes, lasers, sensors, fiber optic communication systems and optical disc systems (e.g. CD, DVD).
Optical engineering metrology uses optical methods to measure either micro-vibrations with instruments like the laser speckle interferometer, or properties of masses with instruments that measure refraction.
Nano-measuring and nano-positioning machines are devices designed by optical engineers. These machines, for example microphotolithographic steppers, have nanometer precision, and consequently are used in the fabrication of goods at this scale.
See also
Optical lens design
Optical physics
Optician
Document 4:::
The National Center for Optics and Photonics Education, known as OP-TEC for short, was a joint effort by educational institutions and other groups to develop curriculum materials for photonics. Headquartered in Waco, Texas, it was funded by the National Science Foundation.
OP-TEC held workshops at various institutions around the United States to promote the use of optics and photonics in secondary and post-secondary curricula.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Mirrors and lenses are used in optical instruments to reflect and refract what?
A. electricity
B. gravity
C. mass
D. light
Answer:
|
|
sciq-4691
|
multiple_choice
|
The sum of the kinetic and potential energies of a system’s atoms and molecules is called what?
|
[
"internal energy",
"mechanical energy",
"stored energy",
"used energy"
] |
A
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In physics, energy density is the amount of energy stored in a given system or region of space per unit volume. It is sometimes confused with energy per unit mass, which is properly called specific energy.
Often only the useful or extractable energy is measured, which is to say that inaccessible energy (such as rest mass energy) is ignored. In cosmological and other general relativistic contexts, however, the energy densities considered are those that correspond to the elements of the stress-energy tensor and therefore do include mass energy as well as energy densities associated with pressure.
Energy per unit volume has the same physical units as pressure, and in many situations is synonymous. For example, the energy density of a magnetic field may be expressed as u_B = B²/(2μ₀) and behaves like a physical pressure. Likewise, the energy required to compress a gas to a certain volume may be determined by multiplying the difference between the gas pressure and the external pressure by the change in volume. A pressure gradient describes the potential to perform work on the surroundings by converting internal energy to work until equilibrium is reached.
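As a quick worked check of the magnetic-field expression above (field strength chosen for illustration, not from the source):

$$u_B = \frac{B^2}{2\mu_0} = \frac{(1\,\mathrm{T})^2}{2 \cdot 4\pi \times 10^{-7}\,\mathrm{H/m}} \approx 3.98 \times 10^{5}\,\mathrm{J/m^3},$$

which, read as a pressure, is roughly 4 atmospheres.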
Overview
There are different types of energy stored in materials, and it takes a particular type of reaction to release each type of energy. In order of the typical magnitude of the energy released, these types of reactions are: nuclear, chemical, electrochemical, and electrical.
Nuclear reactions take place in stars and nuclear power plants, both of which derive energy from the binding energy of nuclei. Chemical reactions are used by organisms to derive energy from food and by automobiles to derive energy from gasoline. Liquid hydrocarbons (fuels such as gasoline, diesel and kerosene) are today the densest way known to economically store and transport chemical energy at a large scale (1 kg of diesel fuel burns with the oxygen contained in ≈15 kg of air). Electrochemical reactions are used by most mobile devices such as laptop
Document 2:::
The internal energy of a thermodynamic system is the energy contained within it, measured as the quantity of energy necessary to bring the system from its standard internal state to its present internal state of interest, accounting for the gains and losses of energy due to changes in its internal state, including such quantities as magnetization. It excludes the kinetic energy of motion of the system as a whole and the potential energy of position of the system as a whole, with respect to its surroundings and external force fields. It includes the thermal energy, i.e., the constituent particles' kinetic energies of motion relative to the motion of the system as a whole. The internal energy of an isolated system cannot change, as expressed in the law of conservation of energy, a foundation of the first law of thermodynamics.
The internal energy cannot be measured absolutely. Thermodynamics concerns changes in the internal energy, not its absolute value. The processes that change the internal energy are transfers, into or out of the system, of matter, or of energy, as heat, or by thermodynamic work. These processes are measured by changes in the system's properties, such as temperature, entropy, volume, electric polarization, and molar constitution. The internal energy depends only on the internal state of the system and not on the particular choice from many possible processes by which energy may pass into or out of the system. It is a state variable, a thermodynamic potential, and an extensive property.
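A one-line worked example of the resulting sign convention (illustrative numbers): if a gas absorbs $Q = 500\,\mathrm{J}$ of heat while doing $W = 200\,\mathrm{J}$ of work on its surroundings, the first law gives

$$\Delta U = Q - W = 500\,\mathrm{J} - 200\,\mathrm{J} = 300\,\mathrm{J}.$$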
Thermodynamics defines internal energy macroscopically, for the body as a whole. In statistical mechanics, the internal energy of a body can be analyzed microscopically in terms of the kinetic energies of microscopic motion of the system's particles from translations, rotations, and vibrations, and of the potential energies associated with microscopic forces, including chemical bonds.
The unit of energy in the International System of Units (SI) is the joule (J)
Document 3:::
The energy systems language, also referred to as energese, or energy circuit language, or generic systems symbols, is a modelling language used for composing energy flow diagrams in the field of systems ecology. It was developed by Howard T. Odum and colleagues in the 1950s during studies of the tropical forests funded by the United States Atomic Energy Commission.
Design intent
The design intent of the energy systems language was to facilitate the generic depiction of energy flows through any scale system while encompassing the laws of physics, and in particular, the laws of thermodynamics (see energy transformation for an example).
In particular H.T. Odum aimed to produce a language which could facilitate the intellectual analysis, engineering synthesis and management of global systems such as the geobiosphere, and its many subsystems. Within this aim, H.T. Odum had a strong concern that many abstract mathematical models of such systems were not thermodynamically valid. Hence he used analog computers to make system models due to their intrinsic value; that is, the electronic circuits are of value for modelling natural systems which are assumed to obey the laws of energy flow, because, in themselves the circuits, like natural systems, also obey the known laws of energy flow, where the energy form is electrical. However Odum was interested not only in the electronic circuits themselves, but also in how they might be used as formal analogies for modeling other systems which also had energy flowing through them. As a result, Odum did not restrict his inquiry to the analysis and synthesis of any one system in isolation. The discipline that is most often associated with this kind of approach, together with the use of the energy systems language is known as systems ecology.
General characteristics
When applying the electronic circuits (and schematics) to modeling ecological and economic systems, Odum believed that generic categories, or characteristic modules, could
Document 4:::
The term "thermal energy" is used loosely in various contexts in physics and engineering, generally related to the kinetic energy of vibrating and colliding atoms in a substance. It can refer to several different well-defined physical concepts. These include the internal energy or enthalpy of a body of matter and radiation; heat, defined as a type of energy transfer (as is thermodynamic work); and the characteristic energy of a degree of freedom, , in a system that is described in terms of its microscopic particulate constituents (where denotes temperature and denotes the Boltzmann constant).
Relation to heat and internal energy
In thermodynamics, heat is energy transferred to or from a thermodynamic system by mechanisms other than thermodynamic work or transfer of matter, such as conduction, radiation, and friction. Heat refers to a quantity transferred between systems, not to a property of any one system, or "contained" within it. On the other hand, internal energy and enthalpy are properties of a single system. Heat and work depend on the way in which an energy transfer occurred, whereas internal energy is a property of the state of a system and can thus be understood without knowing how the energy got there.
Macroscopic thermal energy
The internal energy of a body can change in a process in which chemical potential energy is converted into non-chemical energy. In such a process, the thermodynamic system can change its internal energy by doing work on its surroundings, or by gaining or losing energy as heat. It is not quite lucid to merely say that "the converted chemical potential energy has simply become internal energy". It is, however, convenient and more lucid to say that "the chemical potential energy has been converted into thermal energy". Such thermal energy may be viewed as a contributor to internal energy or to enthalpy, thinking of the contribution as a process without thinking that the contributed energy has become an identifiable component o
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The sum of the kinetic and potential energies of a system’s atoms and molecules is called what?
A. internal energy
B. mechanical energy
C. stored energy
D. used energy
Answer:
|
|
sciq-4331
|
multiple_choice
|
Because carbohydrates have a carbonyl functional group and several hydroxyl groups, they can undergo a variety of biochemically important reactions. The carbonyl group, for example, can be oxidized to form a carboxylic acid or reduced to form this?
|
[
"glucose",
"caffeine",
"alcohol",
"sucrose"
] |
C
|
Relevant Documents:
Document 0:::
A reducing sugar is any sugar that is capable of acting as a reducing agent. In an alkaline solution, a reducing sugar forms some aldehyde or ketone, which allows it to act as a reducing agent, for example in Benedict's reagent. In such a reaction, the sugar becomes a carboxylic acid.
All monosaccharides are reducing sugars, along with some disaccharides, some oligosaccharides, and some polysaccharides. The monosaccharides can be divided into two groups: the aldoses, which have an aldehyde group, and the ketoses, which have a ketone group. Ketoses must first tautomerize to aldoses before they can act as reducing sugars. The common dietary monosaccharides galactose, glucose and fructose are all reducing sugars.
Disaccharides are formed from two monosaccharides and can be classified as either reducing or nonreducing. Nonreducing disaccharides like sucrose and trehalose have glycosidic bonds between their anomeric carbons and thus cannot convert to an open-chain form with an aldehyde group; they are stuck in the cyclic form. Reducing disaccharides like lactose and maltose have only one of their two anomeric carbons involved in the glycosidic bond, while the other is free and can convert to an open-chain form with an aldehyde group.
The aldehyde functional group allows the sugar to act as a reducing agent, for example, in the Tollens' test or Benedict's test. The cyclic hemiacetal forms of aldoses can open to reveal an aldehyde, and certain ketoses can undergo tautomerization to become aldoses. However, acetals, including those found in polysaccharide linkages, cannot easily become free aldehydes.
Reducing sugars react with amino acids in the Maillard reaction, a series of reactions that occurs while cooking food at high temperatures and that is important in determining the flavor of food. Also, the levels of reducing sugars in wine, juice, and sugarcane are indicative of the quality of these food products.
Terminology
Oxidation-reduction
A reducing sugar is on
Document 1:::
The metabolome refers to the complete set of small-molecule chemicals found within a biological sample. The biological sample can be a cell, a cellular organelle, an organ, a tissue, a tissue extract, a biofluid or an entire organism. The small molecule chemicals found in a given metabolome may include both endogenous metabolites that are naturally produced by an organism (such as amino acids, organic acids, nucleic acids, fatty acids, amines, sugars, vitamins, co-factors, pigments, antibiotics, etc.) as well as exogenous chemicals (such as drugs, environmental contaminants, food additives, toxins and other xenobiotics) that are not naturally produced by an organism.
In other words, there is both an endogenous metabolome and an exogenous metabolome. The endogenous metabolome can be further subdivided to include a "primary" and a "secondary" metabolome (particularly when referring to plant or microbial metabolomes). A primary metabolite is directly involved in the normal growth, development, and reproduction. A secondary metabolite is not directly involved in those processes, but usually has important ecological function. Secondary metabolites may include pigments, antibiotics or waste products derived from partially metabolized xenobiotics. The study of the metabolome is called metabolomics.
Origins
The word metabolome appears to be a blending of the words "metabolite" and "chromosome". It was constructed to imply that metabolites are indirectly encoded by genes or act on genes and gene products. The term "metabolome" was first used in 1998 and was likely coined to match with existing biological terms referring to the complete set of genes (the genome), the complete set of proteins (the proteome) and the complete set of transcripts (the transcriptome). The first book on metabolomics was published in 2003. The first journal dedicated to metabolomics (titled simply "Metabolomics") was launched in 2005 and is currently edited by Prof. Roy Goodacre. Some of the m
Document 2:::
Glycoinformatics is a field of bioinformatics that pertains to the study of carbohydrates involved in protein post-translational modification. It broadly includes (but is not restricted to) database, software, and algorithm development for the study of carbohydrate structures, glycoconjugates, enzymatic carbohydrate synthesis and degradation, as well as carbohydrate interactions. Conventional usage of the term does not currently include the treatment of carbohydrates from the better-known nutritive aspect.
Issues to consider
Even though glycosylation is the most common form of protein modification, and involves highly complex carbohydrate structures, bioinformatics for the glycome is still poorly developed.
Unlike proteins and nucleic acids, which are linear, carbohydrates are often branched and extremely complex. For instance, just four sugars can be strung together to form more than 5 million different types of carbohydrates, and nine different sugars may be assembled into 15 million possible four-sugar chains.
Also, the number of simple sugars that make up glycans is more than the number of nucleotides that make up DNA or RNA. Therefore, it is more computationally expensive to evaluate their structures.
One of the main constraints in glycoinformatics is the difficulty of representing sugars in sequence form, especially due to their branching nature. Owing to the lack of a genetic blueprint, carbohydrates do not have a "fixed" sequence. Instead, the sequence is largely determined by the presence of a variety of enzymes, their kinetic differences, and variations in the biosynthetic micro-environment of the cells. This increases the complexity of analysis and the experimental reproducibility of the carbohydrate structure of interest. It is for this reason that carbohydrates are often considered "information poor" molecules.
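To make the combinatorial claim above concrete, here is a toy Python count of linear chains only; the per-bond choices (4 linkage positions, 2 anomeric configurations) and the 9-sugar alphabet are simplifying assumptions for illustration, not exact chemistry, and branching multiplies the totals further.

```python
# Toy model of why glycan structure space explodes. The per-bond choice
# counts below are simplified assumptions, not exact carbohydrate chemistry.
LINK_POSITIONS = 4   # assumed hydroxyl positions available on the acceptor
ANOMERS = 2          # alpha or beta configuration at each glycosidic bond

def linear_chain_count(n_residues, alphabet=9):
    """Count distinct *linear* chains of n_residues sugars (toy model)."""
    sugars = alphabet ** n_residues                         # sugar identity per position
    bonds = (LINK_POSITIONS * ANOMERS) ** (n_residues - 1)  # choices per bond
    return sugars * bonds

for n in range(2, 6):
    print(n, "residues:", linear_chain_count(n), "linear structures")
```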
Databases
Table of major glyco-databases.
Document 3:::
Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs, as well as organism structure and function. Biochemistry is closely related to molecular biology, which is the study of the molecular mechanisms of biological phenomena.
Much of biochemistry deals with the structures, bonding, functions, and interactions of biological macromolecules, such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers, with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles a
Document 4:::
This is a list of articles that describe particular biomolecules or types of biomolecules.
A
For substances with an A- or α- prefix such as
α-amylase, please see the parent page (in this case Amylase).
A23187 (Calcimycin, Calcium Ionophore)
Abamectine
Abietic acid
Acetic acid
Acetylcholine
Actin
Actinomycin D
Adenine
Adenosine
Adenosine diphosphate (ADP)
Adenosine monophosphate (AMP)
Adenosine triphosphate (ATP)
Adenylate cyclase
Adiponectin
Adonitol
Adrenaline, epinephrine
Adrenocorticotropic hormone (ACTH)
Aequorin
Aflatoxin
Agar
Alamethicin
Alanine
Albumins
Aldosterone
Aleurone
Alpha-amanitin
Alpha-MSH (Melanocyte-stimulating hormone)
Allantoin
Allethrin
α-Amanitin, see Alpha-amanitin
Amino acid
Amylase (also see α-amylase)
Anabolic steroid
Anandamide (ANA)
Androgen
Anethole
Angiotensinogen
Anisomycin
Antidiuretic hormone (ADH)
Anti-Müllerian hormone (AMH)
Arabinose
Arginine
Argonaute
Ascomycin
Ascorbic acid (vitamin C)
Asparagine
Aspartic acid
Asymmetric dimethylarginine
ATP synthase
Atrial-natriuretic peptide (ANP)
Auxin
Avidin
Azadirachtin A – C35H44O16
B
Bacteriocin
Beauvericin
beta-Hydroxy beta-methylbutyric acid
beta-Hydroxybutyric acid
Bicuculline
Bilirubin
Biopolymer
Biotin (Vitamin H)
Brefeldin A
Brassinolide
Brucine
Butyric acid
C
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Because carbohydrates have a carbonyl functional group and several hydroxyl groups, they can undergo a variety of biochemically important reactions. The carbonyl group, for example, can be oxidized to form a carboxylic acid or reduced to form this?
A. glucose
B. caffeine
C. alcohol
D. sucrose
Answer:
|
|
sciq-1764
|
multiple_choice
|
What is Venus covered in a thick layer of?
|
[
"storms",
"clouds",
"gases",
"fog"
] |
B
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earning a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college careers at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
The Inverness Campus is an area in Inverness, Scotland. 5.5 hectares of the site have been designated as an enterprise area for life sciences by the Scottish Government. This designation is intended to encourage research and development in the field of life sciences, by providing incentives to locate at the site.
The enterprise area is part of a larger site, over 200 acres, which will house Inverness College, Scotland's Rural College (SRUC), the University of the Highlands and Islands, a health science centre and sports and other community facilities. The purpose built research hub will provide space for up to 30 staff and researchers, allowing better collaboration.
The Highland Science Academy will be located on the site, a collaboration formed by Highland Council, employers and public bodies. The academy will be aimed towards assisting young people to gain the necessary skills to work in the energy, engineering and life sciences sectors.
History
The site was identified in 2006. Work started to develop the infrastructure on the site in early 2012. A virtual tour was made available in October 2013 to help mark Doors Open Day.
The construction had reached the halfway stage in May 2014, meaning that it was on track to open its doors to its first students in August 2015.
In May 2014, work was due to commence on a building designed to provide office space and laboratories as part of the campus's "life science" sector. Morrison Construction have been appointed to undertake the building work.
Scotland's Rural College (SRUC) will be able to relocate their Inverness-based activities to the Campus. SRUC's research centre for Comparative Epidemiology and Medicine, and Agricultural Business Consultancy services could co-locate with UHI where their activities have complementary themes.
By the start of 2017, there were more than 600 people working at the site.
In June 2021, a new bridge opened connecting Inverness Campus to Inverness Shopping Park. It crosses the Aberdeen
Document 3:::
Tech City College (Formerly STEM Academy) is a free school sixth form located in the Islington area of the London Borough of Islington, England.
It originally opened in September 2013 as STEM Academy Tech City and specialised in Science, Technology, Engineering and Maths (STEM) and the Creative Application of Maths and Science. In September 2015, STEM Academy joined the Aspirations Academy Trust and was renamed Tech City College. Tech City College offers A-levels and BTECs as programmes of study for students.
Document 4:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
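A minimal Python sketch of this definition, with a toy domain and hypothetical feasible states invented for illustration: a knowledge space is a family of subsets of the domain that contains the empty set and the full domain and is closed under union.

```python
# Toy knowledge space on three skills, where skill "b" has "a" as a
# prerequisite; the states are hypothetical and chosen for illustration.
DOMAIN = frozenset({"a", "b", "c"})
STATES = {
    frozenset(), frozenset({"a"}), frozenset({"c"}),
    frozenset({"a", "b"}), frozenset({"a", "c"}),
    frozenset({"a", "b", "c"}),
}

def is_knowledge_space(states, domain):
    """Check the defining axioms: contains {} and the domain, union-closed."""
    if frozenset() not in states or domain not in states:
        return False
    return all((s | t) in states for s in states for t in states)

print(is_knowledge_space(STATES, DOMAIN))  # True for this toy family
```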
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is Venus covered in a thick layer of?
A. storms
B. clouds
C. gases
D. fog
Answer:
|
|
sciq-7795
|
multiple_choice
|
Where does most geologic activity take place?
|
[
"outer core",
"plate boundaries",
"asthenosphere",
"inner core"
] |
B
|
Relevant Documents:
Document 0:::
The geologic record in stratigraphy, paleontology and other natural sciences refers to the entirety of the layers of rock strata. That is, deposits laid down by volcanism or by deposition of sediment derived from weathering detritus (clays, sands etc.). This includes all its fossil content and the information it yields about the history of the Earth: its past climate, geography, geology and the evolution of life on its surface. According to the law of superposition, sedimentary and volcanic rock layers are deposited on top of each other. They harden over time to become a solidified (competent) rock column, that may be intruded by igneous rocks and disrupted by tectonic events.
Correlating the rock record
At a certain locality on the Earth's surface, the rock column provides a cross section of the natural history in the area during the time covered by the age of the rocks. This is sometimes called the rock history, and it gives a window into the natural history of the location that spans many geological time units such as ages, epochs, or in some cases even multiple major geologic periods for the particular geographic region or regions. The geologic record is in no one place entirely complete: where geologic forces in one age produce a low-lying region that accumulates deposits much like a layer cake, in the next age they may uplift the region, so that the same area instead weathers and is torn down by chemistry, wind, temperature, and water. That is, in a given location the geologic record can be, and quite often is, interrupted as the ancient local environment is converted by geological forces into new landforms and features. Sediment core data at the mouths of large riverine drainage basins, some of which reach great depths, thoroughly support the law of superposition.
However, using broadly occurring deposited layers trapped within differently located rock columns, geologists have pieced together a system of units covering most of the geologic time scale
Document 1:::
The core–mantle boundary (CMB) of Earth lies between the planet's silicate mantle and its liquid iron–nickel outer core, at a depth of about 2,890 km below Earth's surface. The boundary is observed via the discontinuity in seismic wave velocities at that depth due to the differences between the acoustic impedances of the solid mantle and the molten outer core. P-wave velocities are much slower in the outer core than in the deep mantle while S-waves do not exist at all in the liquid portion of the core. Recent evidence suggests a distinct boundary layer directly above the CMB possibly made of a novel phase of the basic perovskite mineralogy of the deep mantle named post-perovskite. Seismic tomography studies have shown significant irregularities within the boundary zone and appear to be dominated by the African and Pacific Large Low-Shear-Velocity Provinces (LLSVP).
The uppermost section of the outer core is thought to be about 500–1,800 K hotter than the overlying mantle, creating a thermal boundary layer. The boundary is thought to harbor topography, much like Earth's surface, that is supported by solid-state convection within the overlying mantle. Variations in the thermal properties of the core-mantle boundary may affect how the outer core's iron-rich fluids flow, which are ultimately responsible for Earth's magnetic field.
The D″ region
The approx. 200 km thick layer of the lower mantle directly above the boundary is referred to as the D″ region ("D double-prime" or "D prime prime") and is sometimes included in discussions regarding the core–mantle boundary zone. The D″ name originates from geophysicist Keith Bullen's designations for the Earth's layers. His system was to label each layer alphabetically, A through G, with the crust as 'A' and the inner core as 'G'. In his 1942 publication of his model, the entire lower mantle was the D layer. In 1949, Bullen found his 'D' layer to actually be two different layers. The upper part of the D layer, about 1800 km thick, was r
Document 2:::
The thermal history of Earth involves the study of the cooling history of Earth's interior. It is a sub-field of geophysics. (Thermal histories are also computed for the internal cooling of other planetary and stellar bodies.) The study of the thermal evolution of Earth's interior is uncertain and controversial in all aspects, from the interpretation of petrologic observations used to infer the temperature of the interior, to the fluid dynamics responsible for heat loss, to material properties that determine the efficiency of heat transport.
Overview
Observations that can be used to infer the temperature of Earth's interior range from the oldest rocks on Earth to modern seismic images of the inner core size. Ancient volcanic rocks can be associated with a depth and temperature of melting through their geochemical composition. Using this technique and some geological inferences about the conditions under which the rock is preserved, the temperature of the mantle can be inferred. The mantle itself is fully convective, so that the temperature in the mantle is basically constant with depth outside the top and bottom thermal boundary layers. This is not quite true because the temperature in any convective body under pressure must increase along an adiabat, but the adiabatic temperature gradient is usually much smaller than the temperature jumps at the boundaries. Therefore, the mantle is usually associated with a single or potential temperature that refers to the mid-mantle temperature extrapolated along the adiabat to the surface. The potential temperature of the mantle is estimated to be about 1350 °C today. There is an analogous potential temperature of the core but since there are no samples from the core its present-day temperature relies on extrapolating the temperature along an adiabat from the inner core boundary, where the iron solidus is somewhat constrained.
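For a sense of why the adiabatic correction is small (standard illustrative values, not from the excerpt), the adiabatic gradient in the mantle is approximately

$$\frac{dT}{dz} = \frac{\alpha g T}{c_p} \approx \frac{(3 \times 10^{-5}\,\mathrm{K^{-1}})(9.8\,\mathrm{m/s^2})(1600\,\mathrm{K})}{1250\,\mathrm{J\,kg^{-1}\,K^{-1}}} \approx 0.4\,\mathrm{K/km},$$

far smaller than the temperature jumps across the thermal boundary layers.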
Thermodynamics
The simplest mathematical formulation of the thermal history of Earth's interior i
Document 3:::
Earth's crustal evolution involves the formation, destruction and renewal of the rocky outer shell at that planet's surface.
The variation in composition within the Earth's crust is much greater than that of other terrestrial planets. Mars, Venus, Mercury and other planetary bodies have relatively quasi-uniform crusts unlike that of the Earth which contains both oceanic and continental plates. This unique property reflects the complex series of crustal processes that have taken place throughout the planet's history, including the ongoing process of plate tectonics.
The proposed mechanisms regarding Earth's crustal evolution take a theory-orientated approach. Fragmentary geologic evidence and observations provide the basis for hypothetical solutions to problems relating to the early Earth system. Therefore, a combination of these theories creates both a framework of current understanding and also a platform for future study.
Early crust
Mechanisms of early crust formation
The early Earth was entirely molten. This was due to high temperatures created and maintained by the following processes:
Compression of the early atmosphere
Rapid axial rotation
Regular impacts with neighbouring planetesimals.
The mantle remained hotter than modern day temperatures throughout the Archean. Over time the Earth began to cool as planetary accretion slowed and heat stored within the magma ocean was lost to space through radiation.
A theory for the initiation of magma solidification states that once cool enough, the cooler base of the magma ocean would begin to crystallise first. This is because pressures of 25 GPa at the base cause the solidus to lower. The formation of a thin 'chill-crust' at the extreme surface would provide thermal insulation to the shallow subsurface, keeping it warm enough to maintain the mechanism of crystallisation from the deep magma ocean.
The composition of the crystals produced during the crystallisation of the magma ocean varied with depth. Ex
Document 4:::
Tectonophysics, a branch of geophysics, is the study of the physical processes that underlie tectonic deformation. This includes measurement or calculation of the stress- and strain fields on Earth’s surface and the rheologies of the crust, mantle, lithosphere and asthenosphere.
Overview
Tectonophysics is concerned with movements in the Earth's crust and deformations over scales from meters to thousands of kilometers. These govern processes on local and regional scales and at structural boundaries, such as the destruction of continental crust (e.g. gravitational instability) and oceanic crust (e.g. subduction), convection in the Earth's mantle (availability of melts), the course of continental drift, and second-order effects of plate tectonics such as thermal contraction of the lithosphere. This involves the measurement of a hierarchy of strains in rocks and plates as well as deformation rates; the study of laboratory analogues of natural systems; and the construction of models for the history of deformation.
History
Tectonophysics was adopted as the name of a new section of AGU on April 19, 1940, at AGU's 21st Annual Meeting. According to the AGU website (https://tectonophysics.agu.org/agu-100/section-history/), in the words of Norman Bowen, the main goal of the tectonophysics section was to "designate this new borderline field between geophysics, physics and geology … for the solution of problems of tectonics." Consequently, the claim below that the term was defined in 1954 by Gzovskii is clearly incorrect: since 1940, members of AGU had been presenting papers at AGU meetings, the contents of which defined the meaning of the field.
Tectonophysics was defined as a field in 1954 when Mikhail Vladimirovich Gzovskii published three papers in the journal Izvestiya Akad. Nauk SSSR, Sireya Geofizicheskaya: "On the tasks and content of tectonophysics", "Tectonic stress fields", and "Modeling of tectonic stress fields". He defined the main goals of tectonophysica
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Where does most geologic activity take place?
A. outer core
B. plate boundaries
C. asthenosphere
D. inner core
Answer:
|
|
sciq-790
|
multiple_choice
|
The voltage and current are exactly in phase in a what?
|
[
"resistor",
"capacitor",
"battery",
"harmonic"
] |
A
|
Relevant Documents:
Document 0:::
In electrical engineering, electrical terms are associated into pairs called duals. A dual of a relationship is formed by interchanging voltage and current in an expression. The dual expression thus produced is of the same form, and the reason that the dual is always a valid statement can be traced to the duality of electricity and magnetism.
Here is a partial list of electrical dualities:
voltage – current
parallel – serial (circuits)
resistance – conductance
voltage division – current division
impedance – admittance
capacitance – inductance
reactance – susceptance
short circuit – open circuit
Kirchhoff's current law – Kirchhoff's voltage law.
Thévenin's theorem – Norton's theorem
History
The use of duality in circuit theory is due to Alexander Russell who published his ideas in 1904.
Examples
Constitutive relations
Resistor and conductor (Ohm's law): v = R i ↔ i = G v
Capacitor and inductor – differential form: i = C dv/dt ↔ v = L di/dt
Capacitor and inductor – integral form: v = (1/C) ∫ i dt ↔ i = (1/L) ∫ v dt
Voltage division — current division: vR1 = v R1/(R1 + R2) ↔ iG1 = i G1/(G1 + G2)
Impedance and admittance
Resistor and conductor: Z = R ↔ Y = G
Capacitor and inductor: Z = 1/(jωC) ↔ Y = 1/(jωL)
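A small numeric check of the duality above (a Python sketch with invented component values): the impedance of a series R–L branch equals, at every frequency, the admittance of its dual parallel G–C branch.

```python
import numpy as np

# Dual of a series R-L branch: a parallel G-C branch with G = R and C = L
# (numerically, in dual units). Duality swaps impedance and admittance.
R, L = 2.0, 0.5   # ohms, henries (series branch; invented values)
G, C = R, L       # siemens, farads (dual parallel branch)

omega = np.linspace(1.0, 1e3, 5)       # sample angular frequencies
Z_series = R + 1j * omega * L          # impedance of the series R-L branch
Y_dual = G + 1j * omega * C            # admittance of the parallel G-C branch

print(np.allclose(Z_series, Y_dual))   # True: Z of a network = Y of its dual
```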
See also
Duality (electricity and magnetism)
Duality (mechanical engineering)
Dual impedance
Dual graph
Mechanical–electrical analogies
List of dualities
Document 1:::
There are several formal analogies that can be made between electricity, which is invisible to the eye, and more familiar physical behaviors, such as the flowing of water or the motion of mechanical devices.
In the case of capacitance, one analogy to a capacitor in mechanical rectilineal terms is a spring where the compliance of the spring is analogous to the capacitance. Thus in electrical engineering, a capacitor may be defined as an ideal electrical component which satisfies the equation
v = (1/C) ∫ i dt,
where v = voltage measured at the terminals of the capacitor, C = the capacitance of the capacitor, i = current flowing between the terminals of the capacitor, and t = time.
The equation quoted above has the same form as that describing an ideal massless spring:
f = k ∫ v dt, where:
f is the force applied between the two ends of the spring,
k is the stiffness, or spring constant (inverse of compliance) defined as force/displacement, and
v is the speed (or velocity) of one end of the spring, the other end being fixed.
Note that in the electrical case, current (i) is defined as the rate of change of charge (Q) with respect to time:
i = dQ/dt
While in the mechanical case, velocity (v) is defined as the rate of change of displacement (x) with respect to time:
v = dx/dt
Thus, in this analogy:
Charge is represented by linear displacement,
current is represented by linear velocity,
voltage by force,
time by time.
Also, these analogous relationships apply:
energy. Energy stored in a spring is ½kx², while energy stored in a capacitor is Q²/(2C).
Electric power. Here there is an analogy between the mechanical concept of power as the scalar product of force and velocity, and the electrical concept that in an AC circuit with sinusoidal excitation, power is the product V I cos(φ), where φ is the phase angle between V and I, measured in RMS terms.
Electrical resistance (R) is analogous to mechanical viscous drag coefficient (force being proportional to velocity is analogous to Ohm's law - voltage being proportional to current
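A numeric sketch of the analogy (Python, invented values): drive the capacitor with a current i(t) and the spring with the same waveform as end velocity v(t); the capacitor voltage and the spring force then trace identical curves when the capacitance equals the compliance, C = 1/k.

```python
import numpy as np

# Integrate v = (1/C) * int(i dt) and f = k * int(v dt) with the same input.
C = 0.25              # farads (invented)
k = 1.0 / C           # spring stiffness = 1 / compliance
dt = 1e-3
t = np.arange(0.0, 2.0, dt)
flow = np.sin(2 * np.pi * t)         # i(t) for the capacitor, v(t) for the spring

v_cap = np.cumsum(flow) * dt / C     # capacitor voltage
f_spring = k * np.cumsum(flow) * dt  # spring force

print(np.allclose(v_cap, f_spring))  # True: identical responses
```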
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 4:::
Mathematical methods are integral to the study of electronics.
Mathematics in electronics
Electronics engineering careers usually include courses in calculus (single and multivariable), complex analysis, differential equations (both ordinary and partial), linear algebra and probability. Fourier analysis and Z-transforms are also subjects which are usually included in electrical engineering programs. Laplace transform can simplify computing RLC circuit behaviour.
Basic applications
A number of electrical laws apply to all electrical networks. These include
Faraday's law of induction: Any change in the magnetic environment of a coil of wire will cause a voltage (emf) to be "induced" in the coil.
Gauss's Law: The total of the electric flux out of a closed surface is equal to the charge enclosed divided by the permittivity.
Kirchhoff's current law: the sum of all currents entering a node is equal to the sum of all currents leaving the node; equivalently, the sum of all currents at a junction is zero.
Kirchhoff's voltage law: the directed sum of the electrical potential differences around a circuit must be zero.
Ohm's law: the voltage across a resistor is the product of its resistance and the current flowing through it, at constant temperature.
Norton's theorem: any two-terminal collection of voltage sources and resistors is electrically equivalent to an ideal current source in parallel with a single resistor.
Thévenin's theorem: any two-terminal combination of voltage sources and resistors is electrically equivalent to a single voltage source in series with a single resistor.
Millman's theorem: the voltage on the ends of branches in parallel is equal to the sum of the currents flowing in every branch divided by the total equivalent conductance.
See also Analysis of resistive circuits.
Circuit analysis is the study of methods to solve linear systems for an unknown variable.
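As a concrete illustration of circuit analysis as a linear system, here is a minimal nodal-analysis sketch built from Kirchhoff's current law; the three-resistor example circuit and all its values are invented.

```python
import numpy as np

# A 10 V source feeds node 1 through R1; R2 joins node 1 to node 2;
# R3 joins node 2 to ground. KCL at each node gives G @ v = i.
R1, R2, R3 = 100.0, 200.0, 300.0   # ohms (invented values)
Vs = 10.0                          # volts

G = np.array([
    [1/R1 + 1/R2, -1/R2],          # KCL at node 1
    [-1/R2,        1/R2 + 1/R3],   # KCL at node 2
])
i = np.array([Vs / R1, 0.0])       # equivalent source current into node 1

v = np.linalg.solve(G, i)
print("node voltages:", v)         # [8.333..., 5.0] volts for these values
```

For this series chain the result can be checked by hand with voltage division: the loop current is 10 V / 600 Ω, giving about 8.33 V and 5 V at the two nodes.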
Circuit analysis
Components
There are many electronic components currently used and they all have thei
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The voltage and current are exactly in phase in a what?
A. resistor
B. capacitor
C. battery
D. harmonic
Answer:
|
|
ai2_arc-454
|
multiple_choice
|
In December, one side of Earth will receive less energy from the Sun than the other side. Which statement best explains this fact?
|
[
"Earth rotates on its axis.",
"Earth is tilted on its axis.",
"Sunlight traveling to Earth reflects off the Moon.",
"Sunlight traveling to Earth is blocked by Moon."
] |
B
|
Relevant Documents:
Document 0:::
Sun path, sometimes also called day arc, refers to the daily and seasonal arc-like path that the Sun appears to follow across the sky as the Earth rotates and orbits the Sun. The Sun's path affects the length of daytime experienced and amount of daylight received along a certain latitude during a given season.
The relative position of the Sun is a major factor in the heat gain of buildings and in the performance of solar energy systems. Accurate location-specific knowledge of sun path and climatic conditions is essential for economic decisions about solar collector area, orientation, landscaping, summer shading, and the cost-effective use of solar trackers.
Angles
Effect of the Earth's axial tilt
Sun paths at any latitude and any time of the year can be determined from basic geometry. The Earth's axis of rotation tilts about 23.5 degrees, relative to the plane of Earth's orbit around the Sun. As the Earth orbits the Sun, this creates the 47° declination difference between the solstice sun paths, as well as the hemisphere-specific difference between summer and winter.
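To make the tilt geometry quantitative, here is a small Python sketch using a standard cosine approximation for the solar declination; the latitude and day numbers are illustrative.

```python
import math

def declination_deg(day_of_year):
    # Common approximation: about -23.44 deg near the December solstice,
    # about +23.44 deg near the June solstice.
    return -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))

def noon_altitude_deg(latitude_deg, day_of_year):
    # Altitude of the Sun above the horizon at local solar noon.
    return 90.0 - abs(latitude_deg - declination_deg(day_of_year))

print(round(noon_altitude_deg(40.0, 355), 1))  # Dec solstice, 40 N: ~26.6 deg
print(round(noon_altitude_deg(40.0, 172), 1))  # Jun solstice, 40 N: ~73.4 deg
```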
In the Northern Hemisphere, the winter sun (November, December, January) rises in the southeast, transits the celestial meridian at a low angle in the south (more than 43° above the southern horizon in the tropics), and then sets in the southwest. It is on the south (equator) side of the house all day long. A vertical window facing south (equator side) is effective for capturing solar thermal energy. For comparison, the winter sun in the Southern Hemisphere (May, June, July) rises in the northeast, peaks out at a low angle in the north (more than halfway up from the horizon in the tropics), and then sets in the northwest. There, the north-facing window would let in plenty of solar thermal energy to the house.
In the Northern Hemisphere in summer (May, June, July), the Sun rises in the northeast, peaks out slightly south of overhead point (lower in the south at higher latitude), and then sets in t
Document 1:::
Solar rotation varies with latitude. The Sun is not a solid body, but is composed of a gaseous plasma. Different latitudes rotate at different periods. The source of this differential rotation is an area of current research in solar astronomy. The rate of surface rotation is observed to be the fastest at the equator (latitude ) and to decrease as latitude increases. The solar rotation period is 24.47 days at the equator and almost 38 days at the poles. The average rotation is 28 days.
Surface rotation as an equation
The differential rotation rate is usually described by the equation:
ω = A + B sin²(φ) + C sin⁴(φ)
where ω is the angular velocity in degrees per day, φ is the solar latitude, A is the angular velocity at the equator, and B, C are constants controlling the decrease in velocity with increasing latitude. The values of A, B, and C differ depending on the techniques used to make the measurement, as well as the time period studied. A current set of accepted average values is:
A= 14.713 ± 0.0491 °/day
B= −2.396 ± 0.188 °/day
C= −1.787 ± 0.253 °/day
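A short Python sketch evaluating this law with the average coefficients quoted above (the sample latitudes are illustrative):

```python
import math

A, B, C = 14.713, -2.396, -1.787   # degrees per day, average values above

def omega_deg_per_day(latitude_deg):
    s2 = math.sin(math.radians(latitude_deg)) ** 2
    return A + B * s2 + C * s2 ** 2

for phi in (0, 30, 60, 90):
    w = omega_deg_per_day(phi)
    print(f"lat {phi:2d}: {w:6.3f} deg/day -> period {360.0 / w:5.2f} days")
```

At the equator this reproduces the 24.47-day sidereal period quoted above.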
Sidereal rotation
At the equator, the solar rotation period is 24.47 days. This is called the sidereal rotation period, and should not be confused with the synodic rotation period of 26.24 days, which is the time for a fixed feature on the Sun to rotate to the same apparent position as viewed from Earth (the earth's orbital rotation is in the same direction as the sun's rotation). The synodic period is longer because the Sun must rotate for a sidereal period plus an extra amount due to the orbital motion of Earth around the Sun. Note that astrophysical literature does not typically use the equatorial rotation period, but instead often uses the definition of a Carrington rotation: a synodic rotation period of 27.2753 days or a sidereal period of 25.38 days. This chosen period roughly corresponds to the prograde rotation at a latitude of 26° north or south, which is consistent with the typical latitude of sunspot
Document 2:::
Earth's rotation or Earth's spin is the rotation of planet Earth around its own axis, as well as changes in the orientation of the rotation axis in space. Earth rotates eastward, in prograde motion. As viewed from the northern polar star Polaris, Earth turns counterclockwise.
The North Pole, also known as the Geographic North Pole or Terrestrial North Pole, is the point in the Northern Hemisphere where Earth's axis of rotation meets its surface. This point is distinct from Earth's North Magnetic Pole. The South Pole is the other point where Earth's axis of rotation intersects its surface, in Antarctica.
Earth rotates once in about 24 hours with respect to the Sun, but once every 23 hours, 56 minutes and 4 seconds with respect to other distant stars (see below). Earth's rotation is slowing slightly with time; thus, a day was shorter in the past. This is due to the tidal effects the Moon has on Earth's rotation. Atomic clocks show that the modern day is longer by about 1.7 milliseconds than a century ago, slowly increasing the rate at which UTC is adjusted by leap seconds. Analysis of historical astronomical records shows a slowing trend; the length of a day increased by about 2.3 milliseconds per century since the 8th century BCE.
Scientists reported that in 2020 Earth had started spinning faster, after consistently spinning slower than 86,400 seconds per day in the decades before. On June 29, 2022, Earth's spin was completed in 1.59 milliseconds under 24 hours, setting a new record. Because of that trend, engineers worldwide are discussing a 'negative leap second' and other possible timekeeping measures.
This increase in speed is thought to be due to various factors, including the complex motion of its molten core, oceans, and atmosphere, the effect of celestial bodies such as the Moon, and possibly climate change, which is causing the ice at Earth's poles to melt. The masses of ice account for the Earth's shape being that of an oblate spheroid, bulging around t
Document 3:::
Solar physics is the branch of astrophysics that specializes in the study of the Sun. It deals with detailed measurements that are possible only for our closest star. It intersects with many disciplines of pure physics, astrophysics, and computer science, including fluid dynamics, plasma physics including magnetohydrodynamics, seismology, particle physics, atomic physics, nuclear physics, stellar evolution, space physics, spectroscopy, radiative transfer, applied optics, signal processing, computer vision, computational physics, stellar physics and solar astronomy.
Because the Sun is uniquely situated for close-range observing (other stars cannot be resolved with anything like the spatial or temporal resolution that the Sun can), there is a split between the related discipline of observational astrophysics (of distant stars) and observational solar physics.
The study of solar physics is also important as it provides a "physical laboratory" for the study of plasma physics.
History
Ancient times
Babylonians were keeping a record of solar eclipses, with the oldest record originating from the ancient city of Ugarit, in modern-day Syria. This record dates to about 1300 BC. Ancient Chinese astronomers were also observing solar phenomena (such as solar eclipses and visible sunspots) with the purpose of keeping track of calendars, which were based on lunar and solar cycles. Unfortunately, records kept before 720 BC are very vague and offer no useful information. However, after 720 BC, 37 solar eclipses were noted over the course of 240 years.
Medieval times
Astronomical knowledge flourished in the Islamic world during medieval times. Many observatories were built in cities from Damascus to Baghdad, where detailed astronomical observations were taken. Particularly, a few solar parameters were measured and detailed observations of the Sun were taken. Solar observations were taken with the purpose of navigation, but mostly for timekeeping. Islam requires its followers to
Document 4:::
The length of the day (LOD), which has increased over the long term of Earth's history due to tidal effects, is also subject to fluctuations on a shorter scale of time. Exact measurements of time by atomic clocks and satellite laser ranging have revealed that the LOD is subject to a number of different changes. These subtle variations have periods that range from a few weeks to a few years. They are attributed to interactions between the dynamic atmosphere and Earth itself. The International Earth Rotation and Reference Systems Service monitors the changes.
In the absence of external torques, the total angular momentum of Earth as a whole system must be constant. Internal torques are due to relative movements and mass redistribution of Earth's core, mantle, crust, oceans, atmosphere, and cryosphere. In order to keep the total angular momentum constant, a change of the angular momentum in one region must necessarily be balanced by angular momentum changes in the other regions.
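A toy sketch of this angular-momentum bookkeeping, assuming an illustrative moment of inertia for the solid Earth and an illustrative seasonal change in atmospheric angular momentum (neither value is taken from this text):

```python
import math

# Toy angular-momentum budget: if the atmosphere gains angular momentum dL,
# the solid Earth loses it, so d(LOD)/LOD = -d(omega)/omega = dL / L_earth.
LOD = 86400.0       # s, nominal length of day
I_EARTH = 8.0e37    # kg m^2, rough moment of inertia of the solid Earth (assumed)
DL_ATM = 1.0e26     # kg m^2 / s, illustrative atmospheric angular-momentum change

omega = 2.0 * math.pi / LOD        # rad/s
L_earth = I_EARTH * omega          # kg m^2 / s
d_lod = LOD * DL_ATM / L_earth     # s
print(f"LOD change ~ {d_lod * 1e3:.2f} ms")
```

With these illustrative numbers the day lengthens by roughly 1.5 ms, which is the right order of magnitude for the short-term fluctuations discussed below.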
Crustal movements (such as continental drift) or polar cap melting are slow secular events. The characteristic coupling time between core and mantle has been estimated to be on the order of ten years, and the so-called 'decade fluctuations' of Earth's rotation rate are thought to result from fluctuations within the core, transferred to the mantle. The length of day (LOD) varies significantly even on time scales from a few years down to weeks, and the observed fluctuations in the LOD - after eliminating the effects of external torques - are a direct consequence of the action of internal torques. These short-term fluctuations are very probably generated by the interaction between the solid Earth and the atmosphere.
The length of day of other planets also varies, particularly of the planet Venus, which has such a dynamic and strong atmosphere that its length of day fluctuates by up to 20 minutes.
Observations
Any change of the axial component of the atmospheric angular momentum (A
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In December, one side of Earth will receive less energy from the Sun than the other side. Which statement best explains this fact?
A. Earth rotates on its axis.
B. Earth is tilted on its axis.
C. Sunlight traveling to Earth reflects off the Moon.
D. Sunlight traveling to Earth is blocked by Moon.
Answer:
|
|
sciq-4669
|
multiple_choice
|
Glaciers modify the landscape by what?
|
[
"truncation",
"sediment",
"erosion",
"silt"
] |
C
|
Relevant Documents:
Document 0:::
Biogeoclimatic ecosystem classification (BEC) is an ecological classification framework used in British Columbia to define, describe, and map ecosystem-based units at various scales, from broad, ecologically-based climatic regions down to local ecosystems or sites. BEC is termed an ecosystem classification as the approach integrates site, soil, and vegetation characteristics to develop and characterize all units. BEC has a strong application focus and guides to classification and management of forests, grasslands and wetlands are available for much of the province to aid in identification of the ecosystem units.
History
The biogeoclimatic ecosystem classification (BEC) system evolved from the work of Vladimir J. Krajina, a Czech-trained professor of ecology and botany at the University of British Columbia, and his students, from 1949 to 1970. Krajina conceptualized the biogeoclimatic approach as an attempt to describe the ecologically diverse and largely undescribed landscape of British Columbia, the mountainous western-most province of Canada, using a unique blend of various contemporary traditions. These included the American tradition of community change and climax, the state factor concept of Jenny, the Braun-Blanquet approach, the Russian biogeocoenose, environmental grids, and the European microscopic pedology approach.
The biogeoclimatic approach was subsequently adopted by the Forest Service of British Columbia in 1976—initially as a five-year program to develop the classification to assist with tree species selection in reforestation. The classification concepts adopted from Krajina were modified by the staff of the B.C. Forest Service in the implementation of a provincial classification. Over the past 40 years, the BEC approach has been expanded and applied to all regions of British Columbia. It has developed into a comprehensive framework for understanding ecosystems in a climatically and topographically complex region.
Classification Framework
Biog
Document 1:::
Plant ecology is a subdiscipline of ecology that studies the distribution and abundance of plants, the effects of environmental factors upon the abundance of plants, and the interactions among plants and between plants and other organisms. Examples of these are the distribution of temperate deciduous forests in North America, the effects of drought or flooding upon plant survival, and competition among desert plants for water, or effects of herds of grazing animals upon the composition of grasslands.
A global overview of the Earth's major vegetation types is provided by O.W. Archibold. He recognizes 11 major vegetation types: tropical forests, tropical savannas, arid regions (deserts), Mediterranean ecosystems, temperate forest ecosystems, temperate grasslands, coniferous forests, tundra (both polar and high mountain), terrestrial wetlands, freshwater ecosystems and coastal/marine systems. This breadth of topics shows the complexity of plant ecology, since it includes plants from floating single-celled algae up to large canopy forming trees.
One feature that defines plants is photosynthesis. Photosynthesis is a series of chemical reactions that create glucose and oxygen, which are vital for plant life. One of the most important aspects of plant ecology is the role plants have played in creating the oxygenated atmosphere of Earth, an event that occurred some 2 billion years ago. It can be dated by the deposition of banded iron formations, distinctive sedimentary rocks with large amounts of iron oxide. At the same time, plants began removing carbon dioxide from the atmosphere, thereby initiating the process of controlling Earth's climate. A long-term trend of the Earth has been toward increasing oxygen and decreasing carbon dioxide, and many other events in the Earth's history, like the first movement of life onto land, are likely tied to this sequence of events.
One of the early classic books on plant ecology was written by J.E. Weaver and F.E. Clements. It
Document 2:::
Lisa Schulte Moore is an American landscape ecologist. Schulte Moore is a professor of natural resource ecology and management at Iowa State University. In 2020 she received a $10 million USD grant to study anaerobic digestion and its application to turning manure into usable energy. In 2021 she was named a MacArthur fellow.
Work
Moore has worked with farmers to develop resilient and sustainable agricultural practices and systems that take into consideration climate change, water quality and loss of biodiversity.
Moore has written on various ecological topics, including the ecological effects of fire on landscapes, soil carbon storage, biodiversity improvement, and the effects of wind and fire on forests, among others.
Awards and honors
John D. and Catherine T. MacArthur Foundation Fellowship
Citation for Leadership and Achievement, Council for Scientific Society Presidents (2022)
Document 3:::
Pavilion Lake is a freshwater lake located in Marble Canyon, British Columbia, Canada, and is home to colonies of freshwater microbialites.
Location and Local Communities
It is located between the towns of Lillooet and Cache Creek (29.44 kilometres WNW, as the crow flies, from Cache Creek) and lies along BC Highway 99, 8.85 highway kilometres (northeast then southeast) from Pavilion, British Columbia. There is a small community of lakeshore residences, some recreational and seasonal only, located on the lake's eastern shore adjacent to the highway. The lake is overlooked by the cliffs of Marble Canyon, which is the southern buttress of the Marble Range, and the forests of the northernmost Clear Range. Also overlooking the lake is Chimney Rock (K'lpalekw in Secwepemc'tsn, "Coyote's Penis"), which like the lake and the canyon have spiritual significance to the adjoining native communities, the Tskwaylaxw people of Pavilion and the Bonaparte band of Secwepemc at Upper Hat Creek. One of the rancheries and a rodeo and pow-wow ground of the Pavilion Band is located at Marble Canyon's south entrance. The lake area and its foreshore were added to Marble Canyon Provincial Park in order to protect its special scientific and heritage values.
Characteristics
The lake demonstrates karst hydrology, with underground inflows from Marble Canyon creeks. The lake has generally low biological productivity, and is classified as ultraoligotrophic. It also features a high degree of water clarity. The lake gets covered with ice annually, and is dimictic, going through two thermal overturns per year. The lake reaches a maximum depth of 65 meters below the surface. It is also a hard water lake, due to its high mineral content.
Microbialites and Scientific Research
Part of a karst formation, the lake is most notable for being home to colonies of microbialites, a type of stromatolite. Colonies of microbialites grow from depths of 5 to 55 meters. Low sedimentation rates may allow for continued
Document 4:::
Bioclimatology is the interdisciplinary field of science that studies the interactions between the biosphere and the Earth's atmosphere on time scales of the order of seasons or longer (in contrast to biometeorology).
Examples of relevant processes
Climate processes largely control the distribution, size, shape and properties of living organisms on Earth. For instance, the general circulation of the atmosphere on a planetary scale broadly determines the location of large deserts or the regions subject to frequent precipitation, which, in turn, greatly determine which organisms can naturally survive in these environments. Furthermore, changes in climates, whether due to natural processes or to human interferences, may progressively modify these habitats and cause overpopulation or extinction of indigenous species.
The biosphere, for its part, and in particular continental vegetation, which constitutes over 99% of the total biomass, has played a critical role in establishing and maintaining the chemical composition of the Earth's atmosphere, especially during the early evolution of the planet (See History of Earth for more details on this topic). Currently, the terrestrial vegetation exchanges some 60 billion tons of carbon with the atmosphere on an annual basis (through processes of carbon fixation and carbon respiration), thereby playing a critical role in the carbon cycle. On a global and annual basis, small imbalances between these two major fluxes, as do occur through changes in land cover and land use, contribute to the current increase in atmospheric carbon dioxide.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Glaciers modify the landscape by what?
A. truncation
B. sediment
C. erosion
D. silt
Answer:
|
|
sciq-11309
|
multiple_choice
|
What compound has positive and negative ions?
|
[
"covalent",
"hydrocarbon",
"ionic",
"protein"
] |
C
|
Relevant Documents:
Document 0:::
In chemistry, an ionophore () is a chemical species that reversibly binds ions. Many ionophores are lipid-soluble entities that transport ions across the cell membrane. Ionophores catalyze ion transport across hydrophobic membranes, such as liquid polymeric membranes (carrier-based ion selective electrodes) or lipid bilayers found in the living cells or synthetic vesicles (liposomes). Structurally, an ionophore contains a hydrophilic center and a hydrophobic portion that interacts with the membrane.
Some ionophores are synthesized by microorganisms to import ions into their cells. Synthetic ion carriers have also been prepared. Ionophores selective for cations and anions have found many applications in analysis. These compounds have also shown to have various biological effects and a synergistic effect when combined with the ion they bind.
Classification
Biological activities of metal ion-binding compounds can be changed in response to the increment of the metal concentration, and based on the latter compounds can be classified as "metal ionophores", "metal chelators" or "metal shuttles". If the biological effect is augmented by increasing the metal concentration, it is classified as a "metal ionophore". If the biological effect is decreased or reversed by increasing the metal concentration, it is classified as a "metal chelator". If the biological effect is not affected by increasing the metal concentration, and the compound-metal complex enters the cell, it is classified as a "metal shuttle". The term ionophore (from Greek ion carrier or ion bearer) was proposed by Berton Pressman in 1967 when he and his colleagues were investigating the antibiotic mechanisms of valinomycin and nigericin.
Many ionophores are produced naturally by a variety of microbes, fungi and plants, and act as a defense against competing or pathogenic species. Multiple synthetic membrane-spanning ionophores have also been synthesized.
The two broad classifications of ionophores synthesiz
Document 1:::
The use of ionic liquids in carbon capture is a potential application of ionic liquids as absorbents for use in carbon capture and sequestration. Ionic liquids, which are salts that exist as liquids near room temperature, are polar, nonvolatile materials that have been considered for many applications. The urgency of climate change has spurred research into their use in energy-related applications such as carbon capture and storage.
Carbon capture using absorption
Ionic liquids as solvents
Amines are the most prevalent absorbent in postcombustion carbon capture technology today. In particular, monoethanolamine (MEA) has been used at industrial scales in postcombustion carbon capture, as well as in other CO2 separations, such as "sweetening" of natural gas. However, amines are corrosive, degrade over time, and require large industrial facilities. Ionic liquids, on the other hand, have low vapor pressures. This property results from their strong Coulombic attractive force. Vapor pressure remains low through the substance's thermal decomposition point (typically >300 °C). In principle, this low vapor pressure simplifies their use and makes them "green" alternatives. Additionally, it reduces the risk of contamination of the CO2 gas stream and of leakage into the environment.
The solubility of CO2 in ionic liquids is governed primarily by the anion, less so by the cation. The hexafluorophosphate (PF6–) and tetrafluoroborate (BF4–) anions have been shown to be especially amenable to CO2 capture.
Ionic liquids have been considered as solvents in a variety of liquid-liquid extraction processes, but never commercialized. Besides that, ionic liquids have replaced conventional volatile solvents in industrial processes such as gas absorption and extractive distillation. Additionally, ionic liquids are used as co-solutes for the generation of aqueous biphasic systems, or purification of biomolecules.
Process
A typical CO2 absorption process consists of a feed gas, an absorptio
Document 2:::
The ionic strength of a solution is a measure of the concentration of ions in that solution. Ionic compounds, when dissolved in water, dissociate into ions. The total electrolyte concentration in solution will affect important properties such as the dissociation constant or the solubility of different salts. One of the main characteristics of a solution with dissolved ions is the ionic strength. Ionic strength can be molar (mol/L solution) or molal (mol/kg solvent) and to avoid confusion the units should be stated explicitly. The concept of ionic strength was first introduced by Lewis and Randall in 1921 while describing the activity coefficients of strong electrolytes.
Quantifying ionic strength
The molar ionic strength, I, of a solution is a function of the concentration of all ions present in that solution:

$I = \tfrac{1}{2}\sum_{i} c_i z_i^2$

where the factor of one half is included because both cations and anions are counted, $c_i$ is the molar concentration of ion i (M, mol/L), $z_i$ is the charge number of that ion, and the sum is taken over all ions in the solution. For a 1:1 electrolyte such as sodium chloride, where each ion is singly-charged, the ionic strength is equal to the concentration. For the electrolyte MgSO4, however, each ion is doubly-charged, leading to an ionic strength that is four times higher than an equivalent concentration of sodium chloride:

$I = \tfrac{1}{2}\left(c \cdot 2^2 + c \cdot 2^2\right) = 4c$
Generally multivalent ions contribute strongly to the ionic strength.
Calculation example
As a more complex example, the ionic strength of a mixed solution 0.050 M in Na2SO4 and 0.020 M in KCl is:

$I = \tfrac{1}{2}\left(0.100 \cdot 1^2 + 0.050 \cdot 2^2 + 0.020 \cdot 1^2 + 0.020 \cdot 1^2\right) = 0.170\ \text{M}$
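A small Python helper makes this calculation mechanical; the function name is illustrative, and the ion list simply encodes the dissociation products used above:

```python
# Ionic strength: I = (1/2) * sum_i c_i * z_i^2 over all ions in solution.
def ionic_strength(ions):
    """ions: iterable of (molar concentration, charge number) pairs."""
    return 0.5 * sum(c * z ** 2 for c, z in ions)

# 0.050 M Na2SO4 -> 0.100 M Na+ (z = +1) and 0.050 M SO4(2-) (z = -2)
# 0.020 M KCl    -> 0.020 M K+  (z = +1) and 0.020 M Cl-     (z = -1)
mixture = [(0.100, +1), (0.050, -2), (0.020, +1), (0.020, -1)]
print(f"I = {ionic_strength(mixture):.3f} M")  # I = 0.170 M
```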
Non-ideal solutions
Because in non-ideal solutions volumes are no longer strictly additive, it is often preferable to work with molality b (mol/kg of H2O) rather than molarity c (mol/L). In that case, the molal ionic strength is defined as:

$I = \tfrac{1}{2}\sum_{i} b_i z_i^2$

in which
i = ion identification number
z = charge of ion
b = molality (mol solute per kg solvent)
Importance
The ionic strength plays a central role in the Debye–Hückel theory that describes the strong deviations from ideality.
Document 3:::
A monatomic ion (also called simple ion) is an ion consisting of exactly one atom. If, instead of being monatomic, an ion contains more than one atom, even if these are of the same element, it is called a polyatomic ion. For example, calcium carbonate consists of the monatomic cation Ca2+ and the polyatomic anion CO3^2−; both pentazenium (N5^+) and azide (N3^−) are polyatomic as well.
A type I binary ionic compound contains a metal that forms only one type of ion. A type II ionic compound contains a metal that forms more than one type of ion, i.e., the same element in different oxidation states.
{|class="wikitable"
|-
! colspan="2" | Common type I monatomic cations
|-
| Hydrogen
| H+
|-
| Lithium
| Li+
|-
| Sodium
| Na+
|-
| Potassium
| K+
|-
| Rubidium
| Rb+
|-
| Caesium
| Cs+
|-
| Magnesium
| Mg2+
|-
| Calcium
| Ca2+
|-
| Strontium
| Sr2+
|-
| Barium
| Ba2+
|-
| Aluminium
| Al3+
|-
| Silver
| Ag+
|-
| Zinc
| Zn2+
|-
|}
{|class="wikitable"
|-
! colspan="3" | Common type II monatomic cations
|-
|-
| iron(II)
| Fe2+
| ferrous
|-
| iron(III)
| Fe3+
| ferric
|-
| copper(I)
| Cu+
| cuprous
|-
| copper(II)
| Cu2+
| cupric
|-
| cobalt(II)
| Co2+
| cobaltous
|-
| cobalt(III)
| Co3+
| cobaltic
|-
| tin(II)
| Sn2+
| stannous
|-
| tin(IV)
| Sn4+
| stannic
|}
Document 4:::
An ion () is an atom or molecule with a net electrical charge. The charge of an electron is considered to be negative by convention and this charge is equal and opposite to the charge of a proton, which is considered to be positive by convention. The net charge of an ion is not zero because its total number of electrons is unequal to its total number of protons.
A cation is a positively charged ion with fewer electrons than protons while an anion is a negatively charged ion with more electrons than protons. Opposite electric charges are pulled towards one another by electrostatic force, so cations and anions attract each other and readily form ionic compounds.
Ions consisting of only a single atom are termed atomic or monatomic ions, while two or more atoms form molecular ions or polyatomic ions. In the case of physical ionization in a fluid (gas or liquid), "ion pairs" are created by spontaneous molecule collisions, where each generated pair consists of a free electron and a positive ion. Ions are also created by chemical interactions, such as the dissolution of a salt in liquids, or by other means, such as passing a direct current through a conducting solution, dissolving an anode via ionization.
History of discovery
The word ion was coined from Greek neuter present participle of ienai (ἰέναι), meaning "to go". A cation is something that moves down (κάτω, pronounced kato, meaning "down") and an anion is something that moves up (ἄνω, pronounced ano, meaning "up"). They are so called because ions move toward the electrode of opposite charge. This term was introduced (after a suggestion by the English polymath William Whewell) by English physicist and chemist Michael Faraday in 1834 for the then-unknown species that goes from one electrode to the other through an aqueous medium. Faraday did not know the nature of these species, but he knew that since metals dissolved into and entered a solution at one electrode and new metal came forth from a solution at the other electrode; that some kind of
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What compound has positive and negative ions?
A. covalent
B. hydrocarbon
C. ionic
D. protein
Answer:
|
|
sciq-11504
|
multiple_choice
|
In headwater streams, what plant process is mostly attributed to algae that are growing on rocks?
|
[
"mitosis",
"symbiosis",
"reproduction",
"photosynthesis"
] |
D
|
Relevant Documents:
Document 0:::
The River Continuum Concept (RCC) is a model for classifying and describing flowing water, in addition to the classification of individual sections of waters after the occurrence of indicator organisms. The theory is based on the concept of dynamic equilibrium in which streamforms balance between physical parameters, such as width, depth, velocity, and sediment load, also taking into account biological factors. It offers an introduction to map out biological communities and also an explanation for their sequence in individual sections of water. This allows the structure of the river to be more predictable as to the biological properties of the water. The concept was first developed in 1980 by Robin L. Vannote, with fellow researchers at Stroud Water Research Center.
Background of RCC
The River Continuum Concept is based on the idea that a watercourse is an open ecosystem that is in constant interaction with the bank, and moving from source to mouth, constantly changing. The basis for this change in the overall system is the gradual change of physical environmental conditions such as the width, depth, water volume, flow characteristics, temperature, and the complexity of the water. According to Vannote's hypothesis, which is based on the physical geomorphological theory, structural and functional characteristics of stream communities are selected to conform to the most probable position or mean state of the physical system. As a river changes from headwaters to the lower reaches, there will be a change in the relationship between the production and consumption (respiration) of the material (P/R ratio).
The four scientists who collaborated with Dr. Vannote were Drs. G.Wayne Minshall (Idaho State University), Kenneth W. Cummins (Michigan State University), James R. Sedell (Oregon State University), and Colbert E. Cushing (Battelle-Pacific Northwest Laboratory). The group studied stream and river ecosystems in their respective geographical areas to support or disp
Document 1:::
"Resurrection ecology" is an evolutionary biology technique whereby researchers hatch dormant eggs from lake sediments to study animals as they existed decades ago. It is a new approach that might allow scientists to observe evolution as it occurred, by comparing the animal forms hatched from older eggs with their extant descendants. This technique is particularly important because the live organisms hatched from egg banks can be used to learn about the evolution of behavioural, plastic or competitive traits that are not apparent from more traditional paleontological methods.
One such researcher in the field is W. Charles Kerfoot of Michigan Technological University whose results were published in the journal Limnology and Oceanography. He reported on success in a search for "resting eggs" of zooplankton that are dormant in Portage Lake on Michigan's Upper Peninsula. The lake has undergone a considerable amount of change over the last 100 years including flooding by copper mine debris, dredging, and eutrophication. Others have used this technique to explore the evolutionary effects of eutrophication, predation, and metal contamination. Resurrection ecology provided the best empirical example of the "Red Queen Hypothesis" in nature. Any organism that produces a resting stage can be used for resurrection ecology. However, the most frequently used organism is the water flea, Daphnia. This genus has well-established protocols for lab experimentation and usually asexually reproduces allowing for experiments on many individuals with the same genotype.
Although the more esoteric demonstration of natural selection is alone a valuable aspect of the study described, there is a clear ecological implication in the discovery that very old zooplankton eggs have survived in the lake: the potential still exists, if and when this environment is restored to something of a more pristine nature, for at least some of the original (pre-disturbance) inhabitants to re-establish populatio
Document 2:::
Energy, nutrients, and contaminants derived from aquatic ecosystems and transferred to terrestrial ecosystems are termed aquatic-terrestrial subsidies or, more simply, aquatic subsidies. Common examples of aquatic subsidies include organisms that move across habitat boundaries and deposit their nutrients as they decompose in terrestrial habitats or are consumed by terrestrial predators, such as spiders, lizards, birds, and bats. Aquatic insects that develop within streams and lakes before emerging as winged adults and moving to terrestrial habitats contribute to aquatic subsidies. Fish removed from aquatic ecosystems by terrestrial predators are another important example. Conversely, the flow of energy and nutrients from terrestrial ecosystems to aquatic ecosystems are considered terrestrial subsidies; both aquatic subsidies and terrestrial subsidies are types of cross-boundary subsidies. Energy and nutrients are derived from outside the ecosystem where they are ultimately consumed.
Allochthonous describes resources and energy derived from another ecosystem; aquatic-terrestrial subsidies are examples of allochthonous resources. Autochthonous resources are produced by plants or algae within the local ecosystem. Allochthonous resources, including aquatic-terrestrial subsidies, can subsidize predator populations and increase predator impacts on prey populations, sometimes initiating trophic cascades. Nutritional quality of autochthonous and allochthonous resources influences their use by animals and other consumers, even when they are readily available.
Resource subsidies
Resource subsidies, in forms of nutrients, matter, or organisms, describe movements of essential resources across habitat boundaries to animals or other consumers. These inputs of resources can influence individual growth, species abundance and diversity, community structure, secondary productivity and food web dynamics. Allochthonous resources are defined as originating outside of the ecosystem wh
Document 3:::
Paleolimnology (from Greek: παλαιός, palaios, "ancient", λίμνη, limne, "lake", and λόγος, logos, "study") is a scientific sub-discipline closely related to both limnology and paleoecology. Paleolimnological studies focus on reconstructing the past environments of inland waters (e.g., lakes and streams) using the geologic record, especially with regard to events such as climatic change, eutrophication, acidification, and internal ontogenic processes.
Paleolimnological studies are mostly conducted using analyses of the physical, chemical, and mineralogical properties of sediments, or of biological records such as fossil pollen, diatoms, or chironomids.
History
Lake ontogeny
Most early paleolimnological studies focused on the biological productivity of lakes, and the role of internal lake processes in lake development. Although Einar Naumann had speculated that the productivity of lakes should gradually decrease due to leaching of catchment soils, August Thienemann suggested that the reverse process likely occurred. Early midge records seemed to support Thienemann's view.
Hutchinson and Wollack suggested that, following an initial oligotrophic stage, lakes would achieve and maintain a trophic equilibrium. They also stressed parallels between the early development of lake communities and the sigmoid growth phase of animal communities – implying that the apparent early developmental processes in lakes were dominated by colonization effects, and lags due to the limited reproductive potential of the colonizing organisms.
In a classic paper, Raymond Lindeman outlined a hypothetical developmental sequence, with lakes progressively developing through oligotrophic, mesotrophic, and eutrophic stages, before senescing to a dystrophic stage and then filling completely with sediment. A climax forest community would eventually be established on the peaty fill of the former lake basin. These ideas were further elaborated by Ed Deevey, who suggested that lake development was dom
Document 4:::
Lake 226 is one lake in Canada's Experimental Lakes Area (ELA) in Ontario. The ELA is a freshwater and fisheries research facility that operated these experiments alongside Fisheries and Oceans Canada and Environment Canada. In 1968 this area in northwest Ontario was set aside for limnological research, aiming to study the watersheds of its 58 small lakes. The ELA projects began as a response to the claim that carbon was the limiting agent causing eutrophication of lakes rather than phosphorus, and that monitoring phosphorus in the water would be a waste of money. This claim was made by soap and detergent companies, as these products do not biodegrade and can cause buildup of phosphates in water supplies that lead to eutrophication. The theory that carbon was the limiting agent was quickly debunked by the ELA Lake 227 experiment that began in 1969, which found that carbon could be drawn from the atmosphere to remain proportional to the input of phosphorus in the water. Experimental Lake 226 was then created to test phosphorus' impact on eutrophication by itself.
Lake ecosystem
Geography
The ELA lakes were far from human activities, therefore allowing the study of environmental conditions without human interaction. Lake 226 was specifically studied over a four-year period, from 1973 to 1977, to test eutrophication. Lake 226 itself is a 16.2 ha double basin lake located on highly metamorphosed granite known as Precambrian granite. The depth of the lake was measured in 1994 to be 14.7 m for the northeast basin and 11.6 m for the southeast basin. Lake 226 had a total lake volume of 9.6 × 10⁵ m³, prior to the lake being additionally studied for drawdown alongside other ELA lakes. Due to the relatively small fetch of Lake 226, wind action is minimized, preventing resuspension of epilimnetic sediments.
Eutrophication experiment
To test the effects of fertilization on water quality and algae blooms, Lake 226 was split in half with a curtain. This curtain divi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In headwater streams, what plant process is mostly attributed to algae that are growing on rocks?
A. mitosis
B. symbiosis
C. reproduction
D. photosynthesis
Answer:
|
|
sciq-9648
|
multiple_choice
|
What do we call the most widely accepted cosmological explanation of how the universe formed?
|
[
"big light theory",
"big crunch theory",
"singularity theory",
"big bang theory"
] |
D
|
Relevant Documents:
Document 0:::
The ultimate fate of the universe is a topic in physical cosmology, whose theoretical restrictions allow possible scenarios for the evolution and ultimate fate of the universe to be described and evaluated. Based on available observational evidence, deciding the fate and evolution of the universe has become a valid cosmological question, beyond the mostly untestable constraints of mythological or theological beliefs. Several possible futures have been predicted by different scientific hypotheses, including that the universe might exist for a finite or an infinite duration; other hypotheses address the manner and circumstances of its beginning.
Observations made by Edwin Hubble during the 1930s–1950s found that galaxies appeared to be moving away from each other, leading to the currently accepted Big Bang theory. This suggests that the universe began very dense about 13.787 billion years ago, and it has expanded and (on average) become less dense ever since. Confirmation of the Big Bang mostly depends on knowing the rate of expansion, average density of matter, and the physical properties of the mass–energy in the universe.
There is a strong consensus among cosmologists that the shape of the universe is "flat" (parallel lines stay parallel) and that it will continue to expand forever.
Factors that need to be considered in determining the universe's origin and ultimate fate include the average motions of galaxies, the shape and structure of the universe, and the amount of dark matter and dark energy that the universe contains.
Emerging scientific basis
Theory
The theoretical scientific exploration of the ultimate fate of the universe became possible with Albert Einstein's 1915 theory of general relativity. General relativity can be employed to describe the universe on the largest possible scale. There are several possible solutions to the equations of general relativity, and each solution implies a possible ultimate fate of the universe.
Alexander Fr
Document 1:::
The Big Bang event is a physical theory that describes how the universe expanded from an initial state of high density and temperature. Various cosmological models of the Big Bang explain the evolution of the observable universe from the earliest known periods through its subsequent large-scale form. These models offer a comprehensive explanation for a broad range of observed phenomena, including the abundance of light elements, the cosmic microwave background (CMB) radiation, and large-scale structure. The overall uniformity of the Universe, known as the flatness problem, is explained through cosmic inflation: a sudden and very rapid expansion of space during the earliest moments. However, physics currently lacks a widely accepted theory of quantum gravity that can successfully model the earliest conditions of the Big Bang.
Crucially, these models are compatible with the Hubble–Lemaître law—the observation that the farther away a galaxy is, the faster it is moving away from Earth. Extrapolating this cosmic expansion backwards in time using the known laws of physics, the models describe an increasingly concentrated cosmos preceded by a singularity in which space and time lose meaning (typically named "the Big Bang singularity"). In 1964 the CMB was discovered, which convinced many cosmologists that the competing steady-state model of cosmic evolution was falsified, since the Big Bang models predict a uniform background radiation caused by high temperatures and densities in the distant past. A wide range of empirical evidence strongly favors the Big Bang event, which is now essentially universally accepted. Detailed measurements of the expansion rate of the universe place the Big Bang singularity at an estimated billion years ago, which is considered the age of the universe.
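As a hedged order-of-magnitude illustration of this backward extrapolation, the Hubble time 1/H0 can be evaluated for an assumed round value of the Hubble constant (H0 = 70 km/s/Mpc is an assumption for the sketch, not a value quoted here); in the full Lambda-CDM fit the inferred age differs somewhat from this naive estimate:

```python
# Naive age estimate from the Hubble-Lemaitre law: t ~ 1/H0.
H0 = 70.0                  # km/s/Mpc (assumed round value)
KM_PER_MPC = 3.0857e19     # kilometres in one megaparsec
S_PER_GYR = 3.156e16       # seconds in one billion years

H0_si = H0 / KM_PER_MPC    # convert to s^-1
hubble_time_gyr = 1.0 / H0_si / S_PER_GYR
print(f"Hubble time ~ {hubble_time_gyr:.1f} Gyr")  # ~14 Gyr, close to the quoted age
```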
There remain aspects of the observed universe that are not yet adequately explained by the Big Bang models. After its initial expansion, the universe cooled sufficiently to allow the formation
Document 2:::
Physical cosmology is a branch of cosmology concerned with the study of cosmological models. A cosmological model, or simply cosmology, provides a description of the largest-scale structures and dynamics of the universe and allows study of fundamental questions about its origin, structure, evolution, and ultimate fate. Cosmology as a science originated with the Copernican principle, which implies that celestial bodies obey identical physical laws to those on Earth, and Newtonian mechanics, which first allowed those physical laws to be understood.
Physical cosmology, as it is now understood, began with the development in 1915 of Albert Einstein's general theory of relativity, followed by major observational discoveries in the 1920s: first, Edwin Hubble discovered that the universe contains a huge number of external galaxies beyond the Milky Way; then, work by Vesto Slipher and others showed that the universe is expanding. These advances made it possible to speculate about the origin of the universe, and allowed the establishment of the Big Bang theory, by Georges Lemaître, as the leading cosmological model. A few researchers still advocate a handful of alternative cosmologies; however, most cosmologists agree that the Big Bang theory best explains the observations.
Dramatic advances in observational cosmology since the 1990s, including the cosmic microwave background, distant supernovae and galaxy redshift surveys, have led to the development of a standard model of cosmology. This model requires the universe to contain large amounts of dark matter and dark energy whose nature is currently not well understood, but the model gives detailed predictions that are in excellent agreement with many diverse observations.
Cosmology draws heavily on the work of many disparate areas of research in theoretical and applied physics. Areas relevant to cosmology include particle physics experiments and theory, theoretical and observational astrophysics, general relativity, quant
Document 3:::
Cosmology () is a branch of physics and metaphysics dealing with the nature of the universe. The term cosmology was first used in English in 1656 in Thomas Blount's Glossographia, and in 1731 taken up in Latin by German philosopher Christian Wolff, in Cosmologia Generalis. Religious or mythological cosmology is a body of beliefs based on mythological, religious, and esoteric literature and traditions of creation myths and eschatology. In the science of astronomy, cosmology is concerned with the study of the chronology of the universe.
Physical cosmology is the study of the observable universe's origin, its large-scale structures and dynamics, and the ultimate fate of the universe, including the laws of science that govern these areas. It is investigated by scientists, including astronomers and physicists, as well as philosophers, such as metaphysicians, philosophers of physics, and philosophers of space and time. Because of this shared scope with philosophy, theories in physical cosmology may include both scientific and non-scientific propositions and may depend upon assumptions that cannot be tested. Physical cosmology is a sub-branch of astronomy that is concerned with the universe as a whole. Modern physical cosmology is dominated by the Big Bang Theory which attempts to bring together observational astronomy and particle physics; more specifically, a standard parameterization of the Big Bang with dark matter and dark energy, known as the Lambda-CDM model.
Theoretical astrophysicist David N. Spergel has described cosmology as a "historical science" because "when we look out in space, we look back in time" due to the finite nature of the speed of light.
Disciplines
Physics and Astrophysics have played central roles in shaping our understanding of the universe through scientific observation and experiment. Physical cosmology was shaped through both mathematics and observation in an analysis of the whole universe. The universe is generally understood to have beg
Document 4:::
Cosmogony is any model concerning the origin of the cosmos or the universe.
Overview
Scientific theories
In astronomy, cosmogony refers to the study of the origin of particular astrophysical objects or systems, and is most commonly used in reference to the origin of the universe, the Solar System, or the Earth–Moon system. The prevalent cosmological model of the early development of the universe is the Big Bang theory.
Sean M. Carroll, who specializes in theoretical cosmology and field theory, explains two competing explanations for the origins of the singularity, which is the center of a space in which a characteristic is limitless (one example is the singularity of a black hole, where gravity is the characteristic that becomes infinite).
It is generally accepted that the universe began at a point of singularity. When the singularity of the universe started to expand, the Big Bang occurred, marking the beginning of the universe. The other explanation, held by proponents such as Stephen Hawking, asserts that time did not exist before it emerged along with the universe. This assertion implies that the universe does not have a beginning, as time did not exist "prior" to the universe. Hence, it is unclear whether properties such as space or time emerged with the singularity and the known universe.
Despite the research, there is currently no theoretical model that explains the earliest moments of the universe's existence (during the Planck epoch) due to a lack of a testable theory of quantum gravity. Nevertheless, researchers of string theory, its extensions (such as M-theory), and of loop quantum cosmology, like Barton Zwiebach and Washington Taylor, have proposed solutions to assist in the explanation of the universe's earliest moments. Cosmogonists have only tentative theories for the early stages of the universe and its beginning. The proposed theoretical scenarios include string theory, M-theory, the Hartle–Hawking initial state, emergent Universe, string landsca
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do we call the most widely accepted cosmological explanation of how the universe formed?
A. big light theory
B. big crunch theory
C. singularity theory
D. big bang theory
Answer:
|
|
sciq-834
|
multiple_choice
|
Asexual reproduction in plants is typically an extension of the capacity for what?
|
[
"substrate growth",
"extracellular growth",
"indeterminate growth",
"blooming growth"
] |
C
|
Relevant Documents:
Document 0:::
Plant reproduction is the production of new offspring in plants, which can be accomplished by sexual or asexual reproduction. Sexual reproduction produces offspring by the fusion of gametes, resulting in offspring genetically different from either parent. Asexual reproduction produces new individuals without the fusion of gametes, resulting in clonal plants that are genetically identical to the parent plant and each other, unless mutations occur.
Asexual reproduction
Asexual reproduction does not involve the production and fusion of male and female gametes. Asexual reproduction may occur through budding, fragmentation, spore formation, regeneration and vegetative propagation.
Asexual reproduction is a type of reproduction where the offspring comes from one parent only, thus inheriting the characteristics of the parent. Asexual reproduction in plants occurs in two fundamental forms, vegetative reproduction and agamospermy. Vegetative reproduction involves a vegetative piece of the original plant producing new individuals by budding, tillering, etc. and is distinguished from apomixis, which is a replacement of sexual reproduction, and in some cases involves seeds. Apomixis occurs in many plant species such as dandelions (Taraxacum species) and also in some non-plant organisms. For apomixis and similar processes in non-plant organisms, see parthenogenesis.
Natural vegetative reproduction is a process mostly found in perennial plants, and typically involves structural modifications of the stem or roots and in a few species leaves. Most plant species that employ vegetative reproduction do so as a means to perennialize the plants, allowing them to survive from one season to the next and often facilitating their expansion in size. A plant that persists in a location through vegetative reproduction of individuals gives rise to a clonal colony. A single ramet, or apparent individual, of a clonal colony is genetically identical to all others in the same colony. The dist
Document 1:::
In biology and botany, indeterminate growth is growth that is not terminated in contrast to determinate growth that stops once a genetically pre-determined structure has completely formed. Thus, a plant that grows and produces flowers and fruit until killed by frost or some other external factor is called indeterminate. For example, the term is applied to tomato varieties that grow in a rather gangly fashion, producing fruit throughout the growing season. In contrast, a determinate tomato plant grows in a more bushy shape and is most productive for a single, larger harvest, then either tapers off with minimal new growth or fruit or dies.
Inflorescences
In reference to an inflorescence (a shoot specialised for bearing flowers, and bearing no leaves other than bracts), an indeterminate type (such as a raceme) is one in which the first flowers to develop and open are from the buds at the base, followed progressively by buds nearer to the growing tip. The growth of the shoot is not impeded by the opening of the early flowers or development of fruits and its appearance is of growing, producing, and maturing flowers and fruit indefinitely. In practice the continued growth of the terminal end necessarily peters out sooner or later, though without producing any definite terminal flower, and in some species it may stop growing before any of the buds have opened.
Not all plants produce indeterminate inflorescences however; some produce a definite terminal flower that terminates the development of new buds towards the tip of that inflorescence. In most species that produce a determinate inflorescence in this way, all of the flower buds are formed before the first ones begin to open, and all open more or less at the same time. In some species with determinate inflorescences however, the terminal flower blooms first, which stops the elongation of the main axis, but side buds develop lower down. One type of example is Dianthus; another type is exemplified by Allium; and yet ot
Document 2:::
Plant reproductive morphology is the study of the physical form and structure (the morphology) of those parts of plants directly or indirectly concerned with sexual reproduction.
Among all living organisms, flowers, which are the reproductive structures of angiosperms, are the most varied physically and show a correspondingly great diversity in methods of reproduction. Plants that are not flowering plants (green algae, mosses, liverworts, hornworts, ferns and gymnosperms such as conifers) also have complex interplays between morphological adaptation and environmental factors in their sexual reproduction. The breeding system, or how the sperm from one plant fertilizes the ovum of another, depends on the reproductive morphology, and is the single most important determinant of the genetic structure of nonclonal plant populations. Christian Konrad Sprengel (1793) studied the reproduction of flowering plants and for the first time it was understood that the pollination process involved both biotic and abiotic interactions. Charles Darwin's theories of natural selection utilized this work to build his theory of evolution, which includes analysis of the coevolution of flowers and their insect pollinators.
Use of sexual terminology
Plants have complex lifecycles involving alternation of generations. One generation, the sporophyte, gives rise to the next generation, the gametophyte asexually via spores. Spores may be identical isospores or come in different sizes (microspores and megaspores), but strictly speaking, spores and sporophytes are neither male nor female because they do not produce gametes. The alternate generation, the gametophyte, produces gametes, eggs and/or sperm. A gametophyte can be monoicous (bisexual), producing both eggs and sperm, or dioicous (unisexual), either female (producing eggs) or male (producing sperm).
In the bryophytes (liverworts, mosses, and hornworts), the sexual gametophyte is the dominant generation. In ferns and seed plants (inc
Document 3:::
Vegetative reproduction (also known as vegetative propagation, vegetative multiplication or cloning) is any form of asexual reproduction occurring in plants in which a new plant grows from a fragment or cutting of the parent plant or specialized reproductive structures, which are sometimes called vegetative propagules.
Many plants naturally reproduce this way, but it can also be induced artificially. Horticulturists have developed asexual propagation techniques that use vegetative propagules to replicate plants. Success rates and difficulty of propagation vary greatly. Monocotyledons typically lack a vascular cambium, making them more challenging to propagate.
Background
Plant propagation is the process of plant reproduction of a species or cultivar, and it can be sexual or asexual. It can happen through the use of vegetative parts of the plants, such as leaves, stems, and roots to produce new plants or through growth from specialized vegetative plant parts.
While many plants reproduce by vegetative reproduction, they rarely exclusively use that method to reproduce. Vegetative reproduction is not evolutionarily advantageous; it does not allow for genetic diversity and could lead plants to accumulate deleterious mutations. Vegetative reproduction is favored when it allows plants to produce more offspring per unit of resource than reproduction through seed production. In general, juveniles of a plant are easier to propagate vegetatively.
Although most plants normally reproduce sexually, many can reproduce vegetatively, or can be induced to do so via hormonal treatments. This is because meristematic cells capable of cellular differentiation are present in many plant tissues.
Vegetative propagation is usually considered a cloning method. However, root cuttings of thornless blackberries (Rubus fruticosus) will revert to thorny type because the adventitious shoot develops from a cell that is genetically thorny. Thornless blackberry is a chimera, with the epidermal
Document 4:::
A lateral shoot, commonly known as a branch, is a part of a plant's shoot system that develops from axillary buds on the stem's surface, extending laterally from the plant's stem.
Importance to photosynthesis
As a plant grows it requires more energy, and it must also out-compete nearby plants for that energy. One way a plant can compete for this energy is to increase its height; another is to increase its overall surface area. That is to say, the more lateral shoots a plant develops, the more foliage it can support, which increases how much photosynthesis the plant can perform by providing more area for the uptake of carbon dioxide and sunlight.
Genes, transcription factors, and growth
Through testing with Arabidopsis thaliana (a plant considered a model organism for plant genetic studies), genes including MAX1 and MAX2 have been found to affect the growth of lateral shoots. Knockouts of these genes cause abnormal proliferation in the affected plants, implying that they repress lateral shoot growth in wild-type plants. In another set of experiments with Arabidopsis thaliana testing genes involved in the plant hormone florigen, knockouts of two genes, FT and TSF (abbreviations for Flowering Locus T and Twin Sister of FT), appear to affect lateral shoot development negatively. These mutants show slower growth and improper formation of lateral shoots, which could also mean that florigen is important to lateral shoot function. Along with general growth, there are also transcription factors that directly affect the production of additional lateral shoots, such as the TCP family (Teosinte branched 1/cycloidea/proliferating cell factor), plant-specific proteins that suppress lateral shoot branching. Additionally, the TCP family has been found to be partially responsible for inhibiting the cell's growth hormone-releasing factor (GHRF), which means it also inhibits cell proliferation.
See also
Apical dominance
Sho
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Asexual reproduction in plants is typically an extension of the capacity for what?
A. substrate growth
B. extracellular growth
C. indeterminate growth
D. blooming growth
Answer:
|
|
sciq-5355
|
multiple_choice
|
What kind of interference is observed when the paths differ by a whole wavelength, and the waves arrive in phase?
|
[
"non-interference",
"spontaneous interference",
"constructive interference",
"necessary interference"
] |
C
|
Relevant Documents:
Document 0:::
In radio communication, multipath is the propagation phenomenon that results in radio signals reaching the receiving antenna by two or more paths. Causes of multipath include atmospheric ducting, ionospheric reflection and refraction, and reflection from water bodies and terrestrial objects such as mountains and buildings. When the same signal is received over more than one path, it can create interference and phase shifting of the signal. Destructive interference causes fading; this may cause a radio signal to become too weak in certain areas to be received adequately. For this reason, this effect is also known as multipath interference or multipath distortion.
Where the magnitudes of the signals arriving by the various paths have a distribution known as the Rayleigh distribution, this is known as Rayleigh fading. Where one component (often, but not necessarily, a line-of-sight component) dominates, a Rician distribution provides a more accurate model, and this is known as Rician fading. Where two components dominate, the behavior is best modeled with the two-wave with diffuse power (TWDP) distribution. All of these descriptions are commonly used and accepted and lead to results. However, they are generic, and they abstract away, hide, or approximate the underlying physics.
Interference
Multipath interference is a phenomenon in the physics of waves whereby a wave from a source travels to a detector via two or more paths and the two (or more) components of the wave interfere constructively or destructively. Multipath interference is a common cause of "ghosting" in analog television broadcasts and of fading of radio waves.
The condition necessary is that the components of the wave remain coherent throughout the whole extent of their travel.
The interference will arise owing to the two (or more) components of the wave having, in general, travelled a different length (as measured by optical path length – geometric length and refraction (differing optical speed)), and thus arriving at the detector with different phases.
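A minimal two-path sketch of this condition, assuming unit amplitudes and perfect coherence (the wavelength and path differences are illustrative): the components add constructively when the path difference is a whole number of wavelengths and cancel at half-integer differences.

```python
import cmath
import math

# Resultant amplitude of two coherent unit-amplitude paths: |1 + exp(i*phi)|,
# where phi = 2*pi * (path difference) / wavelength.
WAVELENGTH = 1.0  # arbitrary units

for delta in (0.0, 0.25, 0.5, 1.0):
    phi = 2.0 * math.pi * delta / WAVELENGTH
    amplitude = abs(1.0 + cmath.exp(1j * phi))
    print(f"path difference {delta:4.2f} wavelengths -> amplitude {amplitude:.3f}")
```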
Document 1:::
In telecommunications, interference is anything that modifies a signal in a disruptive manner as it travels along a communication channel between its source and receiver. The term is often used to refer to the addition of unwanted signals to a useful signal. Common examples include:
Electromagnetic interference (EMI)
Co-channel interference (CCI), also known as crosstalk
Adjacent-channel interference (ACI)
Intersymbol interference (ISI)
Inter-carrier interference (ICI), caused by doppler shift in OFDM modulation (multitone modulation).
Common-mode interference (CMI)
Conducted interference
Noise is a form of interference but not all interference is noise.
Radio resource management aims at reducing and controlling the co-channel and adjacent-channel interference.
Interference alignment
A solution to interference problems in wireless communication networks is interference alignment, which was crystallized by Syed Ali Jafar at the University of California, Irvine. A specialized application was previously studied by Yitzhak Birk and Tomer Kol for an index coding problem in 1998. For interference management in wireless communication, interference alignment was originally introduced by Mohammad Ali Maddah-Ali, Abolfazl S. Motahari, and Amir Keyvan Khandani, at the University of Waterloo, for communication over wireless X channels. Interference alignment was eventually established as a general principle by Jafar and Viveck R. Cadambe in 2008, when they introduced "a mechanism to align an arbitrarily large number of interferers, leading to the surprising conclusion that wireless networks are not essentially interference limited." This led to the adoption of interference alignment in the design of wireless networks.
See also
Distortion
Inter-flow interference
Intra-flow interference
Meaconing
Signal-to-interference ratio (SIR)
Signal-to-noise plus interference (SNIR)
Document 2:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just as in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
Document 3:::
In telecommunication, intersymbol interference (ISI) is a form of distortion of a signal in which one symbol interferes with subsequent symbols. This is an unwanted phenomenon as the previous symbols have a similar effect as noise, thus making the communication less reliable. The spreading of the pulse beyond its allotted time interval causes it to interfere with neighboring pulses. ISI is usually caused by multipath propagation or the inherent linear or non-linear frequency response of a communication channel causing successive symbols to blur together.
The presence of ISI in the system introduces errors in the decision device at the receiver output. Therefore, in the design of the transmitting and receiving filters, the objective is to minimize the effects of ISI, and thereby deliver the digital data to its destination with the smallest error rate possible.
Ways to alleviate intersymbol interference include adaptive equalization and error correcting codes.
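As a minimal illustration of ISI (our sketch, not from the source; the symbol values and channel taps are arbitrary), consider a two-tap echo channel in Python/NumPy:

    import numpy as np

    # Bipolar symbol stream and a two-tap channel h = [1, 0.5]: the second
    # tap models a delayed, attenuated echo such as a multipath reflection.
    symbols = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
    h = np.array([1.0, 0.5])

    # Convolving with the channel smears each symbol into the next sampling
    # instant; this mixing of neighbouring symbols is ISI.
    received = np.convolve(symbols, h)
    print(received)  # [ 1.  -0.5  0.5  1.5 -0.5 -0.5]

An equalizer approximately inverts h to undo this mixing, which is one form of the adaptive equalization mentioned above.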
Causes
Multipath propagation
One of the causes of intersymbol interference is multipath propagation in which a wireless signal from a transmitter reaches the receiver via multiple paths. The causes of this include reflection (for instance, the signal may bounce off buildings), refraction (such as through the foliage of a tree) and atmospheric effects such as atmospheric ducting and ionospheric reflection. Since the various paths can be of different lengths, this results in the different versions of the signal arriving at the receiver at different times. These delays mean that part or all of a given symbol will be spread into the subsequent symbols, thereby interfering with the correct detection of those symbols. Additionally, the various paths often distort the amplitude and/or phase of the signal, thereby causing further interference with the received signal.
Bandlimited channels
Another cause of intersymbol interference is the transmission of a signal through a bandlimited channel, i.e., one where the
Document 4:::
The Fresnel–Arago laws are three laws which summarise some of the more important properties of interference between light of different states of polarization. Augustin-Jean Fresnel and François Arago both discovered the laws, which bear their names.
The laws are as follows:
Two orthogonal, coherent linearly polarized waves cannot interfere.
Two parallel coherent linearly polarized waves will interfere in the same way as natural light.
The two constituent orthogonal linearly polarized states of natural light cannot interfere to form a readily observable interference pattern, even if rotated into alignment (because they are incoherent).
One may understand this more clearly when considering two waves, given by the forms $\mathbf{E}_1(\mathbf{r},t) = \mathbf{E}_{01}\cos(\mathbf{k}_1\cdot\mathbf{r} - \omega t + \varepsilon_1)$ and $\mathbf{E}_2(\mathbf{r},t) = \mathbf{E}_{02}\cos(\mathbf{k}_2\cdot\mathbf{r} - \omega t + \varepsilon_2)$, where the boldface indicates that the relevant quantity is a vector, interfering. We know that the intensity of light goes as the electric field squared (in fact, $I \propto \langle \mathbf{E}^2 \rangle$, where the angled brackets denote a time average), and so we just add the fields before squaring them. Extensive algebra yields an interference term in the intensity of the resultant wave, namely:

$I_{12} \propto \mathbf{E}_{01} \cdot \mathbf{E}_{02} \cos\delta$,

where $\delta = (\mathbf{k}_1 - \mathbf{k}_2)\cdot\mathbf{r} + \varepsilon_1 - \varepsilon_2$ represents the phase difference arising from a combined path length and initial phase-angle difference.

Now it can be seen that if $\mathbf{E}_{01}$ is perpendicular to $\mathbf{E}_{02}$ (as in the case of the first Fresnel–Arago law), $I_{12} = 0$ and there is no interference. On the other hand, if $\mathbf{E}_{01}$ is parallel to $\mathbf{E}_{02}$ (as in the case of the second Fresnel–Arago law), the interference term produces a variation in the light intensity corresponding to $\cos\delta$. Finally, if natural light is decomposed into orthogonal linear polarizations (as in the third Fresnel–Arago law), these states are incoherent, meaning that the phase difference $\delta$ will be fluctuating so quickly and randomly that after time-averaging we have $\langle\cos\delta\rangle = 0$, so again $I_{12} = 0$ and there is no interference (even if $\mathbf{E}_{01}$ is rotated so that it is parallel to $\mathbf{E}_{02}$).
See also
Unpolarized light
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What kind of interference is observed when the paths differ by a whole wavelength, and the waves arrive in phase?
A. non-interference
B. spontaneous interference
C. constructive interference
D. necessary interference
Answer:
|
|
sciq-8176
|
multiple_choice
|
What is another name for trisomy 21?
|
[
"Fragile X",
"Tay-Sachs Disease",
"down syndrome",
"cystic fibrosis"
] |
C
|
Relevant Documents:
Document 0:::
Down syndrome is a chromosomal abnormality characterized by the presence of an extra copy of genetic material on chromosome 21, either in whole (trisomy 21) or part (such as due to translocations). The effects of the extra copy vary greatly from individual to individual, depending on the extent of the extra copy, genetic background, environmental factors, and random chance. Down syndrome can occur in all human populations, and analogous effects have been found in other species, such as chimpanzees and mice. In 2005, researchers were able to create transgenic mice with most of human chromosome 21 (in addition to their normal chromosomes).
In a typical human karyotype, every chromosome is present in two copies. The sex chromosomes differ between males (XY) and females (XX), but this difference does not concern us here. A typical human karyotype is designated as 46,XX or 46,XY, indicating 46 chromosomes with an XX arrangement for females and 46 chromosomes with an XY arrangement for males. For this article, we will use females for the karyotype designation (46,XX).
Trisomy 21
Trisomy 21 (47,XX,+21) is caused by a meiotic nondisjunction event. A typical gamete (either egg or sperm) has one copy of each chromosome (23 total). When it is combined with a gamete from the other parent during conception, the child has 46 chromosomes. However, with nondisjunction, a gamete is produced with an extra copy of chromosome 21 (the gamete has 24 chromosomes). When combined with a typical gamete from the other parent, the child now has 47 chromosomes, with three copies of chromosome 21; in a trisomy 21 karyotype, the extra chromosome 21 is prominent.
Trisomy 21 is the cause of approximately 95% of observed Down syndrome, with 88% coming from nondisjunction in the maternal gamete and 8% coming from nondisjunction in the paternal gamete. Mitotic nondisjunction after conception would lead to mosaicism, and is discussed later.
Document 1:::
Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers.
There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by organizations such as the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (Humanities, Arts, and Social Sciences), rebranded in 2020 as SHAPE (Social Sciences, Humanities and the Arts for People and the Economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM.
Terminology
History
Previously referred to as SMET by the NSF, in the early 1990s the acronym STEM was used by a variety of educators, including Charles E. Vela, the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE). Moreover, the CAHSEE started a summer program for talented under-represented students in the Washington, D.C., area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering edu
Document 2:::
Trisomy 18, also known as Edwards syndrome, is a genetic disorder caused by the presence of a third copy of all or part of chromosome 18. Many parts of the body are affected. Babies are often born small and have heart defects. Other features include a small head, small jaw, clenched fists with overlapping fingers, and severe intellectual disability.
Most cases of trisomy 18 occur due to problems during the formation of the reproductive cells or during early development. The chance of this condition occurring increases with the mother's age. Rarely, cases may be inherited. Occasionally, not all cells have the extra chromosome, known as mosaic trisomy, and symptoms in these cases may be less severe. An ultrasound during pregnancy can increase suspicion for the condition, which can be confirmed by amniocentesis.
Treatment is supportive. After having one child with the condition, the risk of having a second is typically around one percent. It is the second-most common condition due to a third chromosome at birth, after Down syndrome.
Trisomy 18 occurs in around 1 in 5,000 live births. Many of those affected die before birth. Some studies suggest that, among babies who survive to birth, more are female. Survival beyond a year of life is around 5–10%. It is named after English geneticist John Hilton Edwards, who first described the syndrome in 1960.
Signs and symptoms
Children born with Edwards' syndrome may have some or all of these characteristics: kidney malformations, structural heart defects at birth (i.e., ventricular septal defect, atrial septal defect, patent ductus arteriosus), intestines protruding outside the body (omphalocele), esophageal atresia, intellectual disability, developmental delays, growth deficiency, feeding difficulties, breathing difficulties, and arthrogryposis (a muscle disorder that causes multiple joint contractures at birth).
Some physical malformations associated with Edwards' syndrome include small head (microcephaly) accompanied by a promi
Document 3:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. An interest developed at primary school can lead secondary school pupils to choose science A levels, which in turn can lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it has received funding from the Department for Children, Schools and Families and the Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 4:::
Kathryn "Kay" McGee (née Greene, May 6, 1920, in Chicago, Illinois – February 16, 2012 in River Forest, Illinois) was an American activist, recognized for founding two of the first organizations for the benefit of those with Down Syndrome. She worked seeking recognition, rights and opportunities for people with Down Syndrome.
The birth of her fourth child, Tricia McGee, on March 16, 1960, commenced a decades-long effort to bring parents of children with Down Syndrome together to create medical and educational options for such children. Tricia McGee was diagnosed as a mongoloid shortly after birth, which is what doctors called a person with Down Syndrome when Tricia was born, but is now considered a slur. Down Syndrome is a genetic disorder that was first described in 1866 by British doctor John L. Down. It was discovered to be caused by an extra chromosome by French pediatrician Jérôme Lejeune in July 1958, less than two years before Tricia was born. Medical advice in 1960 was typically to institutionalize children with Down Syndrome. After Tricia's birth in 1960, the family pediatrician recommended that the McGees place her in an institution rather than bring her home from the hospital. A few years later when he saw her functioning well at the Alcuin Montessori School in River Forest, Illinois, he explained that he had been told in medical school to make that recommendation to people, and said that he would never do so again. After bringing Tricia home and adjusting to the reality that such an infant faces exceptional developmental challenges, Kay and Martin attempted to learn about Down Syndrome and find similarly situated parents in the Chicago area.
Early experience and efforts at organizing parents
Within six months Kay determined that there were children with Down Syndrome in communities but that they were not visible as society was not accepting and parents were protective of their vulnerable family members. In late 1960 Kay invited those parents she was ab
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is another name for trisomy 21?
A. Fragile X
B. Tay-Sachs Disease
C. down syndrome
D. cystic fibrosis
Answer:
|
|
sciq-10058
|
multiple_choice
|
What returns blood from capillaries to an atrium?
|
[
"the liver",
"the lymph system",
"veins",
"arteries"
] |
C
|
Relevant Documents:
Document 0:::
Great vessels are the large vessels that bring blood to and from the heart. These are:
Superior vena cava
Inferior vena cava
Pulmonary arteries
Pulmonary veins
Aorta
Transposition of the great vessels is a group of congenital heart defects involving an abnormal spatial arrangement of any of the great vessels.
Document 1:::
The blood circulatory system is a system of organs that includes the heart, blood vessels, and blood which is circulated throughout the entire body of a human or other vertebrate. It includes the cardiovascular system, or vascular system, that consists of the heart and blood vessels (from Greek kardia meaning heart, and from Latin vascula meaning vessels). The circulatory system has two divisions, a systemic circulation or circuit, and a pulmonary circulation or circuit. Some sources use the terms cardiovascular system and vascular system interchangeably with the circulatory system.
The network of blood vessels comprises the great vessels of the heart, including large elastic arteries and large veins; other arteries; smaller arterioles; capillaries that join with venules (small veins); and other veins. The circulatory system is closed in vertebrates, which means that the blood never leaves the network of blood vessels. Some invertebrates such as arthropods have an open circulatory system. Diploblasts such as sponges and comb jellies lack a circulatory system.
Blood is a fluid consisting of plasma, red blood cells, white blood cells, and platelets; it is circulated around the body carrying oxygen and nutrients to the tissues and collecting and disposing of waste materials. Circulated nutrients include proteins and minerals and other components include hemoglobin, hormones, and gases such as oxygen and carbon dioxide. These substances provide nourishment, help the immune system to fight diseases, and help maintain homeostasis by stabilizing temperature and natural pH.
In vertebrates, the lymphatic system is complementary to the circulatory system. The lymphatic system carries excess plasma (filtered from the circulatory system capillaries as interstitial fluid between cells) away from the body tissues via accessory routes that return excess fluid back to blood circulation as lymph. The lymphatic system is a subsystem that is essential for the functioning of the bloo
Document 2:::
The pulmonary circulation is a division of the circulatory system in all vertebrates. The circuit begins with deoxygenated blood returned from the body to the right atrium of the heart where it is pumped out from the right ventricle to the lungs. In the lungs the blood is oxygenated and returned to the left atrium to complete the circuit.
The other division of the circulatory system is the systemic circulation that begins with receiving the oxygenated blood from the pulmonary circulation into the left atrium. From the atrium the oxygenated blood enters the left ventricle where it is pumped out to the rest of the body, returning as deoxygenated blood back to the pulmonary circulation.
The blood vessels of the pulmonary circulation are the pulmonary arteries and the pulmonary veins.
A separate circulatory circuit known as the bronchial circulation supplies oxygenated blood to the tissue of the larger airways of the lung.
Structure
Deoxygenated blood leaves the heart, goes to the lungs, and then re-enters the heart. Blood returning from the body enters the right atrium and is pumped through the tricuspid valve (or right atrioventricular valve) into the right ventricle. Blood is then pumped from the right ventricle through the pulmonary valve and into the pulmonary artery.
Lungs
The pulmonary arteries carry deoxygenated blood to the lungs, where carbon dioxide is released and oxygen is picked up during respiration. Arteries are further divided into very fine capillaries which are extremely thin-walled. The pulmonary veins return oxygenated blood to the left atrium of the heart.
Veins
Oxygenated blood leaves the lungs through pulmonary veins, which return it to the left part of the heart, completing the pulmonary cycle. This blood then enters the left atrium, which pumps it through the mitral valve into the left ventricle. From the left ventricle, the blood passes through the aortic valve to the
Document 3:::
Veins are blood vessels in the circulatory system of humans and most other animals that carry blood toward the heart. Most veins carry deoxygenated blood from the tissues back to the heart; exceptions are those of the pulmonary and fetal circulations, which carry oxygenated blood to the heart. In the systemic circulation, arteries carry oxygenated blood away from the heart, and veins return deoxygenated blood to the heart, largely via the deep veins.
There are three sizes of veins: large, medium, and small. Smaller veins are called venules, and the smallest, the post-capillary venules, are microscopic vessels that make up the veins of the microcirculation. Veins are often closer to the skin than arteries.
Veins have less smooth muscle and connective tissue and wider internal diameters than arteries. Because of their thinner walls and wider lumens they are able to expand and hold more blood; this greater capacity is why they are known as capacitance vessels. At any time, nearly 70% of the total volume of blood in the human body is in the veins. In medium and large-sized veins the flow of blood is maintained by one-way (unidirectional) venous valves that prevent backflow. In the lower limbs this is also aided by muscle pumps, also known as venous pumps, that exert pressure on intramuscular veins when they contract and drive blood back to the heart.
Structure
There are three sizes of vein: large, medium, and small. Smaller veins are called venules. The smallest veins are the post-capillary venules. Veins have a similar three-layered structure to arteries. The layers, known as tunicae, have a concentric arrangement that forms the wall of the vessel. The outer layer is a thick layer of connective tissue called the tunica externa or adventitia; this layer is absent in the post-capillary venules. The middle layer consists of bands of smooth muscle and is known as the tunica media. The inner layer is a thin lining of endothelium known as the tunica intima. The tunica media in the veins is mu
Document 4:::
In haemodynamics, the body must respond to physical activities, external temperature, and other factors by homeostatically adjusting its blood flow to deliver nutrients such as oxygen and glucose to stressed tissues and allow them to function. Haemodynamic response (HR) allows the rapid delivery of blood to active neuronal tissues. The brain consumes large amounts of energy but does not have a reservoir of stored energy substrates. Since higher processes in the brain occur almost constantly, cerebral blood flow is essential for the maintenance of neurons, astrocytes, and other cells of the brain. This coupling between neuronal activity and blood flow is also referred to as neurovascular coupling.
Vascular anatomy overview
In order to understand how blood is delivered to cranial tissues, it is important to understand the vascular anatomy of the space itself. Large cerebral arteries in the brain split into smaller arterioles, also known as pial arteries. These consist of endothelial cells and smooth muscle cells, and as these pial arteries further branch and run deeper into the brain, they associate with glial cells, namely astrocytes. The intracerebral arterioles and capillaries are unlike systemic arterioles and capillaries in that they do not readily allow substances to diffuse through them; they are connected by tight junctions in order to form the blood brain barrier (BBB). Endothelial cells, smooth muscle, neurons, astrocytes, and pericytes work together in the brain in order to maintain the BBB while still delivering nutrients to tissues and adjusting blood flow in the intracranial space to maintain homeostasis. As they work as a functional neurovascular unit, alterations in their interactions at the cellular level can impair HR in the brain and lead to deviations in normal nervous function.
Mechanisms
Various cell types play a role in HR, including astrocytes, smooth muscle cells, endothelial cells of blood vessels, and pericytes. These cells control whether th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What returns blood from capillaries to an atrium?
A. the liver
B. the lymph system
C. veins
D. arteries
Answer:
|
|
sciq-2501
|
multiple_choice
|
The entire volume of what is filtered through the kidneys about 300 times per day?
|
[
"blood",
"saliva",
"urine",
"gastric juice"
] |
A
|
Relevant Documents:
Document 0:::
Body fluids, bodily fluids, or biofluids, sometimes body liquids, are liquids within the human body. In lean healthy adult men, the total body water is about 60% (60–67%) of the total body weight; it is usually slightly lower in women (52–55%). The exact percentage of fluid relative to body weight is inversely proportional to the percentage of body fat. A lean man, for example, has about 42 (42–47) liters of water in his body.
The total body water is divided into fluid compartments, between the intracellular fluid compartment (also called space, or volume) and the extracellular fluid (ECF) compartment (space, volume), in a two-to-one ratio: 28 (28–32) liters are inside cells and 14 (14–15) liters are outside cells.
The ECF compartment is divided into the interstitial fluid volume – the fluid outside both the cells and the blood vessels – and the intravascular volume (also called the vascular volume and blood plasma volume) – the fluid inside the blood vessels – in a three-to-one ratio: the interstitial fluid volume is about 12 liters; the vascular volume is about 4 liters.
The interstitial fluid compartment is divided into the lymphatic fluid compartment – about 2/3, or 8 (6–10) liters, and the transcellular fluid compartment (the remaining 1/3, or about 4 liters).
The vascular volume is divided into the venous volume and the arterial volume; and the arterial volume has a conceptually useful but unmeasurable subcompartment called the effective arterial blood volume.
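The ratios above can be turned into a quick back-of-envelope calculator. The following Python sketch is illustrative only (the function name and the 70 kg example are ours, and it assumes 1 kg of water is roughly 1 L); note that the excerpt's quoted interstitial and vascular volumes (about 12 L and 4 L) run slightly higher than a strict 3-to-1 split of 14 L would give:

    def fluid_compartments(weight_kg, water_fraction=0.60):
        """Approximate fluid compartment volumes in litres, using the
        ratios quoted above: total body water ~60% of body weight,
        split 2:1 intracellular:extracellular, with the extracellular
        fluid split 3:1 interstitial:vascular."""
        total = weight_kg * water_fraction      # total body water
        icf = total * 2.0 / 3.0                 # intracellular fluid
        ecf = total / 3.0                       # extracellular fluid
        return {
            "total body water": total,
            "intracellular": icf,
            "extracellular": ecf,
            "interstitial": ecf * 3.0 / 4.0,
            "vascular": ecf / 4.0,
        }

    # A lean 70 kg adult: ~42 L total, ~28 L intracellular, ~14 L
    # extracellular (~10.5 L interstitial, ~3.5 L vascular).
    print(fluid_compartments(70))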
Compartments by location
intracellular fluid (ICF), which consists of cytosol and fluids in the cell nucleus
Extracellular fluid
Intravascular fluid (blood plasma)
Interstitial fluid
Lymphatic fluid (sometimes included in interstitial fluid)
Transcellular fluid
Health
Body fluid is the term most often used in medical and health contexts. Modern medical, public health, and personal hygiene practices treat body fluids as potentially unclean. This is because they can be vectors for infectious
Document 1:::
The Starling principle holds that extracellular fluid movements between blood and tissues are determined by differences in hydrostatic pressure and colloid osmotic (oncotic) pressure between plasma inside microvessels and interstitial fluid outside them. The Starling Equation, proposed many years after the death of Starling, describes that relationship in mathematical form and can be applied to many biological and non-biological semipermeable membranes. The classic Starling principle and the equation that describes it have in recent years been revised and extended.
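For reference, the classic form of the equation (a standard statement supplied here; it is not quoted in this excerpt, and the recent revisions mentioned above adjust how its terms are interpreted) balances hydrostatic and oncotic pressure differences across the microvessel wall:

    J_v = K_f \left[ (P_c - P_i) - \sigma (\pi_c - \pi_i) \right]

    % J_v : net fluid movement across the capillary wall
    % K_f : filtration coefficient (hydraulic conductance x surface area)
    % P_c, P_i : capillary and interstitial hydrostatic pressures
    % \pi_c, \pi_i : capillary and interstitial colloid osmotic (oncotic) pressures
    % \sigma : reflection coefficient for plasma proteins (between 0 and 1)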
Every day around 8 litres of water (solvent) containing a variety of small molecules (solutes) leaves the blood stream of an adult human and perfuses the cells of the various body tissues. Interstitial fluid drains by afferent lymph vessels to one of the regional lymph node groups, where around 4 litres per day is reabsorbed to the blood stream. The remainder of the lymphatic fluid is rich in proteins and other large molecules and rejoins the blood stream via the thoracic duct which empties into the great veins close to the heart. Filtration from plasma to interstitial (or tissue) fluid occurs in microvascular capillaries and post-capillary venules. In most tissues the micro vessels are invested with a continuous internal surface layer that includes a fibre matrix now known as the endothelial glycocalyx whose interpolymer spaces function as a system of small pores, radius circa 5 nm. Where the endothelial glycocalyx overlies a gap in the junction molecules that bind endothelial cells together (inter endothelial cell cleft), the plasma ultrafiltrate may pass to the interstitial space, leaving larger molecules reflected back into the plasma.
A small number of continuous capillaries are specialised to absorb solvent and solutes from interstitial fluid back into the blood stream through fenestrations in endothelial cells, but the volume of solvent absorbed every day is small.
Discontinuous capillaries as
Document 2:::
Urine is a liquid by-product of metabolism in humans and in many other animals. Urine flows from the kidneys through the ureters to the urinary bladder. Urination results in urine being excreted from the body through the urethra.
Cellular metabolism generates many by-products that are rich in nitrogen and must be cleared from the bloodstream, such as urea, uric acid, and creatinine. These by-products are expelled from the body during urination, which is the primary method for excreting water-soluble chemicals from the body. A urinalysis can detect nitrogenous wastes of the mammalian body.
Urine plays an important role in the earth's nitrogen cycle. In balanced ecosystems, urine fertilizes the soil and thus helps plants to grow. Therefore, urine can be used as a fertilizer. Some animals use it to mark their territories. Historically, aged or fermented urine (known as lant) was also used for gunpowder production, household cleaning, tanning of leather and dyeing of textiles.
Human urine and feces are collectively referred to as human waste or human excreta, and are managed via sanitation systems. Livestock urine and feces also require proper management if the livestock population density is high.
Physiology
Most animals have excretory systems for elimination of soluble toxic wastes. In humans, soluble wastes are excreted primarily by the urinary system and, to a lesser extent in terms of urea, removed by perspiration. The urinary system consists of the kidneys, ureters, urinary bladder, and urethra. The system produces urine by a process of filtration, reabsorption, and tubular secretion. The kidneys extract the soluble wastes from the bloodstream, as well as excess water, sugars, and a variety of other compounds. The resulting urine contains high concentrations of urea and other substances, including toxins. Urine flows from the kidneys through the ureter, bladder, and finally the urethra before passing from the body.
Duration
Research looking at the duration
Document 3:::
Assessment of kidney function occurs in different ways, using the presence of symptoms and signs, as well as measurements using urine tests, blood tests, and medical imaging.
Functions of a healthy kidney include maintaining a person's fluid balance; maintaining acid-base balance; regulating electrolytes, including sodium, potassium, and others; clearing toxins; regulating blood pressure; regulating hormones, such as erythropoietin; and activating vitamin D.
Description
The functions of the kidney include maintenance of acid-base balance; regulation of fluid balance; regulation of sodium, potassium, and other electrolytes; clearance of toxins; absorption of glucose, amino acids, and other small molecules; regulation of blood pressure; production of various hormones, such as erythropoietin; and activation of vitamin D.
The GFR is regarded as the best overall measure of the kidney's ability to carry out these numerous functions. An estimate of the GFR is used clinically to determine the degree of kidney impairment and to track the progression of the disease. The GFR, however, does not reveal the source of the kidney disease. This is accomplished by urinalysis, measurement of urine protein excretion, kidney imaging, and, if necessary, kidney biopsy.
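As a back-of-envelope check (our illustration, assuming typical adult values of roughly 1 L/min renal blood flow and 5 L total blood volume), the often-quoted figure that the entire blood volume passes through the kidneys a few hundred times per day follows directly:

    \frac{1\,\mathrm{L/min} \times 1440\,\mathrm{min/day}}{5\,\mathrm{L}} \approx 290\ \text{passes per day}

By contrast, a typical GFR of about 125 mL/min corresponds to roughly 180 L of filtrate formed per day.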
Much of renal physiology is studied at the level of the nephron, the smallest functional unit of the kidney. Each nephron begins with a filtration component that filters the blood entering the kidney. This filtrate then flows along the length of the nephron, which is a tubular structure lined by a single layer of specialized cells and surrounded by capillaries. The major functions of these lining cells are the reabsorption of water and small molecules from the filtrate into the blood, and the secretion of wastes from the blood into the urine.
Proper function of the kidney requires that it receives and adequately filters blood. This is performed at the microscopic level by many hundreds of thousa
Document 4:::
The human body and even its individual body fluids may be conceptually divided into various fluid compartments, which, although not literally anatomic compartments, do represent a real division in terms of how portions of the body's water, solutes, and suspended elements are segregated. The two main fluid compartments are the intracellular and extracellular compartments. The intracellular compartment is the space within the organism's cells; it is separated from the extracellular compartment by cell membranes.
About two-thirds of the total body water of humans is held in the cells, mostly in the cytosol, and the remainder is found in the extracellular compartment. The extracellular fluids may be divided into three types: interstitial fluid in the "interstitial compartment" (surrounding tissue cells and bathing them in a solution of nutrients and other chemicals), blood plasma and lymph in the "intravascular compartment" (inside the blood vessels and lymphatic vessels), and small amounts of transcellular fluid such as ocular and cerebrospinal fluids in the "transcellular compartment".
The normal processes by which life self-regulates its biochemistry (homeostasis) produce fluid balance across the fluid compartments. Water and electrolytes are continuously moving across barriers (e.g., cell membranes, vessel walls), albeit often in small amounts, to maintain this healthy balance. The movement of these molecules is controlled and restricted by various mechanisms. When illnesses upset the balance, electrolyte imbalances can result.
The interstitial and intravascular compartments readily exchange water and solutes, but the third extracellular compartment, the transcellular, is thought of as separate from the other two and not in dynamic equilibrium with them.
The science of fluid balance across fluid compartments has practical application in intravenous therapy, where doctors and nurses must predict fluid shifts and decide which IV fluids to give (for example, isot
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The entire volume of what is filtered through the kidneys about 300 times per day?
A. blood
B. saliva
C. urine
D. gastric juice
Answer:
|
|
sciq-7218
|
multiple_choice
|
Living organisms are comprised of organic compounds, which are molecules built around what element?
|
[
"oxygen",
"helium",
"silicon",
"carbon"
] |
D
|
Relevant Documents:
Document 0:::
Assembly theory is a hypothesis that characterizes object complexity. When applied to molecule complexity, its authors claim it to be the first technique that is experimentally verifiable, unlike other molecular complexity algorithms that lack experimental measure. The theory was developed as a means to detect evidence of extraterrestrial life from data gathered by astronomical observations or probes.
Background
The hypothesis was proposed by chemist Leroy Cronin and developed by the team he leads at the University of Glasgow, then extended in collaboration with a team at Arizona State University led by astrobiologist Sara Imari Walker. It is difficult to identify chemical signatures that are unique to life. For example, the Viking lander biological experiments detected molecules that could be explained by either living or natural non-living processes.
Assembly theory outputs how complex a given object is as a function of the number of independent parts and their abundances. To calculate how complex an item is, it is recursively divided into its component parts. The 'assembly index' is defined as the shortest path to put the object back together.
For example, the word 'abracadabra' contains 5 unique letters (a, b, c, d and r) and is 11 symbols long. It can be assembled from its constituents as a + b --> ab + r --> abr + a --> abra + c --> abrac + a --> abraca + d --> abracad + abra --> abracadabra, because 'abra' was already constructed at an earlier stage. Because this requires 7 steps, the assembly index is 7. The string ‘abcdefghijk’ has no repeats so has an assembly index of 10.
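The "shortest path" idea lends itself to a small brute-force search. The Python sketch below is our illustration, not the authors' published algorithm: it models assembly of a string by repeatedly joining any two already-built strings (starting from the individual characters) and searches for the minimum number of joins. It is exponential-time, so it is only practical for short strings; the same search applied to 'abracadabra' should reproduce the index of 7 quoted above, but slows down quickly as strings grow, so we demo on a shorter string:

    def assembly_index(target):
        """Minimum number of joining steps needed to build `target`,
        starting from its individual characters and reusing any string
        built along the way. Brute-force depth-first search with
        pruning; exponential time, for short strings only."""
        substrings = {target[i:j]
                      for i in range(len(target))
                      for j in range(i + 1, len(target) + 1)}
        best = [len(target) - 1]  # trivial bound: join one character at a time

        def dfs(pool, steps):
            if target in pool:
                best[0] = min(best[0], steps)
                return
            if steps + 1 >= best[0]:  # even one more join cannot beat best
                return
            for x in pool:
                for y in pool:
                    joined = x + y
                    # Only keep pieces that can still appear in the target.
                    if joined in substrings and joined not in pool:
                        dfs(pool | {joined}, steps + 1)

        dfs(frozenset(target), 0)
        return best[0]

    # 3 joins: a+b -> ab, ab+c -> abc, abc+abc -> abcabc
    print(assembly_index("abcabc"))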
While other approaches can provide a measure of complexity, the researchers claim that assembly theory's molecular assembly number is the first to be measurable experimentally. They argue that the molecular assembly number can be used to gauge the improbability that a complex molecule was created without life, with a higher number of steps corresponding to a higher improbability. Th
Document 1:::
Biotic material or biological derived material is any material that originates from living organisms. Most such materials contain carbon and are capable of decay.
The earliest life on Earth arose at least 3.5 billion years ago. Earlier physical evidence of life includes graphite, a biogenic substance, in 3.7-billion-year-old metasedimentary rocks discovered in southwestern Greenland, as well as "remains of biotic life" found in 4.1-billion-year-old rocks in Western Australia. Earth's biodiversity has expanded continually except when interrupted by mass extinctions. Although scholars estimate that over 99 percent of all species of life (over five billion) that ever lived on Earth are extinct, there are still an estimated 10–14 million extant species, of which about 1.2 million have been documented and over 86% have not yet been described.
Examples of biotic materials are wood, straw, humus, manure, bark, crude oil, cotton, spider silk, chitin, fibrin, and bone.
The use of biotic materials and processed biotic materials (bio-based materials) as alternatives to synthetics is popular with those who are environmentally conscious, because such materials are usually biodegradable and renewable, and their processing is commonly understood and has minimal environmental impact. However, not all biotic materials are used in an environmentally friendly way, such as those that require high levels of processing, are harvested unsustainably, or are used to produce carbon emissions.
When the source of the recently living material has little importance to the product produced, such as in the production of biofuels, biotic material is simply called biomass. Many fuel sources may have biological sources, and may be divided roughly into fossil fuels, and biofuel.
In soil science, biotic material is often referred to as organic matter. Biotic materials in soil include glomalin, Dopplerite and humic acid. Some biotic material may not be considered to be organic matte
Document 2:::
This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines.
Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mind: neuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, they have provided information on certain diseases, which has overall aided the understanding of human health.
Basic life science branches
Biology – scientific study of life
Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans
Astrobiology – the study of the formation and presence of life in the universe
Bacteriology – study of bacteria
Biotechnology – study of combination of both the living organism and technology
Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level
Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge
Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum
Document 3:::
This is a list of topics in molecular biology. See also index of biochemistry articles.
Document 4:::
Biochemistry is the study of the chemical processes in living organisms. It deals with the structure and function of cellular components such as proteins, carbohydrates, lipids, nucleic acids and other biomolecules.
Articles related to biochemistry include:
0–9
2-amino-5-phosphonovalerate - 3' end - 5' end
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Living organisms are comprised of organic compounds, which are molecules built around what element?
A. oxygen
B. helium
C. silicon
D. carbon
Answer:
|
|
sciq-5130
|
multiple_choice
|
What is the term for much bigger evolutionary changes that result in new species?
|
[
"macroevolution",
"recalibration",
"breaking away",
"regression"
] |
A
|
Relevant Documents:
Document 0:::
Megaevolution describes the most dramatic events in evolution. It is no longer suggested that the evolutionary processes involved are necessarily special, although in some cases they might be. Whereas macroevolution can apply to relatively modest changes that produced diversification of species and genera and are readily compared to microevolution, "megaevolution" is used for great changes. Megaevolution has been extensively debated because it has been seen as a possible objection to Charles Darwin's theory of gradual evolution by natural selection.
A list was prepared by John Maynard Smith and Eörs Szathmáry which they called The Major Transitions in Evolution. In the 1999 edition of the list they included:
Replicating molecules: change to populations of molecules in protocells
Independent replicators leading to chromosomes
RNA as gene and enzyme change to DNA genes and protein enzymes
Bacterial cells (prokaryotes) leading to cells (eukaryotes) with nuclei and organelles
Asexual clones leading to sexual populations
Single-celled organisms leading to fungi, plants and animals
Solitary individuals leading to colonies with non-reproducing castes (termites, ants & bees)
Primate societies leading to human societies with language
Some of these topics had been discussed before.
Numbers one to six on the list are events which are of huge importance, but about which we know relatively little. All occurred before (and mostly very much before) the fossil record started, or at least before the Phanerozoic eon.
Numbers seven and eight on the list are of a different kind from the first six, and have generally not been considered by the other authors. Number four is of a type which is not covered by traditional evolutionary theory: the origin of eukaryotic cells is probably due to symbiosis between prokaryotes. This is a kind of evolution which must be a rare event.
The Cambrian radiation example
The Cambrian explosion or Cambrian radiation was the relatively rapid appeara
Document 1:::
The history of life on Earth seems to show a clear trend; for example, it seems intuitive that there is a trend towards increasing complexity in living organisms. More recently evolved organisms, such as mammals, appear to be much more complex than organisms, such as bacteria, which have existed for a much longer period of time. However, there are theoretical and empirical problems with this claim. From a theoretical perspective, it appears that there is no reason to expect evolution to result in any largest-scale trends, although small-scale trends, limited in time and space, are expected (Gould, 1997). From an empirical perspective, it is difficult to measure complexity and, when it has been measured, the evidence does not support a largest-scale trend (McShea, 1996).
History
Many of the founding figures of evolution supported the idea of evolutionary progress, which has fallen from favour, but the work of Francisco J. Ayala and Michael Ruse suggests it is still influential.
Hypothetical largest-scale trends
McShea (1998) discusses eight features of organisms that might indicate largest-scale trends in evolution: entropy, energy intensiveness, evolutionary versatility, developmental depth, structural depth, adaptedness, size, complexity. He calls these "live hypotheses", meaning that trends in these features are currently being considered by evolutionary biologists. McShea observes that the most popular hypothesis, among scientists, is that there is a largest-scale trend towards increasing complexity.
Evolutionary theorists agree that there are local trends in evolution, such as increasing brain size in hominids, but these directional changes do not persist indefinitely, and trends in opposite directions also occur (Gould, 1997). Evolution causes organisms to adapt to their local environment; when the environment changes, the direction of the trend may change. The question of whether there is evolutionary progress is better formulated as the question of whether
Document 2:::
Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, the genetic variations affect the phenotypes (physical characteristics) of an organism. These changes in the phenotypes will be an advantage to some organisms, which will then be passed onto their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology.
The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis.
Subfields
Evolution is the central unifying concept in biology. Biology can be divided in various ways. One way is by the level of biological organization, from molecular to cell, organism to population. Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what was once seen as the major divisions of life. A third way is by approaches, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolution
Document 3:::
In evolutionary biology, megatrajectories are the major evolutionary milestones and directions in the evolution of life.
Posited by A. H. Knoll and Richard K. Bambach in their 2000 collaboration, "Directionality in the History of Life," Knoll and Bambach argue that, in consideration of the problem of progress in evolutionary history, a middle road that encompasses both contingent and convergent features of biological evolution may be attainable through the idea of the megatrajectory:
We believe that six broad megatrajectories capture the essence of vectoral change in the history of life. The megatrajectories form a logical sequence dictated by the necessity for complexity level N to exist before N+1 can evolve... In the view offered here, each megatrajectory adds new and qualitatively distinct dimensions to the way life utilizes ecospace.
According to Knoll and Bambach, the six megatrajectories outlined by biological evolution thus far are:
the origin of life to the "Last Common Ancestor"
prokaryote diversification
unicellular eukaryote diversification
multicellular organisms
land organisms
appearance of intelligence and technology
Milan M. Ćirković and Robert Bradbury have taken the megatrajectory concept one step further by theorizing that a seventh megatrajectory exists: postbiological evolution triggered by the emergence of artificial intelligence at least equivalent to the biologically evolved one, as well as the invention of several key technologies of a similar level of complexity and environmental impact, such as molecular nanoassembling or stellar uplifting.
See also
Intelligence principle
Document 4:::
Plant evolution is the subset of evolutionary phenomena that concern plants. Evolutionary phenomena are characteristics of populations that are described by averages, medians, distributions, and other statistical methods. This distinguishes plant evolution from plant development, a branch of developmental biology which concerns the changes that individuals go through in their lives. The study of plant evolution attempts to explain how the present diversity of plants arose over geologic time. It includes the study of genetic change and the consequent variation that often results in speciation, one of the most important types of radiation into taxonomic groups called clades. A description of radiation is called a phylogeny and is often represented by type of diagram called a phylogenetic tree.
Evolutionary trends
Differences between plant and animal physiology and reproduction cause minor differences in how they evolve.
One major difference is the totipotent nature of plant cells, allowing them to reproduce asexually much more easily than most animals. They are also capable of polyploidy – where more than two chromosome sets are inherited from the parents. This allows relatively fast bursts of evolution to occur, for example by the effect of gene duplication. The long periods of dormancy that seed plants can employ also makes them less vulnerable to extinction, as they can "sit out" the tough periods and wait until more clement times to leap back to life.
The effect of these differences is most profoundly seen during extinction events. These events, which wiped out between 6 and 62% of terrestrial animal families, had "negligible" effect on plant families. However, the ecosystem structure is significantly rearranged, with the abundances and distributions of different groups of plants changing profoundly. These effects are perhaps due to the higher diversity within families, as extinction – which was common at the species level – was very selective. For example, win
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for much bigger evolutionary changes that result in new species?
A. macroevolution
B. recalibration
C. breaking away
D. regression
Answer:
|
|
sciq-3190
|
multiple_choice
|
Upwelling mantle at the mid-ocean ridge pushes plates in which direction?
|
[
"inward",
"westward",
"outward",
"eastward"
] |
C
|
Relevant Documents:
Document 0:::
The term dynamic topography is used in geodynamics to refer to the elevation differences caused by the flow within Earth's mantle.
Definition
In geodynamics, dynamic topography refers to topography generated by the motion of zones of differing degrees of buoyancy (convection) in Earth's mantle. It is also seen as the residual topography obtained by removing the isostatic contribution from the observed topography (i.e., the topography that cannot be explained by an isostatic equilibrium of the crust or the lithosphere resting on a fluid mantle) and all observed topography due to post-glacial rebound. Elevation differences due to dynamic topography are frequently on the order of a few hundred meters to a couple of kilometers. Large scale surface features due to dynamic topography are mid-ocean ridges and oceanic trenches. Other prominent examples include areas overlying mantle plumes such as the African superswell.
The mid-ocean ridges are high due to dynamic topography because the upwelling hot material underneath them pushes them up above the surrounding seafloor. This provides an important driving force in plate tectonics called ridge push: the increased gravitational potential energy of the mid-ocean ridge due to its dynamic uplift causes it to extend and push the surrounding lithosphere away from the ridge axis. Dynamic topography and mantle density variations can explain 90% of the long-wavelength geoid after the hydrostatic ellipsoid is subtracted out.
Dynamic topography is the reason why the geoid is high over regions of low-density mantle. If the mantle were static, these low-density regions would be geoid lows. However, these low-density regions move upwards in a mobile, convecting mantle, elevating density interfaces such as the core-mantle boundary, the 410 and 670 kilometer discontinuities, and the Earth's surface. Since both the density and the dynamic topography provide approximately the same magnitude of change in the geoid, the resultant geoid is a relati
Document 1:::
InterRidge is a non-profit organisation that promotes interdisciplinary, international studies in the research of oceanic spreading centres, including mid-ocean ridge and back-arc basin systems. It does so by creating a global research community, planning and coordinating new science programmes that no single nation can achieve alone, exchanging scientific information, and sharing new technologies and facilities. InterRidge is dedicated to reaching out to the public, scientists and governments, and to providing a unified voice for ocean ridge researchers worldwide.
It was launched in 1992, and in 2011 InterRidge has 6 principal, 3 associate, and 21 corresponding member nations and regions. InterRidge has more than 2500 individual member scientists in disciplines ranging from marine geology to chemistry, biology, and ocean engineering.
The InterRidge Office rotates every 3 years. During 2013-2015, InterRidge is being hosted by the Institute of Theoretical and Applied Geophysics, Peking University, Beijing, China. InterRidge is governed by a steering committee consisting of delegates from the principal and associate member nations and regions.
Main functions
InterRidge has four main functions, which may be summarised as:
Building a community of ridge scientists
Identifying important scientific questions through working groups and workshops
Acting as a voice for ridge scientists
Education and outreach.
InterRidge serves as a "clearinghouse" for information on mid-ocean ridge research across the globe. InterRidge publishes an annual newsletter with preliminary results from field work, national and regional reports, and working group updates. InterRidge maintains 3 databases:
member database
research cruise database (past and upcoming cruises to the ridge crest)
database of active hydrothermal vent fields, established in 2000 (InterRidge Japan office)
Development
First decade (1992 - 2003)
InterRidge began at a meeting in France in 1990 that gathered ridge
Document 2:::
Slab pull is a geophysical mechanism whereby the cooling and subsequent densifying of a subducting tectonic plate produces a downward force along the rest of the plate. In 1975 Forsyth and Uyeda used the inverse theory method to show that, of the many forces likely to be driving plate motion, slab pull was the strongest. Plate motion is partly driven by the weight of cold, dense plates sinking into the mantle at oceanic trenches. This force and slab suction account for almost all of the force driving plate tectonics. The ridge push at rifts contributes only 5 to 10%.
Carlson et al. (1983), as cited in Lallemand et al. (2005), defined the slab pull force as:

F_sp = K · Δρ · L · √A

where:
K is a constant incorporating the gravitational acceleration (g = 9.81 m/s²), following McNutt (1984);
Δρ = 80 kg/m³ is the mean density difference between the slab and the surrounding asthenosphere;
L is the slab length, calculated only for the part above 670 km (the upper/lower mantle boundary);
A is the slab age at the trench, in Ma.
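Because the formula itself was garbled in extraction, the following minimal Python sketch evaluates the reconstructed scaling numerically. Treating K as simply g, the metre conversion, and the example slab parameters are all assumptions for illustration, so the output should be read as a relative magnitude rather than a calibrated force.

import math

def slab_pull(delta_rho=80.0, slab_length_km=500.0, age_ma=80.0, g=9.81):
    # Reconstructed Carlson-style scaling F_sp ~ K * drho * L * sqrt(A).
    # Assumptions (not confirmed by the excerpt): K is taken to be just g,
    # and sqrt(A) stands in for the slab's age-dependent thermal thickness.
    L = slab_length_km * 1e3    # only the slab above the 670 km discontinuity
    return g * delta_rho * L * math.sqrt(age_ma)

# Hypothetical example: an 80 Ma slab extending 500 km above 670 km depth.
print(f"relative slab-pull magnitude: {slab_pull():.3e}")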
The slab pull force manifests itself between two extreme forms:
The aseismic back-arc extension as in the Izu–Bonin–Mariana Arc.
And as the Aleutian and Chile tectonics with strong earthquakes and back-arc thrusting.
Between these two extremes lies the evolution of the Farallon Plate: from a huge slab width associated with the Nevadan, Sevier, and Laramide orogenies and the mid-Tertiary ignimbrite flare-up, to its remnants, the Juan de Fuca and Cocos plates, with the Basin and Range Province under extension, slab break-off, smaller slab width, more edges, and mantle return flow.
Some early models of plate tectonics envisioned the plates riding on top of convection cells like conveyor belts. However, most scientists working today believe that the asthenosphere does not directly cause motion by the friction of such basal forces. The North American Plate is nowhere being subducted, yet it is in motion, as are the African, Eurasian and Antarctic Plates. Ridge push is thought to be responsible for the motion of these plates
Document 3:::
The depth of the seafloor on the flanks of a mid-ocean ridge is determined mainly by the age of the oceanic lithosphere; older seafloor is deeper. During seafloor spreading, lithosphere and mantle cooling, contraction, and isostatic adjustment with age cause seafloor deepening. This relationship has come to be better understood since around 1969, with significant updates in 1974 and 1977. Two main theories have been put forward to explain this observation: one where the mantle, including the lithosphere, is cooling (the cooling mantle model), and a second where a lithosphere plate cools above a mantle at a constant temperature (the cooling plate model). The cooling mantle model explains the age-depth observations for seafloor younger than 80 million years. The cooling plate model explains the age-depth observations best for seafloor older than 20 million years. In addition, the cooling plate model explains the almost constant depth and heat flow observed in very old seafloor and lithosphere. In practice it is convenient to use the solution for the cooling mantle model for an age-depth relationship younger than 20 million years. Older than this, the cooling plate model fits the data as well. Beyond 80 million years the plate model fits better than the mantle model.
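To illustrate the two model families numerically, the sketch below evaluates a half-space (cooling mantle) fit and a plate-model fit; the coefficients are typical published values in the style of Parsons and Sclater (1977), assumed here for illustration rather than taken from the excerpt.

import math

def depth_cooling_mantle(age_myr):
    # Half-space (cooling mantle) fit: seafloor deepens as sqrt(age).
    return 2500.0 + 350.0 * math.sqrt(age_myr)          # metres below sea level

def depth_cooling_plate(age_myr):
    # Plate-model fit: depth flattens toward a constant for old seafloor.
    return 6400.0 - 3200.0 * math.exp(-age_myr / 62.8)  # metres below sea level

for age in (5, 20, 80, 160):
    print(f"{age:4d} Myr   mantle: {depth_cooling_mantle(age):6.0f} m   "
          f"plate: {depth_cooling_plate(age):6.0f} m")

The two curves agree closely for young seafloor and diverge beyond roughly 80 Myr, where only the plate model flattens out, matching the regimes described above.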
Background
The first theories for seafloor spreading in the early and mid twentieth century explained the elevations of the mid-ocean ridges as upwellings above convection currents in Earth's mantle.
The next idea connected seafloor spreading and continental drift in a model of plate tectonics. In 1969, the elevation of ridges was explained as thermal expansion of a lithospheric plate at the spreading center. This 'cooling plate model' was followed in 1974 by the observation that elevations of ridges could be modeled by cooling of the whole upper mantle, including any plate. This was followed in 1977 by a more refined plate model which explained data showing that both the ocean depths and ocean crust heat flow approach constant values for very old seafloor.
Document 4:::
The Vine–Matthews–Morley hypothesis, also known as the Morley–Vine–Matthews hypothesis, was the first key scientific test of the seafloor spreading theory of continental drift and plate tectonics. Its key impact was that it allowed the rates of plate motions at mid-ocean ridges to be computed. It states that the Earth's oceanic crust acts as a recorder of reversals in the geomagnetic field direction as seafloor spreading takes place.
History
Harry Hess proposed the seafloor spreading hypothesis in 1960 (published in 1962); the term "spreading of the seafloor" was introduced by geophysicist Robert S. Dietz in 1961. According to Hess, seafloor was created at mid-oceanic ridges by the convection of the earth's mantle, pushing and spreading the older crust away from the ridge. Geophysicist Frederick John Vine and the Canadian geologist Lawrence W. Morley independently realized that if Hess's seafloor spreading theory was correct, then the rocks surrounding the mid-oceanic ridges should show symmetric patterns of magnetization reversals, detectable in newly collected magnetic surveys. Morley's letters to both Nature (February 1963) and the Journal of Geophysical Research (April 1963) were rejected, so Vine and his PhD adviser at Cambridge University, Drummond Hoyle Matthews, were the first to publish the theory, in September 1963. Some colleagues were skeptical of the hypothesis because of the numerous assumptions made—seafloor spreading, geomagnetic reversals, and remanent magnetism—all hypotheses that were still not widely accepted. The Vine–Matthews–Morley hypothesis describes the magnetic reversals of oceanic crust. Further evidence for this hypothesis came from Allan V. Cox and colleagues (1964) when they measured the remanent magnetization of lavas from land sites. Walter C. Pitman and J. R. Heirtzler offered further evidence with a remarkably symmetric magnetic anomaly profile from the Pacific-Antarctic Ridge.
Marine magnetic anomalies
The Vine–Matthews–Morley hypothesis
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Upwelling mantle at the mid-ocean ridge pushes plates in which direction?
A. inward
B. westward
C. outward
D. eastward
Answer:
|
|
scienceQA-4049
|
multiple_choice
|
What do these two changes have in common?
breaking a piece of glass
slicing cheese
|
[
"Both are chemical changes.",
"Both are caused by heating.",
"Both are only physical changes.",
"Both are caused by cooling."
] |
C
|
Step 1: Think about each change.
Breaking a piece of glass is a physical change. The glass gets broken into pieces. But each piece is still made of the same type of matter.
Slicing cheese is a physical change. The cheese changes shape. But it is still made of the same type of matter.
Step 2: Look at each answer choice.
Both are only physical changes.
Both changes are physical changes. No new matter is created.
Both are chemical changes.
Both changes are physical changes. They are not chemical changes.
Both are caused by heating.
Neither change is caused by heating.
Both are caused by cooling.
Neither change is caused by cooling.
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferromagnetic materials can become magnetic. The process is reversible
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the students' chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college careers at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 3:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge is then a subset of Q; the set of all feasible states is the knowledge space itself.
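To make the set-theoretic definition concrete, here is a minimal Python sketch of a toy knowledge space on a three-skill domain; the domain, the family of feasible states, and the helper names are invented for illustration. It checks closure under union (a standard knowledge-space axiom) and computes which skills a learner in a given state is ready to learn.

from itertools import combinations

# Toy three-skill domain and a hand-picked family of feasible knowledge states.
Q = frozenset({"counting", "addition", "multiplication"})
states = {
    frozenset(),
    frozenset({"counting"}),
    frozenset({"counting", "addition"}),
    Q,
}

def is_union_closed(family):
    # A knowledge space must contain the union of any two of its states.
    return all(a | b in family for a, b in combinations(family, 2))

def outer_fringe(state, family):
    # Skills the learner is ready to learn: adding any one of them to the
    # current state yields another feasible state.
    return {s for s in Q - state if state | {s} in family}

print("union-closed:", is_union_closed(states))                            # True
print("ready to learn from scratch:", outer_fringe(frozenset(), states))   # {'counting'}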
Document 4:::
In physics, a dynamical system is said to be mixing if the phase space of the system becomes strongly intertwined, according to at least one of several mathematical definitions. For example, a measure-preserving transformation T is said to be strong mixing if

lim (n→∞) μ(T⁻ⁿA ∩ B) = μ(A) μ(B)

whenever A and B are any measurable sets and μ is the associated measure. Other definitions are possible, including weak mixing and topological mixing.
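The limit above can be checked numerically for a standard strongly mixing example, the doubling map T(x) = 2x mod 1 on [0, 1) with Lebesgue measure; the map and the interval sets are textbook choices assumed here, not taken from the excerpt. The sketch estimates μ(T⁻ⁿA ∩ B) by Monte Carlo.

import random

def T(x):
    # Doubling map on [0, 1): a standard strongly mixing transformation
    # with respect to Lebesgue measure.
    return (2.0 * x) % 1.0

def estimate(n, samples=200_000, seed=0):
    """Monte Carlo estimate of mu(T^-n A ∩ B) for A = [0, 0.3), B = [0.5, 0.9)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        x = rng.random()
        if 0.5 <= x < 0.9:            # x in B
            y = x
            for _ in range(n):        # y = T^n(x)
                y = T(y)
            if y < 0.3:               # T^n(x) in A, i.e. x in T^-n A
                hits += 1
    return hits / samples

mu_A, mu_B = 0.3, 0.4
for n in (0, 1, 5, 15):
    print(f"n={n:2d}: mu(T^-n A ∩ B) ≈ {estimate(n):.4f}   mu(A)·mu(B) = {mu_A * mu_B:.4f}")

For n = 0 the two sets are disjoint, so the product formula fails badly; as n grows the estimate settles near μ(A)μ(B) = 0.12, as the definition requires.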
The mathematical definition of mixing is meant to capture the notion of physical mixing. A canonical example is the Cuba libre: suppose one is adding rum (the set A) to a glass of cola. After stirring the glass, the bottom half of the glass (the set B) will contain rum, and it will be in equal proportion as it is elsewhere in the glass. The mixing is uniform: no matter which region B one looks at, some of A will be in that region. A far more detailed, but still informal description of mixing can be found in the article on mixing (mathematics).
Every mixing transformation is ergodic, but there are ergodic transformations which are not mixing.
Physical mixing
The mixing of gases or liquids is a complex physical process, governed by a convective diffusion equation that may involve non-Fickian diffusion as in spinodal decomposition. The convective portion of the governing equation contains fluid motion terms that are governed by the Navier–Stokes equations. When fluid properties such as viscosity depend on composition, the governing equations may be coupled. There may also be temperature effects. It is not clear that fluid mixing processes are mixing in the mathematical sense.
Small rigid objects (such as rocks) are sometimes mixed in a rotating drum or tumbler. The 1969 Selective Service draft lottery was carried out by mixing plastic capsules which contained a slip of paper (marked with a day of the year).
See also
Miscibility
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
breaking a piece of glass
slicing cheese
A. Both are chemical changes.
B. Both are caused by heating.
C. Both are only physical changes.
D. Both are caused by cooling.
Answer:
|
sciq-8540
|
multiple_choice
|
Which phylum are all vertebrate organisms a member of?
|
[
"pylum protozoa",
"phylum arthropod",
"phylum hominid",
"phylum chordata"
] |
D
|
Relevant Documents:
Document 0:::
Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology.
Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago.
Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on advanced techniques, such as molecular phylogenetics.
Document 1:::
History of Animals (, Ton peri ta zoia historion, "Inquiries on Animals"; , "History of Animals") is one of the major texts on biology by the ancient Greek philosopher Aristotle, who had studied at Plato's Academy in Athens. It was written in the fourth century BC; Aristotle died in 322 BC.
Generally seen as a pioneering work of zoology, Aristotle frames his text by explaining that he is investigating the what (the existing facts about animals) prior to establishing the why (the causes of these characteristics). The book is thus an attempt to apply philosophy to part of the natural world. Throughout the work, Aristotle seeks to identify differences, both between individuals and between groups. A group is established when it is seen that all members have the same set of distinguishing features; for example, that all birds have feathers, wings, and beaks. This relationship between the birds and their features is recognized as a universal.
The History of Animals contains many accurate eye-witness observations, in particular of the marine biology around the island of Lesbos, such as that the octopus had colour-changing abilities and a sperm-transferring tentacle, that the young of a dogfish grow inside their mother's body, or that the male of a river catfish guards the eggs after the female has left. Some of these were long considered fanciful before being rediscovered in the nineteenth century. Aristotle has been accused of making errors, but some are due to misinterpretation of his text, and others may have been based on genuine observation. He did however make somewhat uncritical use of evidence from other people, such as travellers and beekeepers.
The History of Animals had a powerful influence on zoology for some two thousand years. It continued to be a primary source of knowledge until zoologists in the sixteenth century, such as Conrad Gessner, all influenced by Aristotle, wrote their own studies of the subject.
Context
Aristotle (384–322 BC) studied at Plat
Document 2:::
Endoparasites
Protozoan organisms
Helminths (worms)
Helminth organisms (also called helminths or intestinal worms) include:
Tapeworms
Flukes
Roundworms
Other organisms
Ectoparasites
Document 3:::
Euphylliidae (Greek eu-, true; Greek phyllon, leaf) are known as a family of polyped stony corals under the order Scleractinia.
This family consists of multiple genera and various species found on the ocean floor. These corals may be sparse or conspicuous in the wild. However, they are commonly kept in home aquariums, enjoyed for their beauty by their owners and used for protection by many fish.
Classification
Marine organisms are studied and classified just as any other member of the animal kingdom. However, marine taxa are observed, and therefore classified, differently than reptiles or mammals would be. When any marine animal is classified, a set of main characteristics is observed and used to differentiate between phylum, class (potentially subclass), order, family, and species. The key characteristics that scientists look for are categorized by body type (symmetry, presence of segments, limbs, head or tail), reproduction, and digestion.
As of the year 2000, the order Scleractinia was divided into 18 artificial families, known as the Acroporidae, Astrocoeniidae, Pocilloporidae, Euphyllidae, Oculinidae, Meandrinidae, Siderastreidae, Agariciidae, Fungiidae, Rhizangiidae, Pectiniidae, Merulinidae, Dendrophylliidae, Caryophylliidae, Mussidae, Faviidae, Trachyphylliidae, and Poritidae (sensu Veron 2000). At this time, only 11 families were known to contain corals that could be classified as truly reef-building. All scleractinian families considered here are zooxanthellate (containing photo-endosymbiotic zooxanthellae). However, as of 2022 more than 30 families are recognized under the order Scleractinia (according to the World Register of Marine Species), and 845 coral species are known to be reef-building.
Among the countless organisms in the kingdom Animalia, the families of coral remain a unique group. Although they are stationary and stony structures, they belong to the same phylum (Cnidaria)
Document 4:::
Soft-bodied organisms are animals that lack skeletons. The group roughly corresponds to the group Vermes as proposed by Carl von Linné. All animals have muscles but, since muscles can only pull, never push, a number of animals have developed hard parts that the muscles can pull on, commonly called skeletons. Such skeletons may be internal, as in vertebrates, or external, as in arthropods. However, many animal groups do very well without hard parts. These include animals such as earthworms, jellyfish, tapeworms, squids and an enormous variety of animals from almost every part of the kingdom Animalia.
Commonality
Most soft-bodied animals are small, but they do make up the majority of the animal biomass. If we were to weigh up all animals on Earth with hard parts against soft-bodied ones, estimates indicate that the biomass of soft-bodied animals would be at least twice that of animals with hard parts, quite possibly much larger. Particularly the roundworms are extremely numerous. The nematodologist Nathan Cobb described the ubiquitous presence of nematodes on Earth as follows:
"In short, if all the matter in the universe except the nematodes were swept away, our world would still be dimly recognizable, and if, as disembodied spirits, we could then investigate it, we should find its mountains, hills, vales, rivers, lakes, and oceans represented by a film of nematodes. The location of towns would be decipherable, since for every massing of human beings there would be a corresponding massing of certain nematodes. Trees would still stand in ghostly rows representing our streets and highways. The location of the various plants and animals would still be decipherable, and, had we sufficient knowledge, in many cases even their species could be determined by an examination of their erstwhile nematode parasites."
Anatomy
Not being a true phylogenetic group, soft-bodied organisms vary enormously in anatomy. Cnidarians and flatworms have a single opening to the gut and a d
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which phylum are all vertebrate organisms a member of?
A. phylum protozoa
B. phylum arthropod
C. phylum hominid
D. phylum chordata
Answer:
|
|
sciq-5248
|
multiple_choice
|
Amplification that occurs in which cells often requires signal transduction pathways involving second messengers?
|
[
"catayst cells",
"sensory receptor",
"axons",
"optic nerves"
] |
B
|
Relevant Documents:
Document 0:::
In physiology, transduction is the translation of arriving stimulus into an action potential by a sensory receptor. It begins when stimulus changes the membrane potential of a receptor cell.
A receptor cell converts the energy in a stimulus into an electrical signal. Receptors are broadly split into two main categories: exteroceptors, which receive external sensory stimuli, and interoceptors, which receive internal sensory stimuli.
Transduction and the senses
The visual system
In the visual system, sensory cells called rod and cone cells in the retina convert the physical energy of light signals into electrical impulses that travel to the brain. The light causes a conformational change in a protein called rhodopsin. This conformational change sets in motion a series of molecular events that result in a reduction of the electrochemical gradient of the photoreceptor. The decrease in the electrochemical gradient causes a reduction in the electrical signals going to the brain. Thus, in this example, more light hitting the photoreceptor results in the transduction of a signal into fewer electrical impulses, effectively communicating that stimulus to the brain. In rods, this change in neurotransmitter release is mediated through a second messenger system; because of this, a change in light intensity causes the response of the rods to be much slower than expected for a process associated with the nervous system.
The auditory system
In the auditory system, sound vibrations (mechanical energy) are transduced into electrical energy by hair cells in the inner ear. Sound vibrations from an object cause vibrations in air molecules, which in turn, vibrate the ear drum. The movement of the eardrum causes the bones of the middle ear (the ossicles) to vibrate. These vibrations then pass into the cochlea, the organ of hearing. Within the cochlea, the hair cells on the sensory epithelium of the organ of Corti bend and cause movement
Document 1:::
In biology, cell signaling (cell signalling in British English) or cell communication is the ability of a cell to receive, process, and transmit signals with its environment and with itself. Cell signaling is a fundamental property of all cellular life in prokaryotes and eukaryotes. Signals that originate from outside a cell (or extracellular signals) can be physical agents like mechanical pressure, voltage, temperature, light, or chemical signals (e.g., small molecules, peptides, or gas). Cell signaling can occur over short or long distances, and as a result can be classified as autocrine, juxtacrine, intracrine, paracrine, or endocrine. Signaling molecules can be synthesized from various biosynthetic pathways and released through passive or active transports, or even from cell damage.
Receptors play a key role in cell signaling as they are able to detect chemical signals or physical stimuli. Receptors are generally proteins located on the cell surface or within the interior of the cell such as the cytoplasm, organelles, and nucleus. Cell surface receptors usually bind with extracellular signals (or ligands), which causes a conformational change in the receptor that leads it to initiate enzymic activity, or to open or close ion channel activity. Some receptors do not contain enzymatic or channel-like domains but are instead linked to enzymes or transporters. Other intracellular receptors like nuclear receptors have a different mechanism such as changing their DNA binding properties and cellular localization to the nucleus.
Signal transduction begins with the transformation (or transduction) of a signal into a chemical one, which can directly activate an ion channel (ligand-gated ion channel) or initiate a second messenger system cascade that propagates the signal through the cell. Second messenger systems can amplify a signal, in which activation of a few receptors results in multiple secondary messengers being activated, thereby amplifying the initial sig
Document 2:::
Catherina Gwynne Becker (née Krüger) is an Alexander von Humboldt Professor at TU Dresden, and was formerly Professor of Neural Development and Regeneration at the University of Edinburgh.
Early life and education
Catherina Becker was born in Marburg, Germany, in 1964. She was educated in Bremen before going on to study at the University of Bremen, where she obtained an MSc in Biology and her PhD (Dr. rer. nat.) in 1993, investigating visual system development and regeneration in frogs and salamanders under the supervision of Gerhard Roth. She then trained as a postdoctoral researcher at the Swiss Federal Institute of Technology in Zürich (Dept. Dev Cell Biol), funded by an EMBO long-term fellowship; at the University of California, Irvine, in the USA; and at the Centre for Molecular Neurobiology Hamburg (ZMNH), Germany, where she took a position as group leader in 2000 and finished her 'Habilitation' in neurobiology in 2012.
Career
Becker joined the University of Edinburgh in 2005 as a senior lecturer and was appointed to a personal chair in neural development and regeneration in 2013. She was also Director of Postgraduate Training at the Centre for Neuroregeneration until 2015, and then centre director until 2017. In 2021 she received an Alexander von Humboldt Professorship and joined the Technical University of Dresden.
Research
Becker's research focuses on a better understanding of the factors governing the generation of neurons and axonal pathfinding in the CNS during development and regeneration using the zebrafish model to identify fundamental mechanisms in vertebrates with clear translational implications for CNS injury and neurodegenerative diseases.
The Becker group established the zebrafish as a model for spinal cord regeneration.
Their research found that functional regeneration is near perfect, but anatomical repair does not fully recreate the previous network; instead, new neurons are generated and extensive rewiring occurs.
They have identified neurotra
Document 3:::
The adequate stimulus is a property of a sensory receptor that determines the type of energy to which the receptor responds with the initiation of sensory transduction. Sensory receptors are specialized to respond to certain types of stimuli. The adequate stimulus is the amount and type of energy required to stimulate a specific sensory organ.
Many of the sensory stimuli are categorized by the mechanics by which they function and by their purpose. Sensory receptors within the body are typically made to respond to a single stimulus. They are present throughout the body, and a certain amount of a stimulus is required to trigger each of them. These receptors allow the brain to interpret signals from the body, so that a person can respond to a stimulus if it reaches the minimum threshold needed to signal the brain. The sensory receptors activate the sensory transduction system, which in turn sends an electrical or chemical signal to a cell; the cell then responds with electrical signals to the brain, produced from action potentials. The minuscule signals that result from the stimuli and enter the cells must be amplified and turned into a signal sufficient to be sent to the brain.
A sensory receptor's adequate stimulus is determined by the signal transduction mechanisms and ion channels incorporated in the sensory receptor's plasma membrane. Adequate stimuli are often discussed in relation to sensory thresholds and absolute thresholds, to describe the smallest amount of a stimulus needed to activate a feeling within the sensory organ.
Categorizations of receptors
Receptors are categorized by the stimuli to which they respond. Adequate stimuli are also often categorized based on their purpose and location within the body. The following are the categorizations of receptors within the body:
Visual – These are found in the visual organs of species and are respon
Document 4:::
Chemogenetics is the process by which macromolecules can be engineered to interact with previously unrecognized small molecules. Chemogenetics as a term was originally coined to describe the observed effects of mutations on chalcone isomerase activity on substrate specificities in the flowers of Dianthus caryophyllus. This method is very similar to optogenetics; however, it uses chemically engineered molecules and ligands instead of light and light-sensitive channels known as opsins.
In recent research projects, chemogenetics has been widely used to understand the relationship between brain activity and behavior. Prior to chemogenetics, researchers used methods such as transcranial magnetic stimulation and deep brain stimulation to study the relationship between neuronal activity and behavior.
Comparison to optogenetics
Optogenetics and chemogenetics are the more recent and popular methods used to study this relationship. Both of these methods target specific brain circuits and cell populations to influence cell activity. However, they use different procedures to accomplish this task. Optogenetics uses light-sensitive channels and pumps that are virally introduced into neurons. The activity of cells bearing these channels can then be manipulated with light. Chemogenetics, on the other hand, uses chemically engineered receptors and exogenous molecules specific for those receptors, to affect the activity of those cells. The engineered macromolecules used to design these receptors include nucleic acid hybrids, kinases, a variety of metabolic enzymes, and G-protein coupled receptors such as DREADDs.
DREADDs are the most common G protein–coupled receptors used in chemogenetics. These receptors are activated solely by the drug of interest (an otherwise inert molecule) and influence physiological and neural processes that take place within and outside of the central nervous system.
Chemogenetics has recently been favored over optogenetics, and it avoids some of the challenges of optogenetic
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Amplification that occurs in which cells often requires signal transduction pathways involving second messengers?
A. catalyst cells
B. sensory receptor
C. axons
D. optic nerves
Answer:
|
|
sciq-6394
|
multiple_choice
|
What do you call the high and low points of transverse waves?
|
[
"bands and troughs",
"waves and troughs",
"crests and troughs",
"echos and troughs"
] |
C
|
Relevant Documents:
Document 0:::
A crest point on a wave is the maximum value of upward displacement within a cycle. A crest is a point on a surface wave where the displacement of the medium is at a maximum. A trough is the opposite of a crest, so the minimum or lowest point in a cycle.
When the crests and troughs of two sine waves of equal amplitude and frequency intersect or collide, while being in phase with each other, the result is called constructive interference and the magnitudes double (above and below the line). When in antiphase – 180° out of phase – the result is destructive interference: the resulting wave is the undisturbed line having zero amplitude.
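A quick numerical check of the statement above, superposing two equal-amplitude sine waves in phase and in antiphase; the amplitude and frequency values are arbitrary choices for illustration.

import math

amplitude, freq = 1.0, 2.0            # arbitrary illustrative values
ts = [i / 100 for i in range(101)]    # one-second sample grid

def wave(t, phase=0.0):
    return amplitude * math.sin(2 * math.pi * freq * t + phase)

in_phase  = [wave(t) + wave(t) for t in ts]            # in phase: constructive
antiphase = [wave(t) + wave(t, math.pi) for t in ts]   # 180° out of phase: destructive

print("constructive peak:", round(max(in_phase), 3))                   # ≈ 2, doubled amplitude
print("destructive peak: ", round(max(abs(v) for v in antiphase), 3))  # ≈ 0, full cancellation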
See also
Crest factor
Superposition principle
Wave
Document 1:::
In physics, a transverse wave is a wave that oscillates perpendicularly to the direction of the wave's advance. In contrast, a longitudinal wave travels in the direction of its oscillations. All waves move energy from place to place without transporting the matter in the transmission medium if there is one. Electromagnetic waves are transverse without requiring a medium. The designation “transverse” indicates the direction of the wave is perpendicular to the displacement of the particles of the medium through which it passes, or in the case of EM waves, the oscillation is perpendicular to the direction of the wave.
A simple example is given by the waves that can be created on a horizontal length of string by anchoring one end and moving the other end up and down. Another example is the waves that are created on the membrane of a drum. The waves propagate in directions that are parallel to the membrane plane, but each point in the membrane itself gets displaced up and down, perpendicular to that plane. Light is another example of a transverse wave, where the oscillations are the electric and magnetic fields, which point at right angles to the ideal light rays that describe the direction of propagation.
Transverse waves commonly occur in elastic solids due to the shear stress generated; the oscillations in this case are the displacement of the solid particles away from their relaxed position, in directions perpendicular to the propagation of the wave. These displacements correspond to a local shear deformation of the material. Hence a transverse wave of this nature is called a shear wave. Since fluids cannot resist shear forces while at rest, propagation of transverse waves inside the bulk of fluids is not possible. In seismology, shear waves are also called secondary waves or S-waves.
Transverse waves are contrasted with longitudinal waves, where the oscillations occur in the direction of the wave. The standard example of a longitudinal wave is a sound wave or "
Document 2:::
A wavenumber–frequency diagram is a plot displaying the relationship between the wavenumber (spatial frequency) and the frequency (temporal frequency) of certain phenomena. Usually frequencies are placed on the vertical axis, while wavenumbers are placed on the horizontal axis.
In the atmospheric sciences, these plots are a common way to visualize atmospheric waves.
In the geosciences, especially seismic data analysis, these plots also called f–k plot, in which energy density within a given time interval is contoured on a frequency-versus-wavenumber basis. They are used to examine the direction and apparent velocity of seismic waves and in velocity filter design.
Origins
In general, the relationship between wavelength λ, frequency f, and the phase velocity v_p of a sinusoidal wave is:

v_p = λ f

Using the wavenumber (k = 2π/λ) and angular frequency (ω = 2πf) notation, the previous equation can be rewritten as:

ω = v_p k

On the other hand, the group velocity is equal to the slope of the wavenumber–frequency diagram:

v_g = dω/dk
Analyzing such relationships in detail often yields information on the physical properties of the medium, such as density, composition, etc.
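For example, taking the standard deep-water gravity-wave dispersion relation ω = √(gk) (a textbook example assumed here, not one given in the excerpt), a short sketch can read both velocities off the diagram numerically and confirm the well-known result that the group velocity is half the phase velocity.

import math

g = 9.81  # gravitational acceleration, m/s^2

def omega(k):
    # Deep-water gravity-wave dispersion relation (assumed textbook example).
    return math.sqrt(g * k)

def phase_velocity(k):
    return omega(k) / k

def group_velocity(k, dk=1e-6):
    # Slope of the wavenumber-frequency diagram, estimated numerically.
    return (omega(k + dk) - omega(k - dk)) / (2 * dk)

k = 2 * math.pi / 100.0  # wavenumber for a 100 m wavelength
print(f"phase velocity: {phase_velocity(k):.2f} m/s")
print(f"group velocity: {group_velocity(k):.2f} m/s  (half the phase velocity)")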
See also
Dispersion relation
Document 3:::
In physics, a mechanical wave is a wave that is an oscillation of matter, and therefore transfers energy through a medium. While waves can move over long distances, the movement of the medium of transmission—the material—is limited. Therefore, the oscillating material does not move far from its initial equilibrium position. Mechanical waves can be produced only in media which possess elasticity and inertia. There are three types of mechanical waves: transverse waves, longitudinal waves, and surface waves. Some of the most common examples of mechanical waves are water waves, sound waves, and seismic waves.
Like all waves, mechanical waves transport energy. This energy propagates in the same direction as the wave. A wave requires an initial energy input; once this initial energy is added, the wave travels through the medium until all its energy is transferred. In contrast, electromagnetic waves require no medium, but can still travel through one.
Transverse wave
A transverse wave is the form of a wave in which particles of medium vibrate about their mean position perpendicular to the direction of the motion of the wave.
To see an example, move an end of a Slinky (whose other end is fixed) to the left-and-right of the Slinky, as opposed to to-and-fro. Light also has properties of a transverse wave, although it is an electromagnetic wave.
Longitudinal wave
Longitudinal waves cause the medium to vibrate parallel to the direction of the wave. They consist of multiple compressions and rarefactions. The rarefaction is the region where the particles of the medium are farthest apart, and the compression is the region where they are closest together. The speed of a longitudinal wave is increased in a medium with a higher index of refraction, due to the closer proximity of the atoms in the medium being compressed. Sound is a longitudinal wave.
Surface waves
This type of wave travels along the surface or interface between two media. An example of a surface wave would be waves in a pool, or in an ocean
Document 4:::
In fluid dynamics, the wave height of a surface wave is the difference between the elevations of a crest and a neighboring trough. Wave height is a term used by mariners, as well as in coastal, ocean and naval engineering.
At sea, the term significant wave height is used as a means to introduce a well-defined and standardized statistic to denote the characteristic height of the random waves in a sea state, including wind sea and swell. It is defined in such a way that it more or less corresponds to what a mariner observes when estimating visually the average wave height.
Definitions
Depending on context, wave height may be defined in different ways:
For a sine wave, the wave height H is twice the amplitude a (i.e., the peak-to-peak amplitude): H = 2a.
For a periodic wave, it is simply the difference between the maximum and minimum of the surface elevation η: H = max{η(x − c_p t)} − min{η(x − c_p t)}, with c_p the phase speed (or propagation speed) of the wave. The sine wave is a specific case of a periodic wave.
In random waves at sea, when the surface elevations are measured with a wave buoy, the individual wave height Hm of each individual wave—with an integer label m, running from 1 to N, to denote its position in a sequence of N waves—is the difference in elevation between a wave crest and trough in that wave. For this to be possible, it is necessary to first split the measured time series of the surface elevation into individual waves. Commonly, an individual wave is denoted as the time interval between two successive downward-crossings through the average surface elevation (upward crossings might also be used). Then the individual wave height of each wave is again the difference between maximum and minimum elevation in the time interval of the wave under consideration.
Significant wave height
RMS wave height
Another wave-height statistic in common usage is the root-mean-square (or RMS) wave height H_rms, defined as:

H_rms = √( (1/N) Σ H_m² ),  summing over m = 1 … N,

with H_m again denoting the individual wave heights in a certain time series.
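The zero-crossing procedure described above translates directly into code. The sketch below builds a synthetic two-component elevation record (invented for illustration), splits it into individual waves at downward crossings of the mean level, and computes H_rms from the resulting crest-to-trough heights.

import math

# Synthetic surface-elevation record from two sine components (illustrative only).
dt = 0.1
eta = [1.0 * math.sin(0.5 * i * dt) + 0.4 * math.sin(1.3 * i * dt + 1.0)
       for i in range(2000)]

# Split the record into individual waves at downward crossings of the mean level.
mean = sum(eta) / len(eta)
crossings = [i for i in range(1, len(eta)) if eta[i - 1] >= mean > eta[i]]

heights = [max(eta[a:b]) - min(eta[a:b])      # crest minus trough for each wave
           for a, b in zip(crossings, crossings[1:])]

h_rms = math.sqrt(sum(h * h for h in heights) / len(heights))
print(f"{len(heights)} individual waves, H_rms = {h_rms:.3f}")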
See also
Se
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do you call the high and low points of transverse waves?
A. bands and troughs
B. waves and troughs
C. crests and troughs
D. echoes and troughs
Answer:
|
|
sciq-10746
|
multiple_choice
|
What does sexual reproduction with gametes and fertilization produce?
|
[
"haploid zygote",
"identical twins",
"diploid sporophyte",
"sister chromatids"
] |
C
|
Relevant Documents:
Document 0:::
Sexual reproduction is a type of reproduction that involves a complex life cycle in which a gamete (haploid reproductive cells, such as a sperm or egg cell) with a single set of chromosomes combines with another gamete to produce a zygote that develops into an organism composed of cells with two sets of chromosomes (diploid). This is typical in animals, though the number of chromosome sets and how that number changes in sexual reproduction varies, especially among plants, fungi, and other eukaryotes.
Sexual reproduction is the most common life cycle in multicellular eukaryotes, such as animals, fungi and plants. Sexual reproduction also occurs in some unicellular eukaryotes. Sexual reproduction does not occur in prokaryotes, unicellular organisms without cell nuclei, such as bacteria and archaea. However, some processes in bacteria, including bacterial conjugation, transformation and transduction, may be considered analogous to sexual reproduction in that they incorporate new genetic information. Some proteins and other features that are key for sexual reproduction may have arisen in bacteria, but sexual reproduction is believed to have developed in an ancient eukaryotic ancestor.
In eukaryotes, diploid precursor cells divide to produce haploid cells in a process called meiosis. In meiosis, DNA is replicated to produce a total of four copies of each chromosome. This is followed by two cell divisions to generate haploid gametes. After the DNA is replicated in meiosis, the homologous chromosomes pair up so that their DNA sequences are aligned with each other. During this period before cell divisions, genetic information is exchanged between homologous chromosomes in genetic recombination. Homologous chromosomes contain highly similar but not identical information, and by exchanging similar but not identical regions, genetic recombination increases genetic diversity among future generations.
During sexual reproduction, two haploid gametes combine into one diploid ce
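A minimal bookkeeping sketch of the ploidy arithmetic described in this excerpt; the function names and the human-like diploid (2n) starting point are illustrative assumptions.

def meiosis(ploidy):
    # Meiosis halves the number of chromosome sets: diploid (2n) -> haploid (1n).
    assert ploidy % 2 == 0, "meiosis needs an even number of chromosome sets"
    return ploidy // 2

def fertilization(gamete_a, gamete_b):
    # Fertilization combines the chromosome sets of two gametes into a zygote.
    return gamete_a + gamete_b

parent = 2                      # diploid precursor cell (2n)
sperm = meiosis(parent)         # 1n
egg = meiosis(parent)           # 1n
zygote = fertilization(sperm, egg)
print(f"gametes {sperm}n + {egg}n -> zygote {zygote}n (diploid restored)")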
Document 1:::
Gametogenesis is a biological process by which diploid or haploid precursor cells undergo cell division and differentiation to form mature haploid gametes. Depending on the biological life cycle of the organism, gametogenesis occurs by meiotic division of diploid gametocytes into various gametes, or by mitosis. For example, plants produce gametes through mitosis in gametophytes. The gametophytes grow from haploid spores after sporic meiosis. The existence of a multicellular, haploid phase in the life cycle between meiosis and gametogenesis is also referred to as alternation of generations.
Gametogenesis is thus the biological process in which haploid or diploid precursor cells divide to create mature haploid gametes. Depending on an organism's biological life cycle, it can take place either through mitosis or through meiotic division of diploid gametocytes into gametes. For instance, gametophytes in plants undergo mitosis to produce gametes. Gametogenesis takes different forms in males and females.
In animals
Animals produce gametes directly through meiosis from diploid mother cells in organs called gonads (testes in males and ovaries in females). In mammalian germ cell development, primordial germ cells differentiate from pluripotent cells during early mammalian development and later give rise to sexually dimorphic gametes. Males and females of a species that reproduce sexually have different forms of gametogenesis:
spermatogenesis (male): Immature germ cells are produced in a male's testes. To mature into sperm, these immature germ cells, or spermatogonia, go through spermatogenesis during adolescence. Spermatogonia are diploid cells that become larger as they divide through mitosis, becoming primary spermatocytes. These diploid cells undergo a first meiotic division to create secondary spermatocytes, which undergo a second meiotic division to produce immature sperm, or spermatids. These spermatids undergo spermiogenesis in order to develop into sperm. The process is regulated by the hormones LH, FSH, and GnRH.
Document 2:::
Microgametogenesis is the process in plant reproduction where a microgametophyte develops in a pollen grain to the three-celled stage of its development. In flowering plants it occurs with a microspore mother cell inside the anther of the plant.
When the microgametophyte is first formed inside the pollen grain, four sets of fertile cells called sporogenous cells are apparent. These cells are surrounded by a wall of sterile cells called the tapetum, which supplies food to the cell and eventually becomes the cell wall for the pollen grain. These sets of sporogenous cells eventually develop into diploid microspore mother cells. These microspore mother cells, also called microsporocytes, then undergo meiosis and become four haploid microspore cells. These new microspore cells then undergo mitosis and form a tube cell and a generative cell. The generative cell then undergoes mitosis one more time to form two male gametes, also called sperm.
See also
Gametogenesis
Document 3:::
Male (symbol: ♂) is the sex of an organism that produces the gamete (sex cell) known as sperm, which fuses with the larger female gamete, or ovum, in the process of fertilization.
A male organism cannot reproduce sexually without access to at least one ovum from a female, but some organisms can reproduce both sexually and asexually. Most male mammals, including male humans, have a Y chromosome, which codes for the production of larger amounts of testosterone to develop male reproductive organs.
In humans, the word male can also be used to refer to gender, in the social sense of gender role or gender identity. The use of "male" in regard to sex and gender has been subject to discussion.
Overview
The existence of separate sexes has evolved independently at different times and in different lineages, an example of convergent evolution. The repeated pattern is a progression from sexual reproduction in isogamous species, with two or more mating types whose gametes are identical in form and behavior (but different at the molecular level), to anisogamous species with gametes of male and female types, to oogamous species in which the female gamete is very much larger than the male's and has no ability to move. There is a good argument that this pattern was driven by the physical constraints on the mechanisms by which two gametes get together, as required for sexual reproduction.
Accordingly, sex is defined across species by the type of gametes produced (i.e.: spermatozoa vs. ova) and differences between males and females in one lineage are not always predictive of differences in another.
Male/female dimorphism between organisms or reproductive organs of different sexes is not limited to animals; male gametes are produced by chytrids, diatoms and land plants, among others. In land plants, female and male designate not only the female and male gamete-producing organisms and structures but also the structures of the sporophytes that give rise to male and female plants.
Evolution
The evolution of ani
Document 4:::
In biology and genetics, the germline is the population of a multicellular organism's cells that pass on their genetic material to the progeny (offspring). In other words, they are the cells that form the egg, sperm and the fertilised egg. They are usually differentiated to perform this function and segregated in a specific place away from other bodily cells.
As a rule, this passing-on happens via a process of sexual reproduction; typically it is a process that includes systematic changes to the genetic material, changes that arise during recombination, meiosis and fertilization for example. However, there are many exceptions across multicellular organisms, including processes and concepts such as various forms of apomixis, autogamy, automixis, cloning or parthenogenesis. The cells of the germline are called germ cells. For example, gametes such as a sperm and an egg are germ cells. So are the cells that divide to produce gametes, called gametocytes, the cells that produce those, called gametogonia, and all the way back to the zygote, the cell from which an individual develops.
In sexually reproducing organisms, cells that are not in the germline are called somatic cells. According to this view, mutations, recombinations and other genetic changes in the germline may be passed to offspring, but a change in a somatic cell will not be. This need not apply to somatically reproducing organisms, such as some Porifera and many plants. For example, many varieties of citrus, plants in the Rosaceae and some in the Asteraceae, such as Taraxacum, produce seeds apomictically when somatic diploid cells displace the ovule or early embryo.
In an earlier stage of genetic thinking, there was a clear distinction between germline and somatic cells. For example, August Weismann proposed and pointed out, a germline cell is immortal in the sense that it is part of a lineage that has reproduced indefinitely since the beginning of life and, barring accident, could continue doing so indef
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What does sexual reproduction with gametes and fertilization produce?
A. haploid zygote
B. identical twins
C. diploid sporophyte
D. sister chromatids
Answer:
|
|
sciq-2436
|
multiple_choice
|
What results from the evaporation of sea water?
|
[
"oxygen",
"reef",
"carbon-di-oxide",
"salt"
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
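For this example the concept, not the computation, decides the answer; the following two-line sketch (added here for illustration, not part of the original question) also shows why the intended answer depends on how the expansion is carried out:

```latex
% Reversible adiabatic expansion of an ideal gas (quasi-static, no heat exchange):
T V^{\gamma - 1} = \text{const}, \qquad \gamma > 1
\;\Rightarrow\; V \text{ increases} \implies T \text{ decreases}.
% Free (Joule) expansion into vacuum is also adiabatic, but no work is done, so
\Delta U = 0 \;\Rightarrow\; \Delta T = 0 \text{ for an ideal gas}.
```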
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Marine technology is defined by WEGEMT (a European association of 40 universities in 17 countries) as "technologies for the safe use, exploitation, protection of, and intervention in, the marine environment." In this regard, according to WEGEMT, the technologies involved in marine technology are the following: naval architecture, marine engineering, ship design, ship building and ship operations; oil and gas exploration, exploitation, and production; hydrodynamics, navigation, sea surface and sub-surface support, underwater technology and engineering; marine resources (including both renewable and non-renewable marine resources); transport logistics and economics; inland, coastal, short sea and deep sea shipping; protection of the marine environment; leisure and safety.
Education and training
According to the Cape Fear Community College of Wilmington, North Carolina, the curriculum for a marine technology program provides practical skills and academic background that are essential in succeeding in the area of marine scientific support. Through a marine technology program, students aspiring to become marine technologists will become proficient in the knowledge and skills required of scientific support technicians.
The educational preparation includes classroom instructions and practical training aboard ships, such as how to use and maintain electronic navigation devices, physical and chemical measuring instruments, sampling devices, and data acquisition and reduction systems aboard ocean-going and smaller vessels, among other advanced equipment.
As far as marine technician programs are concerned, students learn hands-on to troubleshoot, service and repair four- and two-stroke outboards, stern drives, rigging, fuel and lube systems, and electrical systems, including diesel engines.
Relationship to commerce
Marine technology is related to the marine science and technology industry, also known as maritime commerce. The Executive Office of Housing and Economic Development (EOHED
Document 2:::
The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas.
The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014.
The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools.
See also
Marine Science
Ministry of Fisheries and Aquatic Resources Development
Document 3:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: each such collection can be learned without first mastering any skills outside it. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
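Although the excerpt breaks off above, the model structure it describes is easy to make concrete. Here is a minimal sketch, checking the usual knowledge-space axioms (the family contains the empty state and the full domain Q, and is closed under union) on an invented two-skill domain; the domain and states are illustrative assumptions, not from the source:

```python
from itertools import combinations

def is_knowledge_space(domain, states):
    """Check the standard knowledge-space axioms on a family of states.

    `domain` is a finite set of skills Q; `states` is a collection of
    subsets of Q, each a feasible knowledge state. A knowledge space must
    contain the empty state and Q itself, and be closed under union.
    """
    states = {frozenset(s) for s in states}
    if frozenset() not in states or frozenset(domain) not in states:
        return False
    # Union-closure: the union of any two feasible states is feasible.
    return all(a | b in states for a, b in combinations(states, 2))

# Invented example: skill "b" presupposes skill "a".
Q = {"a", "b"}
space = [set(), {"a"}, {"a", "b"}]
print(is_knowledge_space(Q, space))  # True: {"b"} alone is not feasible
```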
Document 4:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam with no computer-based version. ETS administered it three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required this exam score as a part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. The biochemistry subject test contained 180 questions.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores were 760 (corresponding to the 99th percentile) and 320 (the 1st percentile), respectively. The mean score for all test takers from July 2009 to July 2012 was 526, with a standard deviation of 95.
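As a rough plausibility check on those figures (treating the scaled scores as approximately normal, an assumption on my part that the actual score equating need not satisfy), the quoted mean and standard deviation do place 760 near the top of the distribution:

```python
from statistics import NormalDist

# Reported scaling statistics for July 2009 - July 2012 test takers.
mean, sd = 526, 95
dist = NormalDist(mu=mean, sigma=sd)

# Under the (assumed) normal approximation, 760 sits around the 99th
# percentile and 320 near the 1st, consistent with the text above.
print(round(dist.cdf(760) * 100, 1))  # ~99.3
print(round(dist.cdf(320) * 100, 1))  # ~1.5
```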
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS decided not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What results from the evaporation of sea water?
A. oxygen
B. reef
C. carbon-di-oxide
D. salt
Answer:
|
|
ai2_arc-397
|
multiple_choice
|
The Moon orbits Earth at a speed of approximately one kilometer per second. The Moon is kept in orbit by which of the following?
|
[
"gravity",
"lunar phases",
"magnetism",
"ocean tides"
] |
A
|
Relevant Documents:
Document 0:::
Cassini's laws provide a compact description of the motion of the Moon. They were established in 1693 by Giovanni Domenico Cassini, a prominent scientist of his time.
Refinements of these laws to include physical librations have been made, and they have been generalized to treat other satellites and planets.
Cassini's laws
The Moon has a 1:1 spin–orbit resonance. This means that the rotation–orbit ratio of the Moon is such that the same side of it always faces the Earth.
The Moon's rotational axis maintains a constant angle of inclination from the ecliptic plane. The Moon's rotational axis precesses so as to trace out a cone that intersects the ecliptic plane as a circle.
A plane formed from a normal to the ecliptic plane and a normal to the Moon's orbital plane will contain the Moon's rotational axis.
In the case of the Moon, its rotational axis always points some 1.5 degrees away from the North ecliptic pole. The normal to the Moon's orbital plane and its rotational axis are always on opposite sides of the normal to the ecliptic.
Therefore, both the normal to the orbital plane and the Moon's rotational axis precess around the ecliptic pole with the same period. The period is about 18.6 years and the motion is retrograde.
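To make the shared 18.6-year retrograde precession concrete, here is a minimal sketch (period rounded as in the text; the starting longitude is an arbitrary assumption) of how the ascending node's ecliptic longitude drifts:

```python
# Retrograde precession of the lunar node and spin axis: both circle the
# ecliptic pole once every ~18.6 years, opposite to the orbital motion.
PERIOD_YEARS = 18.6

def node_longitude(t_years, omega0_deg=0.0):
    """Ecliptic longitude (degrees) of the ascending node after t years."""
    rate_deg_per_year = -360.0 / PERIOD_YEARS  # negative sign: retrograde
    return (omega0_deg + rate_deg_per_year * t_years) % 360.0

for t in (0.0, 4.65, 9.3, 13.95):
    print(f"t = {t:5.2f} yr -> node at {node_longitude(t):5.1f} deg")
# After a full 18.6-year period the node is back where it started.
```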
Cassini state
A system obeying these laws is said to be in a Cassini state, that is: an evolved rotational state where the spin axis, orbit normal, and normal to the Laplace plane are coplanar while the obliquity remains constant. The Laplace plane is defined as the plane about which a planet or satellite orbit precesses with constant inclination. The normal to the Laplace plane for a moon is between the planet's spin axis and the planet's orbit normal, being closer to the latter if the moon is distant from the planet. If a planet itself is in a Cassini state, the Laplace plane is the invariable plane of the stellar system.
Cassini state 1 is defined as the situation in which both the spin axis and the orbit normal axis are on the same
Document 1:::
In astronomy, the variation of the Moon is one of the principal perturbations in the motion of the Moon.
Discovery
The variation was discovered by Tycho Brahe, who noticed that, starting from a lunar eclipse in December 1590, at the times of syzygy (new or full moon), the apparent velocity of motion of the Moon (along its orbit as seen against the background of stars) was faster than expected. On the other hand, at the times of first and last quarter, its velocity was correspondingly slower than expected. (Those expectations were based on the lunar tables widely used up to Tycho's time. They took some account of the two largest irregularities in the Moon's motion, i.e. those now known as the equation of the center and the evection, see also Lunar theory - History.)
Variation
The main visible effect (in longitude) of the variation of the Moon is that during the course of every month, at the octants of the Moon's phase that follow the syzygies (i.e. halfway between the new or the full moon and the next-following quarter), the Moon is about two thirds of a degree farther ahead than would be expected on the basis of its mean motion (as modified by the equation of the centre and by the evection). But at the octants that precede the syzygies, it is about two thirds of a degree behind. At the syzygies and quarters themselves, the main effect is on the Moon's velocity rather than its position.
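In classical lunar theory this term is usually written as a sinusoid in twice the Moon's mean elongation D from the Sun; the conventional coefficient (a textbook value, not taken from the excerpt above) matches the "two thirds of a degree" quoted, since 39.5′ ≈ 0.66°:

```latex
% Principal term of the Variation in ecliptic longitude (classical value):
\Delta\lambda \;\approx\; 39.5' \,\sin 2D,
\qquad D \equiv \lambda_{\mathrm{Moon}} - \lambda_{\mathrm{Sun}},
% maximal at the octants (2D = \pm 90^\circ), zero at syzygies and quadratures.
```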
In 1687 Newton published, in the 'Principia', his first steps in the gravitational analysis of the motion of three mutually-attracting bodies. This included a proof that the Variation is one of the results of the perturbation of the motion of the Moon caused by the action of the Sun, and that one of the effects is to distort the Moon's orbit in a practically elliptical manner (ignoring at this point the eccentricity of the Moon's orbit), with the centre of the ellipse occupied by the Earth, and the major axis perpendicular to a line drawn between the Earth and Sun.
The Variat
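Newton's gravitational account can be checked with one line of arithmetic: Earth's gravity at lunar distance should equal the centripetal acceleration implied by the Moon's roughly one-kilometre-per-second orbital speed. A minimal sketch with standard textbook constants (none of these values appear in the excerpts here):

```python
# Newton's "Moon test": does Earth's gravity at lunar distance match the
# centripetal acceleration required to keep the Moon on its orbit?
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # Earth's mass, kg
R_ORBIT = 3.844e8      # mean Earth-Moon distance, m
V_MOON = 1.022e3       # mean orbital speed, m/s (~1 km/s, as in the question)

g_at_moon = G * M_EARTH / R_ORBIT**2   # gravitational acceleration there
a_centripetal = V_MOON**2 / R_ORBIT    # acceleration needed for the orbit

print(f"gravitational: {g_at_moon:.5f} m/s^2")    # ~0.00270
print(f"centripetal:   {a_centripetal:.5f} m/s^2")  # ~0.00272
# The near-agreement is why gravity is the answer to the question in this record.
```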
Document 2:::
Orbits
Astrodynamics
In orbital mechanics, a transfer orbit is an intermediate elliptical orbit that is used to move a spacecraft in an orbital maneuver from one circular, or largely circular, orbit to another.
There are several types of transfer orbits, which vary in their energy efficiency and speed of transfer. These include:
Hohmann transfer orbit, an elliptical orbit used to transfer a spacecraft between two circular orbits of different altitudes in the same plane
Bi-elliptic transfer, a slower method of transfer, but one that may be more efficient than a Hohmann transfer orbit
Geostationary transfer orbit or geosynchronous transfer orbit is usually also a Hohmann transfer orbit
Lunar transfer orbit is an orbit that touches Low Earth orbit and a lunar orbit.
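For the Hohmann transfer listed above, the two burns follow directly from the vis-viva equation; here is a minimal sketch under assumed, illustrative orbit radii (the gravitational parameter is the standard Earth value):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def hohmann_delta_vs(r1, r2, mu=MU_EARTH):
    """Delta-v of the two burns of a Hohmann transfer between circular
    orbits of radii r1, r2, via vis-viva: v^2 = mu * (2/r - 1/a)."""
    a_transfer = (r1 + r2) / 2          # semi-major axis of transfer ellipse
    v1 = math.sqrt(mu / r1)             # circular speed at departure
    v2 = math.sqrt(mu / r2)             # circular speed at arrival
    v_peri = math.sqrt(mu * (2 / r1 - 1 / a_transfer))  # transfer perigee
    v_apo = math.sqrt(mu * (2 / r2 - 1 / a_transfer))   # transfer apogee
    return abs(v_peri - v1), abs(v2 - v_apo)

# Illustrative: low Earth orbit (~6,678 km) to geostationary (~42,164 km).
dv1, dv2 = hohmann_delta_vs(6.678e6, 4.2164e7)
print(f"burn 1: {dv1:.0f} m/s, burn 2: {dv2:.0f} m/s")  # ~2426 and ~1467 m/s
```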
Document 3:::
The habitability of natural satellites describes the study of a moon's potential to provide habitats for life, though this is not an indicator that it harbors any. Natural satellites are expected to outnumber planets by a large margin and the study is therefore important to astrobiology and the search for extraterrestrial life. There are, nevertheless, significant environmental variables specific to moons.
It is projected that parameters for surface habitats will be comparable to those of planets like Earth: stellar properties, orbit, planetary mass, atmosphere and geology. Of the natural satellites in the Solar System's habitable zone (the Moon, two Martian satellites, though some estimates put those outside it, and numerous minor-planet moons), all lack the conditions for surface water. Unlike the Earth, all planetary-mass moons of the Solar System are tidally locked, and it is not yet known to what extent this and tidal forces influence habitability.
Research suggests that deep biospheres like that of Earth are possible. The strongest candidates are currently icy satellites such as those of Jupiter and Saturn (Europa and Enceladus respectively), in which subsurface liquid water is thought to exist. While the lunar surface is hostile to life as we know it, a deep lunar biosphere (or that of similar bodies) cannot yet be ruled out; deep exploration would be required for confirmation.
Exomoons are not yet confirmed to exist and their detection may be limited to transit-timing variation which is not currently sufficiently sensitive. It is possible that some of their attributes could be found through study of their transits. Despite this, some scientists estimate that there are as many habitable exomoons as habitable exoplanets. Given the general planet-to-satellite(s) mass ratio of 10,000, gas giants in the habitable zone are thought to be the best candidates to harbour Earth-like moons.
Tidal forces are likely to play as significant a role providing heat as st
Document 4:::
This article is a list of notable unsolved problems in astronomy. Some of these problems are theoretical, meaning that existing theories may be incapable of explaining certain observed phenomena or experimental results. Others are experimental, meaning that experiments necessary to test proposed theory or investigate a phenomenon in greater detail have not yet been performed. Some pertain to unique events or occurrences that have not repeated themselves and whose causes remain unclear.
Planetary astronomy
Our solar system
Orbiting bodies and rotation:
Are there any non-dwarf planets beyond Neptune?
Why do extreme trans-Neptunian objects have elongated orbits?
Rotation rate of Saturn:
Why does the magnetosphere of Saturn rotate at a rate close to that at which the planet's clouds rotate?
What is the rotation rate of Saturn's deep interior?
Satellite geomorphology:
What is the origin of the chain of high mountains that closely follows the equator of Saturn's moon, Iapetus?
Are the mountains the remnant of hot and fast-rotating young Iapetus?
Are the mountains the result of material (either from the rings of Saturn or its own ring) that over time collected upon the surface?
Extra-solar
How common are Solar System-like planetary systems? Some observed planetary systems contain Super-Earths and Hot Jupiters that orbit very close to their stars. Systems with Jupiter-like planets in Jupiter-like orbits appear to be rare. Several explanations for this rarity have been proposed, including a simple lack of data and the grand tack hypothesis.
Stellar astronomy and astrophysics
Solar cycle:
How does the Sun generate its periodically reversing large-scale magnetic field?
How do other Sol-like stars generate their magnetic fields, and what are the similarities and differences between stellar activity cycles and that of the Sun?
What caused the Maunder Minimum and other grand minima, and how does the solar cycle recover from a minimum state?
Coronal heat
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The Moon orbits Earth at a speed of approximately one kilometer per second. The Moon is kept in orbit by which of the following?
A. gravity
B. lunar phases
C. magnetism
D. ocean tides
Answer:
|
|
sciq-7580
|
multiple_choice
|
In which organ are bile acids made
|
[
"liver",
"kidney",
"spleen",
"gall bladder"
] |
A
|
Relevant Documents:
Document 0:::
The liver is a major metabolic organ only found in vertebrate animals, which performs many essential biological functions such as detoxification of the organism, and the synthesis of proteins and biochemicals necessary for digestion and growth. In humans, it is located in the right upper quadrant of the abdomen, below the diaphragm and mostly shielded by the lower right rib cage. Its other metabolic roles include carbohydrate metabolism, the production of hormones, conversion and storage of nutrients such as glucose and glycogen, and the decomposition of red blood cells.
The liver is also an accessory digestive organ that produces bile, an alkaline fluid containing cholesterol and bile acids, which emulsifies and aids the breakdown of dietary fat. The gallbladder, a small hollow pouch that sits just under the right lobe of the liver, stores and concentrates the bile produced by the liver, which is later excreted to the duodenum to help with digestion. The liver's highly specialized tissue, consisting mostly of hepatocytes, regulates a wide variety of high-volume biochemical reactions, including the synthesis and breakdown of small and complex organic molecules, many of which are necessary for normal vital functions. Estimates regarding the organ's total number of functions vary, but it is generally cited as being around 500.
It is not known how to compensate for the absence of liver function in the long term, although liver dialysis techniques can be used in the short term. Artificial livers have not been developed to promote long-term replacement in the absence of the liver. Liver transplantation is the only option for complete liver failure.
Structure
The liver is a dark reddish brown, wedge-shaped organ with two lobes of unequal size and shape. A human liver normally weighs approximately and has a width of about . There is considerable size variation between individuals, with the standard reference range for men being and for women . It is both the heaviest int
Document 1:::
Bile (from Latin bilis), or gall, is a yellow-green fluid produced by the liver of most vertebrates that aids the digestion of lipids in the small intestine. In humans, bile is primarily composed of water, produced continuously by the liver, and stored and concentrated in the gallbladder. After a human eats, this stored bile is discharged into the first section of their small intestine.
Composition
In the human liver, bile is composed of 97–98% water, 0.7% bile salts, 0.2% bilirubin, 0.51% fats (cholesterol, fatty acids, and lecithin), and 200 meq/L inorganic salts. The two main pigments of bile are bilirubin, which is yellow, and its oxidised form biliverdin, which is green. When mixed, they are responsible for the brown color of feces. About of bile is produced per day in adult human beings.
Function
Bile or gall acts to some extent as a surfactant, helping to emulsify the lipids in food. Bile salt anions are hydrophilic on one side and hydrophobic on the other side; consequently, they tend to aggregate around droplets of lipids (triglycerides and phospholipids) to form micelles, with the hydrophobic sides towards the fat and hydrophilic sides facing outwards. The hydrophilic sides are negatively charged, and this charge prevents fat droplets coated with bile from re-aggregating into larger fat particles. Ordinarily, the micelles in the duodenum have a diameter around 1–50 μm in humans.
The dispersion of food fat into micelles provides a greatly increased surface area for the action of the enzyme pancreatic lipase, which digests the triglycerides, and is able to reach the fatty core through gaps between the bile salts. A triglyceride is broken down into two fatty acids and a monoglyceride, which are absorbed by the villi on the intestine walls. After being transferred across the intestinal membrane, the fatty acids reform into triglycerides before being absorbed into the lymphatic system through lacteals. Without bile salts, most of the lipids in food wou
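The "greatly increased surface area" is simple geometry: splitting one fat droplet into n equal droplets of the same total volume multiplies the exposed area by the cube root of n. A quick check with invented droplet sizes (the millimetre and micrometre figures are illustrative assumptions, not from the excerpt):

```python
# Surface-area gain from emulsifying one fat droplet into n equal droplets.
# Volume is conserved: n * (4/3)pi r^3 = (4/3)pi R^3, so r = R / n**(1/3),
# and the total area n * 4pi r^2 = 4pi R^2 * n**(1/3) grows as n**(1/3).
def area_gain(n):
    """Factor by which total droplet surface area grows after an n-way split."""
    return n ** (1 / 3)

# Example: dispersing a 1 mm droplet into 1 um droplets takes n = 1000**3
# droplets, i.e. roughly a thousand-fold increase in exposed surface area.
print(round(area_gain(1000**3)))  # 1000
```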
Document 2:::
In vertebrates, the gallbladder, also known as the cholecyst, is a small hollow organ where bile is stored and concentrated before it is released into the small intestine. In humans, the pear-shaped gallbladder lies beneath the liver, although the structure and position of the gallbladder can vary significantly among animal species. It receives and stores bile, produced by the liver, via the common hepatic duct, and releases it via the common bile duct into the duodenum, where the bile helps in the digestion of fats.
The gallbladder can be affected by gallstones, formed by material that cannot be dissolved – usually cholesterol or bilirubin, a product of hemoglobin breakdown. These may cause significant pain, particularly in the upper-right corner of the abdomen, and are often treated with removal of the gallbladder (called a cholecystectomy). Cholecystitis, inflammation of the gallbladder, has a wide range of causes, including impaction of gallstones, infection, and autoimmune disease.
Structure
The gallbladder is a hollow grey-blue organ that sits in a shallow depression below the right lobe of the liver. In adults, the gallbladder measures approximately in length and in diameter when fully distended. The gallbladder has a capacity of about .
The gallbladder is shaped like a pear, with its tip opening into the cystic duct. The gallbladder is divided into three sections: the fundus, body, and neck. The fundus is the rounded base, angled so that it faces the abdominal wall. The body lies in a depression in the surface of the lower liver. The neck tapers and is continuous with the cystic duct, part of the biliary tree. The gallbladder fossa, against which the fundus and body of the gallbladder lie, is found beneath the junction of hepatic segments IVB and V. The cystic duct unites with the common hepatic duct to become the common bile duct. At the junction of the neck of the gallbladder and the cystic duct, there is an out-pouching of the gallbla
Document 3:::
A cholecystocyte is an epithelial cell found in the gallbladder.
See also
List of human cell types derived from the germ layers
Document 4:::
The sphincter of Boyden (also known as the choledochal sphincter) is a sphincter located in the common bile duct before it joins with the pancreatic duct to form the ampulla of Vater. This sphincter controls the flow of bile into the pancreatic duct and helps the gallbladder fill with bile.
Structure
The sphincter of Boyden is a smooth muscle sphincter surrounding the common bile duct (ductus choledocus). It occurs just before the junction with the pancreatic duct, where the ampulla of Vater is formed. Occasionally, some fibres also surround the pancreatic duct.
It is subdivided into two parts - pars superior and pars inferior. The pars inferior is the strongest component of the sphincter of Oddi complex.
Function
The sphincter of Boyden controls the flow of bile from the common bile duct into the pancreatic duct, which helps the gallbladder fill with bile.
Its contractions regulate the passage of bile into the gallbladder or the duodenum.
History
This is named after the American anatomist Edward Allen Boyden (1886-1976).
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In which organ are bile acids made
A. liver
B. kidney
C. spleen
D. gall bladder
Answer:
|