id (stringlengths 6–15) | question_type (stringclasses, 1 value) | question (stringlengths 15–683) | choices (listlengths 4–4) | answer (stringclasses, 5 values) | explanation (stringclasses, 481 values) | prompt (stringlengths 1.75k–10.9k)
---|---|---|---|---|---|---|
sciq-9775
|
multiple_choice
|
The pH scale is a scale used to express the concentration of hydrogen ions in solution. A neutral solution, neither acidic nor basic, has a pH of what?
|
[
"8",
"7",
"0",
"6"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
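(A worked note, not part of the original question: assuming a quasi-static, reversible expansion, the ideal-gas adiabat gives the answer "decreases":

$$TV^{\gamma-1} = \text{const}, \quad \gamma = \frac{C_p}{C_V} > 1 \;\Rightarrow\; T_2 = T_1\left(\frac{V_1}{V_2}\right)^{\gamma-1} < T_1 \quad \text{for } V_2 > V_1.$$

Physically, $Q = 0$ implies $\Delta U = -W$: the gas does work at the expense of its internal energy, so it cools. A free expansion into vacuum, by contrast, does no work and leaves an ideal gas's temperature unchanged, which is part of what makes this a good conceptual question.)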
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices".
This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. Nevertheless, this course is considered very challenging and one of the most difficult AP classes, as shown by its exam grade distributions.
Topic outline
The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area:
The course is based on and tests six skills, called scientific practices which include:
In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.
Exam
Students are allowed to use a four-function, scientific, or graphing calculator.
The exam has two sections: a 90-minute multiple-choice section and a 90-minute free-response section. There are 60 multiple-choice questions and six free responses, two long and four short. Each section is worth 50% of the score.
Score distribution
Commonly used textbooks
Biology, AP Edition by Sylvia Mader (2012, hardcover)
Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis)
Campbell Biology, AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Jackson)
See also
Glossary of biology
A.P. Bio (TV show)
Document 2:::
Test equating traditionally refers to the statistical process of determining comparable scores on different forms of an exam. It can be accomplished using either classical test theory or item response theory.
In item response theory, equating is the process of placing scores from two or more parallel test forms onto a common score scale. The result is that scores from two different test forms can be compared directly, or treated as though they came from the same test form. When the tests are not parallel, the general process is called linking. It is the process of equating the units and origins of two scales on which the abilities of students have been estimated from results on different tests. The process is analogous to equating degrees Fahrenheit with degrees Celsius by converting measurements from one scale to the other. The determination of comparable scores is a by-product of equating that results from equating the scales obtained from test results.
Purpose
Suppose that Dick and Jane both take a test to become licensed in a certain profession. Because the high stakes (you get to practice the profession if you pass the test) may create a temptation to cheat, the organization that oversees the test creates two forms. If we know that Dick scored 60% on form A and Jane scored 70% on form B, do we know for sure which one has a better grasp of the material? What if form A is composed of very difficult items, while form B is relatively easy? Equating analyses are performed to address this very issue, so that scores are as fair as possible.
Equating in item response theory
In item response theory, person "locations" (measures of some quality being assessed by a test) are estimated on an interval scale; i.e., locations are estimated in relation to a unit and origin. It is common in educational assessment to employ tests in order to assess different groups of students with the intention of establishing a common scale by equating the origins, and when appropri
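The scale-conversion analogy above can be made concrete. Below is a minimal, hypothetical sketch of one simple linking approach (mean-sigma linking via a common group of examinees): ability estimates from form X are mapped onto form Y's scale by matching means and standard deviations, exactly like converting Celsius to Fahrenheit. The data and function names are illustrative, not taken from any real testing program.

```python
from statistics import mean, stdev

def mean_sigma_coefficients(theta_x, theta_y):
    """Mean-sigma linking: find A, B so that theta_y is approximately
    A * theta_x + B, using ability estimates for the same (anchor)
    group of examinees on both forms."""
    A = stdev(theta_y) / stdev(theta_x)
    B = mean(theta_y) - A * mean(theta_x)
    return A, B

# Illustrative ability estimates for five examinees who took both forms
theta_x = [-1.2, -0.4, 0.0, 0.5, 1.3]
theta_y = [-0.9, -0.1, 0.4, 0.9, 1.8]

A, B = mean_sigma_coefficients(theta_x, theta_y)

def to_form_y(t):
    return A * t + B  # analogous to F = 1.8 * C + 32

print(f"A={A:.3f}, B={B:.3f}; theta_x=0.2 maps to theta_y={to_form_y(0.2):.3f}")
```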
Document 3:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam with no computer-based version. ETS administered the exam three times per year: once in April, once in October, and once in November. Some graduate programs in the United States recommended taking this exam, while others required the exam score as part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions within the biochemistry subject test.
Scores were scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores were 760 (corresponding to the 99th percentile) and 320 (1st percentile), respectively. The mean score for all test takers from July 2009 to July 2012 was 526, with a standard deviation of 95.
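Those percentile anchors can be sanity-checked against the quoted mean and standard deviation. A rough sketch, assuming the scaled scores are approximately normally distributed (an assumption for illustration, not ETS's actual equating method):

```python
from statistics import NormalDist

# Approximate the scaled-score distribution with the quoted mean and SD.
scores = NormalDist(mu=526, sigma=95)

for s in (760, 526, 320):
    print(f"score {s}: ~{scores.cdf(s):.1%} of test takers at or below")
# 760 lands near the 99th percentile and 320 near the 1st-2nd percentile,
# consistent with the reported maximum and minimum above.
```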
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS decided not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 4:::
The SAT Subject Test in Biology was the name of a one-hour multiple-choice test given on biology by the College Board. A student chose whether to take the test depending upon the college entrance requirements of the schools to which the student planned to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT Subject Tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological and molecular tests. A set of 60 questions was taken by all test takers for Biology, and a choice of 20 questions was allowed between either the E or M tests. The test was graded on a scale between 200 and 800. The average score for Molecular was 630, while for Ecological it was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done in response to changes in college admissions caused by the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
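The raw-score arithmetic described above is easy to make explicit. A small sketch (the conversion from raw score to the 200-800 scale used an equating table not given here, so only the raw score is computed):

```python
def raw_score(correct: int, incorrect: int, blank: int) -> float:
    """Formula scoring as described above: +1 per correct answer,
    -1/4 per incorrect answer, 0 per blank (80 questions total)."""
    assert correct + incorrect + blank == 80, "test had 80 questions"
    return correct - 0.25 * incorrect

# e.g. 60 correct, 12 incorrect, 8 blank -> 57.0 raw points
print(raw_score(60, 12, 8))
```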
The questions covered a broad range of topics in general biology. The E test contained more specific questions on ecological concepts (such as population studies and general ecology), while the M test focused on molecular concepts such as DNA structure, translation, and biochemistry.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The pH scale is a scale used to express the concentration of hydrogen ions in solution. A neutral solution, neither acidic nor basic, has a pH of what?
A. 8
B. 7
C. 0
D. 6
Answer:
|
|
sciq-8907
|
multiple_choice
|
What does a group of cells that work together form?
|
[
"organelle",
"organ",
"molecule",
"tissue"
] |
D
|
Relevant Documents:
Document 0:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, a double-stranded macromolecule that carries the hereditary information of the cell, is found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 1:::
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'.
Cells can acquire specialized functions and carry out various tasks such as replication, DNA repair, protein synthesis, and motility; they are capable of specialization and movement.
Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms.
The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology.
Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago.
Discovery
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i
Document 2:::
In biology, cell theory is a scientific theory first formulated in the mid-nineteenth century, that organisms are made up of cells, that they are the basic structural/organizational unit of all organisms, and that all cells come from pre-existing cells. Cells are the basic unit of structure in all organisms and also the basic unit of reproduction.
The theory was once universally accepted, but now some biologists consider non-cellular entities such as viruses to be living organisms, and thus disagree with the first tenet. As of 2021, "expert opinion remains divided roughly a third each between yes, no and don't know". As there is no universally accepted definition of life, discussion still continues.
History
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and it began the scientific study of cells, known as cell biology. When observing a piece of cork under the microscope, he was able to see pores. This was surprising at the time, as it was believed no one had seen these before. To further support the theory, Matthias Schleiden and Theodor Schwann also studied cells of both animals and plants. What they discovered were significant differences between the two types of cells. This put forth the idea that cells were fundamental not only to plants, but to animals as well.
Microscopes
The discovery of the cell was made possible through the invention of the microscope. In the first century BC, Romans were able to make glass. They discovered that objects appeared to be larger under the glass. The expanded use of lenses in eyeglasses in the 13th century probably led to wider spread use of simple microscopes (magnifying glasses) with limited magnification. Compound microscopes, which combine an objective lens with an eyepiece to view a real image achieving much higher magnification, first appeared in Europe around 1620. In 1665, Robert Hooke used a microscope
Document 3:::
Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure.
General characteristics
There are two types of cells: prokaryotes and eukaryotes.
Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles.
Prokaryotes
Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement).
Eukaryotes
Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str
Document 4:::
A cell type is a classification used to identify cells that share morphological or phenotypical features. A multicellular organism may contain cells of a number of widely differing and specialized cell types, such as muscle cells and skin cells, that differ both in appearance and function yet have identical genomic sequences. Cells may have the same genotype, but belong to different cell types due to the differential regulation of the genes they contain. Classification of a specific cell type is often done through the use of microscopy or of molecular markers (such as those from the cluster of differentiation family that are commonly used for this purpose in immunology). Recent developments in single-cell RNA sequencing have facilitated classification of cell types based on shared gene expression patterns. This has led to the discovery of many new cell types in e.g. mouse cortex, hippocampus, dorsal root ganglion and spinal cord.
Animals have evolved a greater diversity of cell types in a multicellular body (100–150 different cell types), compared
with 10–20 in plants, fungi, and protists. The exact number of cell types is, however, undefined, and the Cell Ontology, as of 2021, lists over 2,300 different cell types.
Multicellular organisms
All higher multicellular organisms contain cells specialised for different functions. Most distinct cell types arise from a single totipotent cell that differentiates into hundreds of different cell types during the course of development. Differentiation of cells is driven by different environmental cues (such as cell–cell interaction) and intrinsic differences (such as those caused by the uneven distribution of molecules during division). Multicellular organisms are composed of cells that fall into two fundamental types: germ cells and somatic cells. During development, somatic cells will become more specialized and form the three primary germ layers: ectoderm, mesoderm, and endoderm. After formation of the three germ layers, cells will continue to special
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What does a group of cells that work together form?
A. organelle
B. organ
C. molecule
D. tissue
Answer:
|
|
sciq-3658
|
multiple_choice
|
An increase in what, across the periodic table, explains why elements go from metals to metalloids and then to nonmetals?
|
[
"protons",
"neutrons",
"temperature",
"electrons"
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
A nonmetal is a chemical element that mostly lacks metallic properties. Seventeen elements are generally considered nonmetals, though some authors recognize more or fewer depending on the properties considered most representative of metallic or nonmetallic character. Some borderline elements further complicate the situation.
Nonmetals tend to have low density and high electronegativity (the ability of an atom in a molecule to attract electrons to itself). They range from colorless gases like hydrogen to shiny solids like the graphite form of carbon. Nonmetals are often poor conductors of heat and electricity, and when solid tend to be brittle or crumbly. In contrast, metals are good conductors and most are pliable. While compounds of metals tend to be basic, those of nonmetals tend to be acidic.
The two lightest nonmetals, hydrogen and helium, together make up about 98% of the observable ordinary matter in the universe by mass. Five nonmetallic elements—hydrogen, carbon, nitrogen, oxygen, and silicon—make up the overwhelming majority of the Earth's crust, atmosphere, oceans and biosphere.
The distinct properties of nonmetallic elements allow for specific uses that metals often cannot achieve. Elements like hydrogen, oxygen, carbon, and nitrogen are essential building blocks for life itself. Moreover, nonmetallic elements are integral to industries such as electronics, energy storage, agriculture, and chemical production.
Most nonmetallic elements were not identified until the 18th and 19th centuries. While a distinction between metals and other minerals had existed since antiquity, a basic classification of chemical elements as metallic or nonmetallic emerged only in the late 18th century. Since then nigh on two dozen properties have been suggested as single criteria for distinguishing nonmetals from metals.
Definition and applicable elements
Properties mentioned hereafter refer to the elements in their most stable forms in ambient conditions unless otherwise
Document 2:::
A metalloid is a type of chemical element which has a preponderance of properties in between, or that are a mixture of, those of metals and nonmetals. There is no standard definition of a metalloid and no complete agreement on which elements are metalloids. Despite the lack of specificity, the term remains in use in the literature of chemistry.
The six commonly recognised metalloids are boron, silicon, germanium, arsenic, antimony and tellurium. Five elements are less frequently so classified: carbon, aluminium, selenium, polonium and astatine. On a standard periodic table, all eleven elements are in a diagonal region of the p-block extending from boron at the upper left to astatine at lower right. Some periodic tables include a dividing line between metals and nonmetals, and the metalloids may be found close to this line.
Typical metalloids have a metallic appearance, but they are brittle and only fair conductors of electricity. Chemically, they behave mostly as nonmetals. They can form alloys with metals. Most of their other physical properties and chemical properties are intermediate in nature. Metalloids are usually too brittle to have any structural uses. They and their compounds are used in alloys, biological agents, catalysts, flame retardants, glasses, optical storage and optoelectronics, pyrotechnics, semiconductors, and electronics.
The electrical properties of silicon and germanium enabled the establishment of the semiconductor industry in the 1950s and the development of solid-state electronics from the early 1960s.
The term metalloid originally referred to nonmetals. Its more recent meaning, as a category of elements with intermediate or hybrid properties, became widespread in 1940–1960. Metalloids are sometimes called semimetals, a practice that has been discouraged, as the term semimetal has a different meaning in physics than in chemistry. In physics, it refers to a specific kind of electronic band structure of a substance. In this context, only
Document 3:::
The periodic table is an arrangement of the chemical elements, structured by their atomic number, electron configuration and recurring chemical properties. In the basic form, elements are presented in order of increasing atomic number, in the reading sequence. Then, rows and columns are created by starting new rows and inserting blank cells, so that rows (periods) and columns (groups) show elements with recurring properties (called periodicity). For example, all elements in group (column) 18 are noble gases that are largely—though not completely—unreactive.
The history of the periodic table reflects over two centuries of growth in the understanding of the chemical and physical properties of the elements, with major contributions made by Antoine-Laurent de Lavoisier, Johann Wolfgang Döbereiner, John Newlands, Julius Lothar Meyer, Dmitri Mendeleev, Glenn T. Seaborg, and others.
Early history
Nine chemical elements – carbon, sulfur, iron, copper, silver, tin, gold, mercury, and lead – have been known since before antiquity, as they are found in their native form and are relatively simple to mine with primitive tools. Around 330 BCE, the Greek philosopher Aristotle proposed that everything is made up of a mixture of one or more roots, an idea originally suggested by the Sicilian philosopher Empedocles. The four roots, which the Athenian philosopher Plato called elements, were earth, water, air and fire. Similar ideas about these four elements existed in other ancient traditions, such as Indian philosophy.
A few extra elements were known in the age of alchemy: zinc, arsenic, antimony, and bismuth. Platinum was also known to pre-Columbian South Americans, but knowledge of it did not reach Europe until the 16th century.
First categorizations
The history of the periodic table is also a history of the discovery of the chemical elements. The first person in recorded history to discover a new element was Hennig Brand, a bankrupt German merchant. Brand tried to discover
Document 4:::
Nonmetals show more variability in their properties than do metals. Metalloids are included here since they behave predominately as chemically weak nonmetals.
Physically, they nearly all exist as diatomic or monatomic gases, or polyatomic solids having more substantial (open-packed) forms and relatively small atomic radii, unlike metals, which are nearly all solid and close-packed, and mostly have larger atomic radii. If solid, they have a submetallic appearance (with the exception of sulfur) and are brittle, as opposed to metals, which are lustrous, and generally ductile or malleable; they usually have lower densities than metals; are mostly poorer conductors of heat and electricity; and tend to have significantly lower melting points and boiling points than those of most metals.
Chemically, the nonmetals mostly have higher ionisation energies, higher electron affinities (nitrogen and the noble gases have negative electron affinities) and higher electronegativity values than metals noting that, in general, the higher an element's ionisation energy, electron affinity, and electronegativity, the more nonmetallic that element is. Nonmetals, including (to a limited extent) xenon and probably radon, usually exist as anions or oxyanions in aqueous solution; they generally form ionic or covalent compounds when combined with metals (unlike metals, which mostly form alloys with other metals); and have acidic oxides whereas the common oxides of nearly all metals are basic.
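The trend described above is easy to see in standard tabulated Pauling electronegativities; a short illustration across period 3 (values are the commonly tabulated ones, rounded to two decimals):

```python
# Pauling electronegativities across period 3, left to right.
period_3 = {
    "Na": 0.93, "Mg": 1.31, "Al": 1.61,   # metals
    "Si": 1.90,                           # metalloid
    "P": 2.19, "S": 2.58, "Cl": 3.16,     # nonmetals
}
for symbol, chi in period_3.items():
    print(f"{symbol:>2}: {chi:.2f}")
# Electronegativity rises steadily across the period, tracking the
# metal -> metalloid -> nonmetal transition described above.
```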
Properties
Abbreviations used in this section are: AR, Allred–Rochow; CN, coordination number; and MH, Mohs hardness
Group 1
Hydrogen is a colourless, odourless, and comparatively unreactive diatomic gas with a density of 8.988 × 10⁻⁵ g/cm³; it is about 14 times lighter than air. It condenses to a colourless liquid at −252.879 °C and freezes into an ice- or snow-like solid at −259.16 °C. The solid form has a hexagonal crystalline structure and is soft and easily crushed. Hydrogen is an insulator in all of
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
An increase in what, across the periodic table, explains why elements go from metals to metalloids and then to nonmetals?
A. protons
B. neutrons
C. temperature
D. electrons
Answer:
|
|
sciq-4603
|
multiple_choice
|
What is the most common type of cancer in adult females?
|
[
"bone",
"breast",
"skin",
"lung"
] |
B
|
Relevant Documents:
Document 0:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 1:::
Single Best Answer (SBA or One Best Answer) is a written examination form of multiple choice questions used extensively in medical education.
Structure
A single question is posed with typically five alternate answers, from which the candidate must choose the best answer. This method avoids the problems of past examinations of a similar form described as Single Correct Answer. The older form can produce confusion where more than one of the possible answers has some validity. The newer form makes it explicit that more than one answer may have elements that are correct, but that one answer will be superior.
Prior to the widespread introduction of SBAs into medical education, the typical form of examination was true-false multiple choice questions. But during the 2000s, educators found that SBAs would be superior.
Document 2:::
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is wide variation in emphasis, ranging across business, social studies, public policy, healthcare, and pharmaceutical research.
Americas
Human Biology major at Stanford University, Palo Alto (since 1970)
Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government.
Human and Social Biology (Caribbean)
Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment.
Human Biology Program at University of Toronto
The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia
BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002)
BSc (honours) Human Biology at AIIMS (New
Document 3:::
An atypical teratoid rhabdoid tumor (AT/RT) is a rare tumor usually diagnosed in childhood. Although usually a brain tumor, AT/RT can occur anywhere in the central nervous system (CNS), including the spinal cord. About 60% occur in the posterior cranial fossa (particularly the cerebellum). One review estimated 52% in the posterior fossa, 39% supratentorial primitive neuroectodermal tumors (sPNET), 5% in the pineal region, 2% spinal, and 2% multifocal.
In the United States, three children per 1,000,000 or around 30 new AT/RT cases are diagnosed each year. AT/RT represents around 3% of pediatric cancers of the CNS.
Around 17% of all pediatric cancers involve the CNS, making these cancers the most common childhood solid tumor. The survival rate for CNS tumors is around 60%. Pediatric brain cancer is the second-leading cause of childhood cancer death, just after leukemia. Recent trends suggest that the rate of overall CNS tumor diagnosis is increasing by about 2.7% per year. As diagnostic techniques using genetic markers improve and are used more often, the proportion of AT/RT diagnoses is expected to increase.
AT/RT was only recognized as an entity in 1996 and added to the World Health Organization Brain Tumor Classification in 2000 (Grade IV). The relatively recent classification and rarity has contributed to initial misdiagnosis and nonoptimal therapy. This has led to a historically poor prognosis.
Current research is focusing on using chemotherapy protocols that are effective against rhabdomyosarcoma in combination with surgery and radiation therapy.
Recent studies using multimodal therapy have shown significantly improved survival data. In 2008,
the Dana-Farber Cancer Institute in Boston reported two-year overall survival of 53% and event-free survival of 70% (median age at diagnosis of 26 months).
In 2013, the Medical University of Vienna reported five-year overall survival of 100%, and event-free survival of 89% (median age at diagnosis
Document 4:::
The SAT Subject Test in Biology was the name of a one-hour multiple-choice test given on biology by the College Board. A student chose whether to take the test depending upon the college entrance requirements of the schools to which the student planned to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT Subject Tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological and molecular tests. A set of 60 questions was taken by all test takers for Biology, and a choice of 20 questions was allowed between either the E or M tests. The test was graded on a scale between 200 and 800. The average score for Molecular was 630, while for Ecological it was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done in response to changes in college admissions caused by the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. The E test contained more specific questions on ecological concepts (such as population studies and general ecology), while the M test focused on molecular concepts such as DNA structure, translation, and biochemistry.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the most common type of cancer in adult females?
A. bone
B. breast
C. skin
D. lung
Answer:
|
|
sciq-3736
|
multiple_choice
|
What is the process of creating complementary strands of mRNA called?
|
[
"division",
"mutation",
"differentiation",
"transcription"
] |
D
|
Relevant Documents:
Document 0:::
Transcription is the process of copying a segment of DNA into RNA. The segments of DNA transcribed into RNA molecules that can encode proteins are said to produce messenger RNA (mRNA). Other segments of DNA are copied into RNA molecules called non-coding RNAs (ncRNAs). mRNA comprises only 1–3% of total RNA samples. Less than 2% of the human genome can be transcribed into mRNA, while at least 80% of mammalian genomic DNA can be actively transcribed (in one or more types of cells), with the majority of this 80% considered to be ncRNA.
Both DNA and RNA are nucleic acids, which use base pairs of nucleotides as a complementary language. During transcription, a DNA sequence is read by an RNA polymerase, which produces a complementary, antiparallel RNA strand called a primary transcript.
Transcription proceeds in the following general steps:
RNA polymerase, together with one or more general transcription factors, binds to promoter DNA.
RNA polymerase generates a transcription bubble, which separates the two strands of the DNA helix. This is done by breaking the hydrogen bonds between complementary DNA nucleotides.
RNA polymerase adds RNA nucleotides (which are complementary to the nucleotides of one DNA strand).
RNA sugar-phosphate backbone forms with assistance from RNA polymerase to form an RNA strand.
Hydrogen bonds of the RNA–DNA helix break, freeing the newly synthesized RNA strand.
If the cell has a nucleus, the RNA may be further processed. This may include polyadenylation, capping, and splicing.
The RNA may remain in the nucleus or exit the cytoplasm through the nuclear pore complex.
If the stretch of DNA is transcribed into an RNA molecule that encodes a protein, the RNA is termed messenger RNA (mRNA); the mRNA, in turn, serves as a template for the protein's synthesis through translation. Other stretches of DNA may be transcribed into small non-coding RNAs such as microRNA, transfer RNA (tRNA), small nucleolar
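The complementary base-pairing rule in step 3 can be sketched in a few lines. A minimal illustration (the helper name and example sequence are hypothetical; real transcription also involves the promoter binding, bubble formation, and processing steps listed above):

```python
# Template DNA base -> RNA base; note A pairs with U (not T) in RNA.
DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3_to_5: str) -> str:
    """Return the mRNA strand (5'->3') complementary and antiparallel
    to a DNA template strand read 3'->5'."""
    return "".join(DNA_TO_RNA[base] for base in template_3_to_5)

# e.g. template 3'-TACGGT-5' yields mRNA 5'-AUGCCA-3'
print(transcribe("TACGGT"))  # AUGCCA
```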
Document 1:::
Genomic deoxyribonucleic acid (abbreviated as gDNA) is chromosomal DNA, in contrast to extra-chromosomal DNAs like plasmids. Most organisms have the same genomic DNA in every cell; however, only certain genes are active in each cell to allow for cell function and differentiation within the body.
The genome of an organism (encoded by the genomic DNA) is the (biological) information of heredity which is passed from one generation of organism to the next. That genome is transcribed to produce various RNAs, which are necessary for the function of the organism. Precursor mRNA (pre-mRNA) is transcribed by RNA polymerase II in the nucleus. pre-mRNA is then processed by splicing to remove introns, leaving the exons in the mature messenger RNA (mRNA). Additional processing includes the addition of a 5' cap and a poly(A) tail to the pre-mRNA. The mature mRNA may then be transported to the cytosol and translated by the ribosome into a protein. Other types of RNA include ribosomal RNA (rRNA) and transfer RNA (tRNA). These types are transcribed by RNA polymerase I and RNA polymerase III, respectively, and are essential for protein synthesis. However, 5S rRNA is the only rRNA transcribed by RNA polymerase III.
Document 2:::
The central dogma of molecular biology is an explanation of the flow of genetic information within a biological system. It is often stated as "DNA makes RNA, and RNA makes protein", although this is not its original meaning. It was first stated by Francis Crick in 1957, then published in 1958:
He re-stated it in a Nature paper published in 1970: "The central dogma of molecular biology deals with the detailed residue-by-residue transfer of sequential information. It states that such information cannot be transferred back from protein to either protein or nucleic acid."
A second version of the central dogma is popular but incorrect. This is the simplistic DNA → RNA → protein pathway published by James Watson in the first edition of The Molecular Biology of the Gene (1965). Watson's version differs from Crick's because Watson describes a two-step (DNA → RNA and RNA → protein) process as the central dogma. While the dogma as originally stated by Crick remains valid today, Watson's version does not.
The dogma is a framework for understanding the transfer of sequence information between information-carrying biopolymers, in the most common or general case, in living organisms. There are 3 major classes of such biopolymers: DNA and RNA (both nucleic acids), and protein. There are conceivable direct transfers of information that can occur between these. The dogma classes these into 3 groups of 3: three general transfers (believed to occur normally in most cells), three special transfers (known to occur, but only under specific conditions, in the case of some viruses or in a laboratory), and three unknown transfers (believed never to occur). The general transfers describe the normal flow of biological information: DNA can be copied to DNA (DNA replication), DNA information can be copied into mRNA (transcription), and proteins can be synthesized using the information in mRNA as a template (translation). The special transfers describe: RNA being copied from RNA (RNA replication), D
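The excerpt is cut off mid-list; for reference, the standard 3 × 3 grouping can be laid out as a small lookup (a summary sketch of Crick's classification, not text from the excerpt above):

```python
# Crick's 3x3 classification of sequence-information transfers.
TRANSFERS = {
    "general": ("DNA -> DNA (replication)",
                "DNA -> RNA (transcription)",
                "RNA -> protein (translation)"),
    "special": ("RNA -> RNA (RNA replication)",
                "RNA -> DNA (reverse transcription)",
                "DNA -> protein (direct translation, in vitro only)"),
    "unknown": ("protein -> protein",
                "protein -> DNA",
                "protein -> RNA"),
}
for group, transfers in TRANSFERS.items():
    print(group, "->", "; ".join(transfers))
```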
Document 3:::
In molecular biology, a library is a collection of DNA fragments that is stored and propagated in a population of micro-organisms through the process of molecular cloning. There are different types of DNA libraries, including cDNA libraries (formed from reverse-transcribed RNA), genomic libraries (formed from genomic DNA) and randomized mutant libraries (formed by de novo gene synthesis where alternative nucleotides or codons are incorporated). DNA library technology is a mainstay of current molecular biology, genetic engineering, and protein engineering, and the applications of these libraries depend on the source of the original DNA fragments. There are differences in the cloning vectors and techniques used in library preparation, but in general each DNA fragment is uniquely inserted into a cloning vector and the pool of recombinant DNA molecules is then transferred into a population of bacteria (a Bacterial Artificial Chromosome or BAC library) or yeast such that each organism contains on average one construct (vector + insert). As the population of organisms is grown in culture, the DNA molecules contained within them are copied and propagated (thus, "cloned").
Terminology
The term "library" can refer to a population of organisms, each of which carries a DNA molecule inserted into a cloning vector, or alternatively to the collection of all of the cloned vector molecules.
cDNA libraries
A cDNA library represents a sample of the mRNA purified from a particular source (either a collection of cells, a particular tissue, or an entire organism), which has been converted back to a DNA template by the use of the enzyme reverse transcriptase. It thus represents the genes that were being actively transcribed in that particular source under the physiological, developmental, or environmental conditions that existed when the mRNA was purified. cDNA libraries can be generated using techniques that promote "full-length" clones or under conditions that generate shorter f
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the process of creating complementary strands of mRNA called?
A. division
B. mutation
C. differentiation
D. transcription
Answer:
|
|
sciq-7296
|
multiple_choice
|
If a solute is a gas, increasing the temperature will do what?
|
[
"have no effect",
"change to liquid",
"decrease its solubility",
"increase its solubility"
] |
C
|
Relevant Documents:
Document 0:::
Boiling-point elevation describes the phenomenon that the boiling point of a liquid (a solvent) will be higher when another compound is added, meaning that a solution has a higher boiling point than a pure solvent. This happens whenever a non-volatile solute, such as a salt, is added to a pure solvent, such as water. The boiling point can be measured accurately using an ebullioscope.
Explanation
The boiling point elevation is a colligative property, which means that it is dependent on the presence of dissolved particles and their number, but not their identity. It is an effect of the dilution of the solvent in the presence of a solute. It is a phenomenon that happens for all solutes in all solutions, even in ideal solutions, and does not depend on any specific solute–solvent interactions. The boiling point elevation happens both when the solute is an electrolyte, such as various salts, and a nonelectrolyte. In thermodynamic terms, the origin of the boiling point elevation is entropic and can be explained in terms of the vapor pressure or chemical potential of the solvent. In both cases, the explanation depends on the fact that many solutes are only present in the liquid phase and do not enter into the gas phase (except at extremely high temperatures).
Put in vapor pressure terms, a liquid boils at the temperature at which its vapor pressure equals the surrounding pressure. For the solvent, the presence of the solute decreases its vapor pressure by dilution. A nonvolatile solute has a vapor pressure of zero, so the vapor pressure of the solution is lower than the vapor pressure of the pure solvent. Thus, a higher temperature is needed for the vapor pressure to reach the surrounding pressure, and the boiling point is elevated.
Put in chemical potential terms, at the boiling point, the liquid phase and the gas (or vapor) phase have the same chemical potential (or vapor pressure) meaning that they are energetically equivalent. The chemical potential is dependent on the temper
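For dilute solutions, the effect described above is quantified by the standard colligative-property formula (a textbook relation, stated here for reference):

$$\Delta T_b = i \, K_b \, m$$

where $m$ is the solute molality, $K_b$ is the ebullioscopic constant of the solvent (about $0.512\ ^\circ\mathrm{C\cdot kg/mol}$ for water), and $i$ is the van 't Hoff factor, the effective number of dissolved particles per formula unit ($i \approx 2$ for NaCl). For example, dissolving 1 mol of NaCl in 1 kg of water raises the boiling point by roughly $2 \times 0.512 \times 1 \approx 1.0\ ^\circ\mathrm{C}$.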
Document 1:::
Superheated water is liquid water under pressure at temperatures between the usual boiling point, 100 °C (212 °F), and the critical temperature, 374 °C (705 °F). It is also known as "subcritical water" or "pressurized hot water". Superheated water is stable because of overpressure that raises the boiling point, or by heating it in a sealed vessel with a headspace, where the liquid water is in equilibrium with vapour at the saturated vapor pressure. This is distinct from the use of the term superheating to refer to water at atmospheric pressure above its normal boiling point, which has not boiled due to a lack of nucleation sites (sometimes experienced by heating liquids in a microwave).
Many of water's anomalous properties are due to very strong hydrogen bonding. Over the superheated temperature range the hydrogen bonds break, changing the properties more than usually expected by increasing temperature alone. Water becomes less polar and behaves more like an organic solvent such as methanol or ethanol. Solubility of organic materials and gases increases by several orders of magnitude and the water itself can act as a solvent, reagent, and catalyst in industrial and analytical applications, including extraction, chemical reactions and cleaning.
Change of properties with temperature
All materials change with temperature, but superheated water exhibits greater changes than would be expected from temperature considerations alone. The viscosity and surface tension of water drop, and its diffusivity increases, with increasing temperature.
Self-ionization of water increases with temperature, and the pKw of water at 250 °C is closer to 11 than the more familiar 14 at 25 °C. This means the concentration of hydronium ion (H3O+) and the concentration of hydroxide (OH−) are increased while the pH remains neutral. Specific heat capacity at constant pressure also increases with temperature, from 4.187 kJ/(kg·K) at 25 °C to 8.138 kJ/(kg·K) at 350 °C. A significant effect on the behaviour of water at high temperatures is decreased di
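The claim that the pH "remains neutral" while pKw falls can be made precise (a standard relation, added for clarity): neutrality means equal hydronium and hydroxide concentrations, so

$$[\mathrm{H_3O^+}] = [\mathrm{OH^-}] \;\Rightarrow\; \mathrm{pH_{neutral}} = \tfrac{1}{2}\,\mathrm{p}K_w,$$

which gives pH 7.0 at 25 °C ($\mathrm{p}K_w = 14$) but about pH 5.5 at 250 °C ($\mathrm{p}K_w \approx 11$). The hot water is still neutral even though its pH is below 7; the familiar "neutral = pH 7" answer implicitly assumes 25 °C.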
Document 2:::
In chemistry, solubility is the ability of a substance, the solute, to form a solution with another substance, the solvent. Insolubility is the opposite property, the inability of the solute to form such a solution.
The extent of the solubility of a substance in a specific solvent is generally measured as the concentration of the solute in a saturated solution, one in which no more solute can be dissolved. At this point, the two substances are said to be at the solubility equilibrium. For some solutes and solvents, there may be no such limit, in which case the two substances are said to be "miscible in all proportions" (or just "miscible").
The solute can be a solid, a liquid, or a gas, while the solvent is usually solid or liquid. Both may be pure substances, or may themselves be solutions. Gases are always miscible in all proportions, except in very extreme situations, and a solid or liquid can be "dissolved" in a gas only by passing into the gaseous state first.
The solubility mainly depends on the composition of solute and solvent (including their pH and the presence of other dissolved substances) as well as on temperature and pressure. The dependency can often be explained in terms of interactions between the particles (atoms, molecules, or ions) of the two substances, and of thermodynamic concepts such as enthalpy and entropy.
Under certain conditions, the concentration of the solute can exceed its usual solubility limit. The result is a supersaturated solution, which is metastable and will rapidly exclude the excess solute if a suitable nucleation site appears.
The concept of solubility does not apply when there is an irreversible chemical reaction between the two substances, such as the reaction of calcium hydroxide with hydrochloric acid; even though one might say, informally, that one "dissolved" the other. The solubility is also not the same as the rate of solution, which is how fast a solid solute dissolves in a liquid solvent. This property de
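The temperature dependence mentioned above follows from standard thermodynamics (a sketch for reference): the solubility equilibrium constant obeys the van 't Hoff relation

$$\frac{d\ln K}{dT} = \frac{\Delta H^\circ_{\mathrm{sol}}}{R\,T^2}.$$

Dissolving a gas in a liquid is typically exothermic ($\Delta H^\circ_{\mathrm{sol}} < 0$), so $K$, and with it the gas's solubility, decreases as the temperature rises; this is the reasoning behind answer C in the question above. For most solids in water, dissolution is endothermic and solubility increases with temperature instead.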
Document 3:::
A characteristic property is a chemical or physical property that helps identify and classify substances. The characteristic properties of a substance are always the same whether the sample being observed is large or small. Thus, conversely, if the property of a substance changes as the sample size changes, that property is not a characteristic property. Examples of physical properties that are not characteristic properties are mass and volume. Examples of characteristic properties include melting points, boiling points, density, viscosity, solubility, crystal shape, and color. Substances can be separated based on their characteristic properties. For example, in fractional distillation, liquids are separated using their boiling points; water boils at 212 °F (100 °C) at standard pressure.
Identifying a substance
Every characteristic property is unique to one given substance. Scientists use characteristic properties to identify unknown substances. However, characteristic properties are most useful for distinguishing between two or more substances, not identifying a single substance. For example, isopropanol and water can be distinguished by the characteristic property of odor. Characteristic properties are useful because the sample size and the shape of the substance do not matter. For example, 1 gram of lead is the same color as 100 tons of lead.
See also
Intensive and extensive properties
Document 4:::
Boiling is the rapid phase transition from liquid to gas or vapor; the reverse of boiling is condensation. Boiling occurs when a liquid is heated to its boiling point, so that the vapour pressure of the liquid is equal to the pressure exerted on the liquid by the surrounding atmosphere. Boiling and evaporation are the two main forms of liquid vapourization.
There are two main types of boiling: nucleate boiling where small bubbles of vapour form at discrete points, and critical heat flux boiling where the boiling surface is heated above a certain critical temperature and a film of vapour forms on the surface. Transition boiling is an intermediate, unstable form of boiling with elements of both types. The boiling point of water is 100 °C or 212 °F but is lower with the decreased atmospheric pressure found at higher altitudes.
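The pressure dependence of the boiling point can be estimated with the Clausius–Clapeyron relation. The Python sketch below is a rough approximation of ours that assumes a constant enthalpy of vaporization:

import math

R = 8.314          # gas constant, J/(mol*K)
dH_vap = 40.66e3   # enthalpy of vaporization of water, J/mol
T1, P1 = 373.15, 101.325e3  # normal boiling point (K), sea-level pressure (Pa)

def boiling_point(P2):
    # Clausius-Clapeyron: 1/T2 = 1/T1 - (R/dH_vap) * ln(P2/P1)
    return 1.0 / (1.0 / T1 - (R / dH_vap) * math.log(P2 / P1))

print(boiling_point(70e3) - 273.15)  # ~90 degrees C at ~70 kPa (high altitude)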
Boiling water is used as a method of making it potable by killing microbes and viruses that may be present. The sensitivity of different micro-organisms to heat varies, but if water is held at 100 °C (212 °F) for one minute, most micro-organisms and viruses are inactivated. Ten minutes at a temperature of 70 °C (158 °F) is also sufficient to inactivate most bacteria.
Boiling water is also used in several cooking methods including boiling, steaming, and poaching.
Types
Free convection
The lowest heat flux seen in boiling is only sufficient to cause natural convection, where the warmer fluid rises due to its slightly lower density. This condition occurs only when the superheat is very low, meaning that the hot surface near the fluid is nearly the same temperature as the boiling point.
Nucleate
Nucleate boiling is characterised by the growth of bubbles or pops on a heated surface (heterogeneous nucleation), which rise from discrete points on a surface whose temperature is only slightly above the temperature of the liquid. In general, the number of nucleation sites is increased by an increasing surface temperature.
An irregular surface of the boiling
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
If a solute is a gas, increasing the temperature will do what?
A. have no effect
B. change to liquid
C. decrease its solubility
D. increase its solubility
Answer:
|
|
sciq-708
|
multiple_choice
|
Bacteria reproduce through what process?
|
[
"binary fission",
"tertiary fission",
"residual fission",
"binary fusion"
] |
A
|
Relevant Documents:
Document 0:::
Microbial intelligence (known as bacterial intelligence) is the intelligence shown by microorganisms. The concept encompasses complex adaptive behavior shown by single cells, and altruistic or cooperative behavior in populations of like or unlike cells mediated by chemical signalling that induces physiological or behavioral changes in cells and influences colony structures.
Complex cells, like protozoa or algae, show remarkable abilities to organize themselves in changing circumstances. Shell-building by amoebae reveals complex discrimination and manipulative skills that are ordinarily thought to occur only in multicellular organisms.
Even bacteria can display more complex behavior as a population. These behaviors occur in single species populations, or mixed species populations. Examples are colonies or swarms of myxobacteria, quorum sensing, and biofilms.
It has been suggested that a bacterial colony loosely mimics a biological neural network. The bacteria can take inputs in the form of chemical signals, process them, and then produce output chemicals to signal other bacteria in the colony.
Bacteria communication and self-organization in the context of network theory has been investigated by Eshel Ben-Jacob research group at Tel Aviv University which developed a fractal model of bacterial colony and identified linguistic and social patterns in colony lifecycle.
Examples of microbial intelligence
Bacterial
Bacterial biofilms can emerge through the collective behavior of thousands or millions of cells
Biofilms formed by Bacillus subtilis can use electric signals (ion transmission) to synchronize growth so that the innermost cells of the biofilm do not starve.
Under nutritional stress bacterial colonies can organize themselves in such a way so as to maximize nutrient availability.
Bacteria reorganize themselves under antibiotic stress.
Bacteria can swap genes (such as genes coding antibiotic resistance) between members of mixed species colonies.
Individual cells of
Document 1:::
The branches of microbiology can be classified into pure and applied sciences. Microbiology can be also classified based on taxonomy, in the cases of bacteriology, mycology, protozoology, and phycology. There is considerable overlap between the specific branches of microbiology with each other and with other disciplines, and certain aspects of these branches can extend beyond the traditional scope of microbiology.
In general the field of microbiology can be divided in the more fundamental branch (pure microbiology) and the applied microbiology (biotechnology). In the more fundamental field the organisms are studied as the subject itself on a deeper (theoretical) level.
Applied microbiology refers to the fields where the micro-organisms are applied in certain processes such as brewing or fermentation. The organisms themselves are often not studied as such, but are applied to sustain certain processes.
Pure microbiology
Bacteriology: the study of bacteria
Mycology: the study of fungi
Protozoology: the study of protozoa
Phycology/algology: the study of algae
Parasitology: the study of parasites
Immunology: the study of the immune system
Virology: the study of viruses
Nematology: the study of nematodes
Microbial cytology: the study of microscopic and submicroscopic details of microorganisms
Microbial physiology: the study of how the microbial cell functions biochemically. Includes the study of microbial growth, microbial metabolism and microbial cell structure
Microbial pathogenesis: the study of pathogens which happen to be microbes
Microbial ecology: the relationship between microorganisms and their environment
Microbial genetics: the study of how genes are organized and regulated in microbes in relation to their cellular functions; closely related to the field of molecular biology
Cellular microbiology: a discipline bridging microbiology and cell biology
Evolutionary microbiology: the study of the evolution of microbes. This field can be subdivided into:
Micr
Document 2:::
MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States.
Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students, the remainder require a subscription. the service is suspended with the message to:
"Please check back with us in 2017".
External links
MicrobeLibrary
Microbiology
Document 3:::
Bacillus subtilis is a rod-shaped, Gram-positive bacterium that is naturally found in soil and vegetation, and is known for its ability to form a small, tough, protective and metabolically dormant endospore. B. subtilis can divide symmetrically to make two daughter cells (binary fission), or asymmetrically, producing a single endospore that is resistant to environmental factors such as heat, desiccation, radiation and chemical insult, and that can persist in the environment for long periods of time. The endospore is formed at times of nutritional stress, allowing the organism to persist in the environment until conditions become favourable. The process of endospore formation has profound morphological and physiological consequences: radical post-replicative remodelling of two progeny cells, accompanied eventually by cessation of metabolic activity in one daughter cell (the spore) and death by lysis of the other (the ‘mother cell’).
Overview
Commitment to sporulation
Although sporulation in B. subtilis is induced by starvation, the sporulation developmental program is not initiated immediately when growth slows due to nutrient limitation. A variety of alternative responses can occur, including the activation of flagellar motility to seek new food sources by chemotaxis, the production of antibiotics to destroy competing soil microbes, the secretion of hydrolytic enzymes to scavenge extracellular proteins and polysaccharides, or the induction of ‘competence’ for uptake of exogenous DNA for consumption, with the occasional side-effect that new genetic information is stably integrated. Sporulation is the last-ditch response to starvation and is suppressed until alternative responses prove inadequate. Even then, certain conditions must be met such as chromosome integrity, the state of chromosomal replication, and the functioning of the Krebs cycle.
Nature of regulation
Sporulation requires a great deal of time and also a lot of energy and is essentially irreversible, maki
Document 4:::
Social motility describes the motile movement of groups of cells that communicate with each other to coordinate movement based on external stimuli. Multiple varieties in each kingdom express social motility, which provides unique evolutionary advantages that other species do not possess. These advantages have made some of them lethal killers, such as the trypanosomes that cause African trypanosomiasis, or Myxobacteria. Such evolutionary advantages have proven to increase the survival rate among socially motile bacteria, whether through the ability to evade predators or through communication within a swarm to form spores for long-term hibernation in times of low nutrients or toxic environments.
Communication
Bacterial cells are able to communicate with one another through the use of chemical messengers. These chemical messengers are passed from one cell to the next to control factors such as virulence, growth and nutrient conditions, etc. As first discovered in plants, diffusible signal factors (DSFs) have been found in bacteria such as Burkholderia cenocepacia and Pseudomonas aeruginosa. When individual cells are stimulated by DSF, it causes them to release their own DSF to spread the signal further and also to generate a response to the DSF often seen as growth, movement, or sporulation in unfavorable growth conditions. Via these chemical messengers, swarms of bacteria are able to increase the rate of survival compared to single cell bacteria on their own.
Benefits
Predation
Traveling in groups, often referred to as swarms, is beneficial to the organism. For instance, when Myxobacteria swarm and feed on prey, all individual cells release hydrolytic enzymes. This abundance of metabolic enzymes allows the swarm to easily degrade and engulf the prey. Interactions between separate species of organisms in a given environment are very common. Production of toxins, usually in the form of antibiotics, allows cells to ward off other organisms from infringing on their niche. Similar to the combined release of degr
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Bacteria reproduce through what process?
A. binary fission
B. tertiary fission
C. residual fission
D. binary fusion
Answer:
|
|
sciq-3799
|
multiple_choice
|
Reactant concentrations are highest at which part of a reaction?
|
[
"concurrent",
"middle",
"ending",
"beginning"
] |
D
|
Relevant Documents:
Document 0:::
An elementary reaction is a chemical reaction in which one or more chemical species react directly to form products in a single reaction step and with a single transition state. In practice, a reaction is assumed to be elementary if no reaction intermediates have been detected or need to be postulated to describe the reaction on a molecular scale. An apparently elementary reaction may be in fact a stepwise reaction, i.e. a complicated sequence of chemical reactions, with reaction intermediates of variable lifetimes.
In a unimolecular elementary reaction, a molecule A dissociates or isomerises to form the product(s):
A -> products
At constant temperature, the rate of such a reaction is proportional to the concentration of the species A.
In a bimolecular elementary reaction, two atoms, molecules, ions or radicals, A and B, react together to form the product(s):
A + B -> products
The rate of such a reaction, at constant temperature, is proportional to the product of the concentrations of the species A and B.
The rate expression for an elementary bimolecular reaction is sometimes referred to as the Law of Mass Action as it was first proposed by Guldberg and Waage in 1864. An example of this type of reaction is a cycloaddition reaction.
This rate expression can be derived from first principles by using collision theory for ideal gases. For the case of dilute fluids equivalent results have been obtained from simple probabilistic arguments.
According to collision theory the probability of three chemical species reacting simultaneously with each other in a termolecular elementary reaction is negligible. Hence such termolecular reactions are commonly referred as non-elementary reactions and can be broken down into a more fundamental set of bimolecular reactions, in agreement with the law of mass action. It is not always possible to derive overall reaction schemes, but solutions based on rate equations are often possible in terms of steady-state or Michaelis-Menten approximations.
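To make the rate laws above concrete, here is a minimal Python sketch (our own, with illustrative rate constant and concentrations) that integrates the bimolecular rate equation, rate = k[A][B], by simple Euler stepping:

def simulate(k=0.5, A=1.0, B=0.8, dt=1e-3, t_end=10.0):
    # A + B -> products; both species are consumed in a 1:1 ratio.
    t = 0.0
    while t < t_end:
        rate = k * A * B   # law of mass action for a bimolecular step
        A -= rate * dt
        B -= rate * dt
        t += dt
    return A, B

print(simulate())  # remaining [A] and [B] after 10 time units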
Notes
Chemical kinetics
Physical chemistry
Document 1:::
The limiting reagent (or limiting reactant or limiting agent) in a chemical reaction is a reactant that is totally consumed when the chemical reaction is completed. The amount of product formed is limited by this reagent, since the reaction cannot continue without it. If one or more other reagents are present in excess of the quantities required to react with the limiting reagent, they are described as excess reagents or excess reactants (sometimes abbreviated as "xs"), or to be in abundance.
The limiting reagent must be identified in order to calculate the percentage yield of a reaction since the theoretical yield is defined as the amount of product obtained when the limiting reagent reacts completely. Given the balanced chemical equation, which describes the reaction, there are several equivalent ways to identify the limiting reagent and evaluate the excess quantities of other reagents.
Method 1: Comparison of reactant amounts
This method is most useful when there are only two reactants. One reactant (A) is chosen, and the balanced chemical equation is used to determine the amount of the other reactant (B) necessary to react with A. If the amount of B actually present exceeds the amount required, then B is in excess and A is the limiting reagent. If the amount of B present is less than required, then B is the limiting reagent.
Example for two reactants
Consider the combustion of benzene, represented by the following chemical equation:
2 C6H6(l) + 15 O2(g) -> 12 CO2(g) + 6 H2O(l)
This means that 15 moles of molecular oxygen (O2) are required to react with 2 moles of benzene (C6H6).
The amount of oxygen required for other quantities of benzene can be calculated using cross-multiplication (the rule of three). For example,
if 1.5 mol C6H6 is present, then (15/2) × 1.5 = 11.25 mol O2 is required.
If in fact 18 mol O2 are present, there will be an excess of (18 - 11.25) = 6.75 mol of unreacted oxygen when all the benzene is consumed. Benzene is then the limiting reagent.
This concl
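The comparison in Method 1 is easy to automate. The following Python sketch (the function name is ours) reproduces the benzene/oxygen example from the text:

def limiting_reagent(n_benzene, n_oxygen):
    # 2 C6H6 + 15 O2 -> 12 CO2 + 6 H2O, so 15/2 mol O2 per mol C6H6.
    required_o2 = n_benzene * 15.0 / 2.0
    if n_oxygen >= required_o2:
        return "C6H6", n_oxygen - required_o2  # benzene limits; excess O2
    burnable = n_oxygen * 2.0 / 15.0           # benzene the available O2 can burn
    return "O2", n_benzene - burnable          # oxygen limits; excess C6H6

print(limiting_reagent(1.5, 18.0))  # ('C6H6', 6.75), matching the text above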
Document 2:::
The theory of response reactions (RERs) was elaborated for systems in which several physico-chemical processes run simultaneously in mutual interaction, with local thermodynamic equilibrium, and in which state variables called extents of reaction are allowed, but thermodynamic equilibrium proper is not required. It is based on detailed analysis of the Hessian determinant, using either the Gibbs or the De Donder method of analysis. The theory derives the sensitivity coefficient as the sum of the contributions of individual RERs. Thus phenomena which are in contradiction to over-general statements of the Le Chatelier principle can be interpreted. With the help of RERs the equilibrium coupling was defined. RERs could be derived based either on the species, or on the stoichiometrically independent reactions of a parallel system. The set of RERs is unambiguous in a given system, and their number (M) is determined by S, the number of species, and C, the number of components. In the case of three-component systems, RERs can be visualized on a triangle diagram.
Document 3:::
In medicinal chemistry and pharmacology, a binding coefficient is a quantity representing the extent to which a chemical compound will bind to a macromolecule. The preferential binding coefficient can be derived from the Kirkwood-Buff solution theory of solutions. Preferential binding is defined as a thermodynamic expression that describes the binding of the cosolvent over the solvent. This is in a system that is open to both the solvent and cosolvent. Consequently, preferential interaction coefficients are measures of interactions that involve “solutes that participate in a reaction in solution.”
See also
Binding constant
Partition coefficient
Binding affinity
Document 4:::
In physical chemistry and chemical engineering, extent of reaction is a quantity that measures the extent to which the reaction has proceeded. Often, it refers specifically to the value of the extent of reaction when equilibrium has been reached. It is usually denoted by the Greek letter ξ. The extent of reaction is usually defined so that it has units of amount (moles). It was introduced by the Belgian scientist Théophile de Donder.
Definition
Consider the reaction
A ⇌ 2 B + 3 C
Suppose an infinitesimal amount of the reactant A changes into B and C. This requires that all three mole numbers change according to the stoichiometry of the reaction, but they will not change by the same amounts. However, the extent of reaction can be used to describe the changes on a common footing as needed. The change of the number of moles of A can be represented by the equation dn_A = −dξ, the change of B is dn_B = +2 dξ, and the change of C is dn_C = +3 dξ.
The change in the extent of reaction is then defined as
dξ = dn_i / ν_i
where n_i denotes the number of moles of the i-th reactant or product and ν_i is the stoichiometric number of the i-th reactant or product. Although less common, we see from this expression that since the stoichiometric number can either be considered to be dimensionless or to have units of moles, conversely the extent of reaction can either be considered to have units of moles or to be a unitless mole fraction.
The extent of reaction represents the amount of progress made towards equilibrium in a chemical reaction. Considering finite changes instead of infinitesimal changes, one can write the equation for the extent of a reaction as
ξ = Δn_i / ν_i
The extent of a reaction is generally defined as zero at the beginning of the reaction. Thus the change of ξ is the extent itself. Assuming that the system has come to equilibrium,
Although in the example above the extent of reaction was positive since the system shifted in the forward direction, this usage implies that in general the extent of reaction can be positive or negative,
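A minimal numerical illustration of the definition (our own sketch, using the A ⇌ 2B + 3C example above):

nu = {"A": -1, "B": 2, "C": 3}  # stoichiometric numbers: reactants negative

def extent(species, delta_moles):
    # xi = delta_n_i / nu_i; the same value for every species in the reaction.
    return delta_moles / nu[species]

# Consuming 0.2 mol of A produces 0.4 mol of B and 0.6 mol of C:
print(extent("A", -0.2), extent("B", 0.4), extent("C", 0.6))  # 0.2 0.2 0.2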
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Reactant concentrations are highest at which part of a reaction?
A. concurrent
B. middle
C. ending
D. beginning
Answer:
|
|
sciq-115
|
multiple_choice
|
What renewable energy source converts energy from the sunlight into electricity?
|
[
"geothermal energy",
"hydrostatic energy",
"geophysical energy",
"solar energy"
] |
D
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earning a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 1:::
The Bionic Leaf is a biomimetic system that gathers solar energy via photovoltaic cells; this energy can be stored or used in a number of different functions. Bionic leaves can be composed of both synthetic (metals, ceramics, polymers, etc.) and organic materials (bacteria), or solely made of synthetic materials. The Bionic Leaf has the potential to be implemented in communities, such as urbanized areas, to provide clean air as well as needed clean energy.
History
In 2009 at MIT, Daniel Nocera's lab first developed the "artificial leaf", a device made from silicon and an anode electrocatalyst for the oxidation of water, capable of splitting water into hydrogen and oxygen gases. In 2012, Nocera came to Harvard and The Silver Lab of Harvard Medical School joined Nocera’s team. Together the teams expanded the existing technology to create the Bionic Leaf. It merged the concept of the artificial leaf with genetically engineered bacteria that feed on the hydrogen and convert CO2 in the air into alcohol fuels or chemicals.
The first version of the team's Bionic Leaf was created in 2015, but the catalyst used was harmful to the bacteria. In 2016, a new catalyst was designed to solve this issue, named the "Bionic Leaf 2.0". Other versions of artificial leaves have been developed by the California Institute of Technology and the Joint Center for Artificial Photosynthesis, the University of Waterloo, and the University of Cambridge.
Mechanics
Photosynthesis
In natural photosynthesis, photosynthetic organisms produce energy-rich organic molecules from water and carbon dioxide by using solar radiation. Therefore, the process of photosynthesis removes carbon dioxide, a greenhouse gas, from the air. Artificial photosynthesis, as performed by the Bionic Leaf, is approximately 10 times more efficient than natural photosynthesis. Using a catalyst, the Bionic Leaf can remove excess carbon dioxide in the air and convert it to useful alcohol fuels, like isopropanol and isobutanol.
Document 2:::
Primary energy (PE) is the energy found in nature that has not been subjected to any human engineered conversion process. It encompasses energy contained in raw fuels and other forms of energy, including waste, received as input to a system. Primary energy can be non-renewable or renewable.
Total primary energy supply (TPES) is the sum of production and imports, plus or minus stock changes, minus exports and international bunker storage.
The International Recommendations for Energy Statistics (IRES) prefers total energy supply (TES) to refer to this indicator. These expressions are often used to describe the total energy supply of a national territory.
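A hedged sketch of the supply balance just described, with illustrative figures in petajoules (the sign convention for stock changes is ours: a stock build-up subtracts from supply, a draw-down adds to it):

def total_energy_supply(production, imports, exports, bunkers, stock_build):
    # TES = production + imports - exports - international bunkers -/+ stock changes
    return production + imports - exports - bunkers - stock_build

print(total_energy_supply(production=5000, imports=1200, exports=800,
                          bunkers=150, stock_build=50))  # 5200 PJ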
Secondary energy is a carrier of energy, such as electricity. These are produced by conversion from a primary energy source.
Primary energy is used as a measure in energy statistics in the compilation of energy balances, as well as in the field of energetics. In energetics, a primary energy source (PES) refers to the energy forms required by the energy sector to generate the supply of energy carriers used by human society. Primary energy counts only raw energy, not usable energy, and fails to account well for energy losses, particularly the large losses in thermal sources. It therefore generally grossly undercounts non-thermal renewable energy sources.
Examples of sources
Primary energy sources should not be confused with the energy system components (or conversion processes) through which they are converted into energy carriers.
Usable energy
Primary energy sources are transformed in energy conversion processes to more convenient forms of energy that can directly be used by society, such as electrical energy, refined fuels, or synthetic fuels such as hydrogen fuel. In the field of energetics, these forms are called energy carriers and correspond to the concept of "secondary energy" in energy statistics.
Conversion to energy carriers (or secondary energy)
Energy carriers are energy forms which have been transformed from primary energy sources.
Document 3:::
Energy quality is a measure of the ease with which a form of energy can be converted to useful work or to another form of energy: i.e. its content of thermodynamic free energy. A high quality form of energy has a high content of thermodynamic free energy, and therefore a high proportion of it can be converted to work; whereas with low quality forms of energy, only a small proportion can be converted to work, and the remainder is dissipated as heat. The concept of energy quality is also used in ecology, where it is used to track the flow of energy between different trophic levels in a food chain and in thermoeconomics, where it is used as a measure of economic output per unit of energy. Methods of evaluating energy quality often involve developing a ranking of energy qualities in hierarchical order.
Examples: Industrialization, Biology
The consideration of energy quality was a fundamental driver of industrialization from the 18th through 20th centuries. Consider for example the industrialization of New England in the 18th century. This refers to the construction of textile mills containing power looms for weaving cloth. The simplest, most economical and straightforward source of energy was provided by water wheels, extracting energy from a millpond behind a dam on a local creek. If another nearby landowner also decided to build a mill on the same creek, the construction of their dam would lower the overall hydraulic head to power the existing waterwheel, thus hurting power generation and efficiency. This eventually became an issue endemic to the entire region, reducing the overall profitability of older mills as newer ones were built. The search for higher quality energy was a major impetus throughout the 19th and 20th centuries. For example, burning coal to make steam to generate mechanical energy would not have been imaginable in the 18th century; by the end of the 19th century, the use of water wheels was long outmoded. Similarly, the quality of energy from elec
Document 4:::
Electrochemical energy conversion is a field of energy technology concerned with electrochemical methods of energy conversion, including fuel cells and photoelectrochemical cells. This field of technology also includes electrical storage devices such as batteries and supercapacitors. It is increasingly important in the context of automotive propulsion systems. More powerful, longer-running batteries have been created, allowing longer run times for electric vehicles. These systems would include the energy-conversion fuel cells and photoelectrochemical cells mentioned above.
See also
Bioelectrochemical reactor
Chemotronics
Electrochemical cell
Electrochemical engineering
Electrochemical reduction of carbon dioxide
Electrofuels
Electrohydrogenesis
Electromethanogenesis
Enzymatic biofuel cell
Photoelectrochemical cell
Photoelectrochemical reduction of CO2
Notes
External links
International Journal of Energy Research
MSAL
NIST
scientific journal article
Georgia tech
Electrochemistry
Electrochemical engineering
Energy engineering
Energy conversion
Biochemical engineering
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What renewable energy source converts energy from the sunlight into electricity?
A. geothermal energy
B. hydrostatic energy
C. geophysical energy
D. solar energy
Answer:
|
|
sciq-2346
|
multiple_choice
|
Reflected in their relatively high level of intelligence and their ability to learn new behaviors, what organs tend to be relatively large in primates?
|
[
"lungs",
"brains",
"hearts",
"kidneys"
] |
B
|
Relevant Documents:
Document 0:::
This is a list of large extant primate species (excluding humans) that can be ordered by average weight or height range. There is no fixed definition of a large primate; it is typically assessed empirically. Primates exhibit the highest levels of sexual dimorphism amongst mammals; therefore, the maximum body dimensions included in this list generally refer to male specimens.
Mandrills and baboons are monkeys; the rest of the species on this list are apes. Typically, Old World monkeys (paleotropical) are larger than New World monkeys (neotropical); the reasons for this are not entirely understood, but several hypotheses have been generated. As a rule, primate brains are "significantly larger" than those of other mammals with similar body sizes. Until well into the 19th century, juvenile orangutans were taken from the wild and died in short order, eventually leading naturalists to mistakenly assume that the living specimens they briefly encountered and the skeletons of adult orangutans belonged to entirely different species.
Largest non-human primates
See also
Largest wild canids
List of largest land carnivorans
Monkey
Great apes
List of heaviest land mammals
Largest mammals
Sexual dimorphism in non-human primates
Document 1:::
This article contains a list of organs of the human body. A widely cited count is 79 organs (the number rises if each bone and muscle is counted as a separate organ, an increasingly common practice); however, there is no universal standard definition of what constitutes an organ, and some tissue groups' status as one is debated. Since there is no single standard definition of what an organ is, the number of organs varies depending on how one defines an organ. For example, this list contains more than 79 organs (approximately 103).
It is still not clear which definition of an organ is used for all the organs in this list; it appears to have been compiled based on which Wikipedia articles on organs were available.
Musculoskeletal system
Skeleton
Joints
Ligaments
Muscular system
Tendons
Digestive system
Mouth
Teeth
Tongue
Lips
Salivary glands
Parotid glands
Submandibular glands
Sublingual glands
Pharynx
Esophagus
Stomach
Small intestine
Duodenum
Jejunum
Ileum
Large intestine
Cecum
Ascending colon
Transverse colon
Descending colon
Sigmoid colon
Rectum
Liver
Gallbladder
Mesentery
Pancreas
Anal canal
Appendix
Respiratory system
Nasal cavity
Pharynx
Larynx
Trachea
Bronchi
Bronchioles and smaller air passages
Lungs
Muscles of breathing
Urinary system
Kidneys
Ureter
Bladder
Urethra
Reproductive systems
Female reproductive system
Internal reproductive organs
Ovaries
Fallopian tubes
Uterus
Cervix
Vagina
External reproductive organs
Vulva
Clitoris
Male reproductive system
Internal reproductive organs
Testicles
Epididymis
Vas deferens
Prostate
External reproductive organs
Penis
Scrotum
Endocrine system
Pituitary gland
Pineal gland
Thyroid gland
Parathyroid glands
Adrenal glands
Pancreas
Circulatory system
Circulatory system
Heart
Arteries
Veins
Capillaries
Lymphatic system
Lymphatic vessel
Lymph node
Bone marrow
Thymus
Spleen
Gut-associated lymphoid tissue
Tonsils
Interstitium
Nervous system
Central nervous system
Document 2:::
In anatomy, a lobe is a clear anatomical division or extension of an organ (as seen for example in the brain, lung, liver, or kidney) that can be determined without the use of a microscope at the gross anatomy level. This is in contrast to the much smaller lobule, which is a clear division only visible under the microscope.
Interlobar ducts connect lobes and interlobular ducts connect lobules.
Examples of lobes
The four main lobes of the brain
the frontal lobe
the parietal lobe
the occipital lobe
the temporal lobe
The three lobes of the human cerebellum
the flocculonodular lobe
the anterior lobe
the posterior lobe
The two lobes of the thymus
The two and three lobes of the lungs
Left lung: superior and inferior
Right lung: superior, middle, and inferior
The four lobes of the liver
Left lobe of liver
Right lobe of liver
Quadrate lobe of liver
Caudate lobe of liver
The renal lobes of the kidney
Earlobes
Examples of lobules
the cortical lobules of the kidney
the testicular lobules of the testis
the lobules of the mammary gland
the pulmonary lobules of the lung
the lobules of the thymus
Document 3:::
Work
He is an associate professor of anatomy, Department of Anatomy, Howard University College of Medicine (US). He was among the most cited/influential anatomists in 2019.
Books
Single author or co-author books
DIOGO, R. (2021). Meaning of Life, Human Nature and Delusions - How Tales about Love, Sex, Races, Gods and Progress Affect Us and Earth's Splendor. Springer (New York, US).
MONTERO, R., ADESOMO, A. & R. DIOGO (2021). On viruses, pandemics, and us: a developing story [De virus, pandemias y nosotros: una historia en desarollo]. Independently published, Tucuman, Argentina. 495 pages.
DIOGO, R., J. ZIERMANN, J. MOLNAR, N. SIOMAVA & V. ABDALA (2018). Muscles of Chordates: development, homologies and evolution. Taylor & Francis (Oxford, UK). 650 pages.
DIOGO, R., B. SHEARER, J. M. POTAU, J. F. PASTOR, F. J. DE PAZ, J. ARIAS-MARTORELL, C. TURCOTTE, A. HAMMOND, E. VEREECKE, M. VANHOOF, S. NAUWELAERTS & B. WOOD (2017). Photographic and descriptive musculoskeletal atlas of bonobos - with notes on the weight, attachments, variations, and innervation of the muscles and comparisons with common chimpanzees and humans. Springer (New York, US). 259 pages.
DIOGO, R. (2017). Evolution driven by organismal behavior: a unifying view of life, function, form, mismatches and trends. Springer
Document 4:::
Anatomy () is the branch of biology concerned with the study of the structure of organisms and their parts. Anatomy is a branch of natural science that deals with the structural organization of living things. It is an old science, having its beginnings in prehistoric times. Anatomy is inherently tied to developmental biology, embryology, comparative anatomy, evolutionary biology, and phylogeny, as these are the processes by which anatomy is generated, both over immediate and long-term timescales. Anatomy and physiology, which study the structure and function of organisms and their parts respectively, make a natural pair of related disciplines, and are often studied together. Human anatomy is one of the essential basic sciences that are applied in medicine.
Anatomy is a complex and dynamic field that is constantly evolving as new discoveries are made. In recent years, there has been a significant increase in the use of advanced imaging techniques, such as MRI and CT scans, which allow for more detailed and accurate visualizations of the body's structures.
The discipline of anatomy is divided into macroscopic and microscopic parts. Macroscopic anatomy, or gross anatomy, is the examination of an animal's body parts using unaided eyesight. Gross anatomy also includes the branch of superficial anatomy. Microscopic anatomy involves the use of optical instruments in the study of the tissues of various structures, known as histology, and also in the study of cells.
The history of anatomy is characterized by a progressive understanding of the functions of the organs and structures of the human body. Methods have also improved dramatically, advancing from the examination of animals by dissection of carcasses and cadavers (corpses) to 20th-century medical imaging techniques, including X-ray, ultrasound, and magnetic resonance imaging.
Etymology and definition
Derived from the Greek anatomē "dissection" (from anatémnō "I cut up, cut open" from ἀνά aná "up", and τέμνω té
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Reflected in their relatively high level of intelligence and their ability to learn new behaviors, what organs tend to be relatively large in primates?
A. lungs
B. brains
C. hearts
D. kidneys
Answer:
|
|
ai2_arc-38
|
multiple_choice
|
Puddles on a sidewalk are evaporating quickly. What most likely causes the puddles to evaporate?
|
[
"heat",
"clouds",
"air",
"water"
] |
A
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earning a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 2:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
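As a small illustration of this structure (our own toy example; the skills and states are invented), a knowledge space can be modeled in Python as a family of feasible states that is closed under union:

from itertools import combinations

states = {frozenset(), frozenset({"counting"}),
          frozenset({"counting", "addition"}),
          frozenset({"counting", "addition", "multiplication"})}

def closed_under_union(family):
    # Knowledge spaces are closed under union: combining two feasible
    # competencies must itself be a feasible competency.
    return all(a | b in family for a, b in combinations(family, 2))

print(closed_under_union(states))  # True: a chain of prerequisites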
Document 3:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
Document 4:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Puddles on a sidewalk are evaporating quickly. What most likely causes the puddles to evaporate?
A. heat
B. clouds
C. air
D. water
Answer:
|
|
scienceQA-7455
|
multiple_choice
|
What do these two changes have in common?
an old sandwich rotting in a trashcan
chicken cooking in an oven
|
[
"Both are chemical changes.",
"Both are only physical changes.",
"Both are caused by cooling.",
"Both are caused by heating."
] |
A
|
Step 1: Think about each change.
A sandwich rotting is a chemical change. The matter in the sandwich breaks down and slowly turns into a different type of matter.
Cooking chicken is a chemical change. The heat causes the matter in the chicken to change. Cooked chicken and raw chicken are different types of matter.
Step 2: Look at each answer choice.
Both are only physical changes.
Both changes are chemical changes. They are not physical changes.
Both are chemical changes.
Both changes are chemical changes. The type of matter before and after each change is different.
Both are caused by heating.
Cooking is caused by heating. But a sandwich rotting is not.
Both are caused by cooling.
Neither change is caused by cooling.
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferromagnetic materials can become magnetic. The process is reversible.
Document 2:::
In cooking, proofing (also called proving) is a step in the preparation of yeast bread and other baked goods in which the dough is allowed to rest and rise a final time before baking. During this rest period, yeast ferments the dough and produces gases, thereby leavening the dough.
In contrast, proofing or blooming yeast (as opposed to proofing the dough) may refer to the process of first suspending yeast in warm water, a necessary hydration step when baking with active dry yeast. Proofing can also refer to the process of testing the viability of dry yeast by suspending it in warm water with carbohydrates (sugars). If the yeast is still alive, it will feed on the sugar and produce a visible layer of foam on the surface of the water mixture.
Fermentation rest periods are not always explicitly named, and can appear in recipes as "Allow dough to rise." When they are named, terms include "bulk fermentation", "first rise", "second rise", "final proof" and "shaped proof".
Dough processes
The process of making yeast-leavened bread involves a series of alternating work and rest periods. Work periods occur when the dough is manipulated by the baker. Some work periods are called mixing, kneading, and folding, as well as division, shaping, and panning. Work periods are typically followed by rest periods, which occur when dough is allowed to sit undisturbed. Particular rest periods include, but are not limited to, autolyse, bulk fermentation and proofing. Proofing, also sometimes called final fermentation, is the specific term for allowing dough to rise after it has been shaped and before it is baked.
Some breads begin mixing with an autolyse. This refers to a period of rest after the initial mixing of flour and water, a rest period that occurs sequentially before the addition of yeast, salt and other ingredients. This rest period allows for better absorption of water and helps the gluten and starches to align. The autolyse is credited to Raymond Calvel, who recommende
Document 3:::
Warmed-over flavor is an unpleasant characteristic usually associated with meat which has been cooked and then refrigerated. The deterioration of meat flavor is most noticeable upon reheating. As cooking and subsequent refrigeration is the case with most convenience foods containing meat, it is a significant challenge to the processed food industry. The flavor is variously described as "rancid," "stale," and like "cardboard," and even compared to "damp dog hair." Warmed-over flavor is caused by the oxidative decomposition of lipids (fatty substances) in the meat into chemicals (short-chain aldehydes or ketones) which have an unpleasant taste or odor. This decomposition process begins after cooking or processing and is aided by the release of naturally occurring iron in the meat.
Occurrence of warmed-over flavor
The occurrence of warmed-over flavor begins as lipids, primarily lipids from the cell membrane of cells in the meat, are attacked by oxygen. This process is aided by the release of iron from iron-containing proteins in the meat, including myoglobin and hemoglobin. The iron is released by the heat of cooking, or by mechanical grinding. The free iron then acts as a catalyst, or promoter, of oxidation reactions. The reactions break down some of the fats in the meat to form primary oxidation products. These chemicals are not directly responsible for the objectionable taste. Instead, they subsequently further decompose to secondary oxidation products including "alcohols, acids, ketones, lactones and unsaturated hydrocarbons which produce the [warmed-over flavor]." Many of these compounds, including pentanal, hexanal, pentylfuran, 2-pentylfuran, 2-octenal and 2,3-octanedione have a strong odor and can be tasted at concentrations as low as 1 part per billion.
Prevention
Warmed-over flavor can be prevented by the addition of preservatives to processed meat. Many of the preservatives are antioxidants, ranging from tocopherols (related to vitamin E) to plu
Document 4:::
Rancidification is the process of complete or incomplete autoxidation or hydrolysis of fats and oils when exposed to air, light, moisture, or bacterial action, producing short-chain aldehydes, ketones and free fatty acids.
When these processes occur in food, undesirable odors and flavors can result. In processed meats, these flavors are collectively known as warmed-over flavor. In certain cases, however, the flavors can be desirable (as in aged cheeses).
Rancidification can also detract from the nutritional value of food, as some vitamins are sensitive to oxidation. Similar to rancidification, oxidative degradation also occurs in other hydrocarbons, such as lubricating oils, fuels, and mechanical cutting fluids.
Pathways
Five pathways for rancidification are recognized:
Hydrolytic
Hydrolytic rancidity refers to the odor that develops when triglycerides are hydrolyzed and free fatty acids are released. This reaction of lipid with water may require a catalyst (such as a lipase, or acidic or alkaline conditions) leading to the formation of free fatty acids and glycerol. In particular, short-chain fatty acids, such as butyric acid, are malodorous. When short-chain fatty acids are produced, they serve as catalysts themselves, further accelerating the reaction, a form of autocatalysis.
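The autocatalytic character described above can be made concrete with a small numerical sketch. This is illustrative only: the rate constants below are arbitrary placeholders, not measured values, and the two-term rate law (a slow baseline hydrolysis plus a term proportional to the free fatty acids already formed) is a minimal assumption.

```python
# Minimal sketch of autocatalytic hydrolytic rancidity (assumed model):
# triglycerides (TG) hydrolyze to free fatty acids (FFA), and the FFA
# themselves catalyse further hydrolysis. k1, k2 are placeholder values.

def simulate_autocatalysis(tg0=1.0, ffa0=0.0, k1=0.01, k2=0.5,
                           dt=0.1, steps=1000):
    tg, ffa = tg0, ffa0
    history = []
    for _ in range(steps):
        rate = k1 * tg + k2 * tg * ffa   # base + autocatalytic term
        tg -= rate * dt
        ffa += rate * dt
        history.append((tg, ffa))
    return history

# The FFA curve is sigmoidal: slow at first, then accelerating as the
# reaction products catalyse their own formation.
trace = simulate_autocatalysis()
print(f"final FFA fraction: {trace[-1][1]:.3f}")
```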
Oxidative
Oxidative rancidity is associated with the degradation by oxygen in the air.
Free-radical oxidation
The double bonds of an unsaturated fatty acid can be cleaved by free-radical reactions involving molecular oxygen. This reaction causes the release of malodorous and highly volatile aldehydes and ketones. Because of the nature of free-radical reactions, the reaction is catalyzed by sunlight. Oxidation primarily occurs with unsaturated fats. For example, even though meat is held under refrigeration or in a frozen state, the poly-unsaturated fat will continue to oxidize and slowly become rancid. The fat oxidation process, potentially resulting in rancidity, begins immediately
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
an old sandwich rotting in a trashcan
chicken cooking in an oven
A. Both are chemical changes.
B. Both are only physical changes.
C. Both are caused by cooling.
D. Both are caused by heating.
Answer:
|
sciq-1985
|
multiple_choice
|
Some metals, such as gold and platinum, do not corrode easily because they are very resistant to what?
|
[
"heat",
"precipitation",
"oxidation",
"evaporation"
] |
C
|
Relevant Documents:
Document 0:::
The chemical elements can be broadly divided into metals, metalloids, and nonmetals according to their shared physical and chemical properties. All metals have a shiny appearance (at least when freshly polished); are good conductors of heat and electricity; form alloys with other metals; and have at least one basic oxide. Metalloids are metallic-looking brittle solids that are either semiconductors or exist in semiconducting forms, and have amphoteric or weakly acidic oxides. Typical nonmetals have a dull, coloured or colourless appearance; are brittle when solid; are poor conductors of heat and electricity; and have acidic oxides. Most or some elements in each category share a range of other properties; a few elements have properties that are either anomalous given their category, or otherwise extraordinary.
Properties
Metals
Metals appear lustrous (beneath any patina); form mixtures (alloys) when combined with other metals; tend to lose or share electrons when they react with other substances; and each forms at least one predominantly basic oxide.
Most metals are silvery looking, high density, relatively soft and easily deformed solids with good electrical and thermal conductivity, closely packed structures, low ionisation energies and electronegativities, and are found naturally in combined states.
Some metals appear coloured (Cu, Cs, Au), have low densities (e.g. Be, Al) or very high melting points (e.g. W, Nb), are liquids at or near room temperature (e.g. Hg, Ga), are brittle (e.g. Os, Bi), not easily machined (e.g. Ti, Re), or are noble (hard to oxidise, e.g. Au, Pt), or have nonmetallic structures (Mn and Ga are structurally analogous to, respectively, white P and I).
Metals comprise the large majority of the elements, and can be subdivided into several different categories. From left to right in the periodic table, these categories include the highly reactive alkali metals; the less-reactive alkaline earth metals, lanthanides, and radioactive actinides; the archetypal tran
Document 1:::
Ductility is a mechanical property commonly described as a material's amenability to drawing (e.g. into wire). In materials science, ductility is defined by the degree to which a material can sustain plastic deformation under tensile stress before failure. Ductility is an important consideration in engineering and manufacturing. It defines a material's suitability for certain manufacturing operations (such as cold working) and its capacity to absorb mechanical overload. Some metals that are generally described as ductile include gold and copper, while platinum is the most ductile of all metals in pure form. However, not all metals experience ductile failure as some can be characterized with brittle failure like cast iron. Polymers generally can be viewed as ductile materials as they typically allow for plastic deformation.
Malleability, a similar mechanical property, is characterized by a material's ability to deform plastically without failure under compressive stress. Historically, materials were considered malleable if they were amenable to forming by hammering or rolling. Lead is an example of a material which is relatively malleable but not ductile.
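The definitions above can be made quantitative. As a hedged illustration (the percent-elongation and percent-reduction-of-area measures are standard tensile-test quantities, but the numbers below are invented for the example, not taken from the source):

```python
# Sketch of the two standard tensile-test measures of ductility.

def percent_elongation(l0, lf):
    """Ductility as % elongation: gauge length before/after fracture."""
    return 100.0 * (lf - l0) / l0

def percent_reduction_of_area(a0, af):
    """Ductility as % reduction of cross-sectional area at fracture."""
    return 100.0 * (a0 - af) / a0

# Illustrative numbers: a ductile metal might stretch from a 50 mm
# gauge length to 70 mm before fracture; a brittle one barely at all.
print(f"ductile: {percent_elongation(50.0, 70.0):.0f}% elongation")   # 40%
print(f"brittle: {percent_elongation(50.0, 50.5):.0f}% elongation")   # 1%
```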
Materials science
Ductility is especially important in metalworking, as materials that crack, break or shatter under stress cannot be manipulated using metal-forming processes such as hammering, rolling, drawing or extruding. Malleable materials can be formed cold using stamping or pressing, whereas brittle materials may be cast or thermoformed.
High degrees of ductility occur due to metallic bonds, which are found predominantly in metals; this leads to the common perception that metals are ductile in general. In metallic bonds valence shell electrons are delocalized and shared between many atoms. The delocalized electrons allow metal atoms to slide past one another without being subjected to strong repulsive forces that would cause other materials to shatter.
The ductility of steel varies depending on the al
Document 2:::
Nonmetals show more variability in their properties than do metals. Metalloids are included here since they behave predominately as chemically weak nonmetals.
Physically, they nearly all exist as diatomic or monatomic gases, or polyatomic solids having more substantial (open-packed) forms and relatively small atomic radii, unlike metals, which are nearly all solid and close-packed, and mostly have larger atomic radii. If solid, they have a submetallic appearance (with the exception of sulfur) and are brittle, as opposed to metals, which are lustrous, and generally ductile or malleable; they usually have lower densities than metals; are mostly poorer conductors of heat and electricity; and tend to have significantly lower melting points and boiling points than those of most metals.
Chemically, the nonmetals mostly have higher ionisation energies, higher electron affinities (nitrogen and the noble gases have negative electron affinities) and higher electronegativity values than metals noting that, in general, the higher an element's ionisation energy, electron affinity, and electronegativity, the more nonmetallic that element is. Nonmetals, including (to a limited extent) xenon and probably radon, usually exist as anions or oxyanions in aqueous solution; they generally form ionic or covalent compounds when combined with metals (unlike metals, which mostly form alloys with other metals); and have acidic oxides whereas the common oxides of nearly all metals are basic.
Properties
Abbreviations used in this section are: AR Allred-Rochow; CN coordination number; and MH Mohs hardness
Group 1
Hydrogen is a colourless, odourless, and comparatively unreactive diatomic gas with a density of 8.988 × 10⁻⁵ g/cm³ and is about 14 times lighter than air. It condenses to a colourless liquid at −252.879 °C and freezes into an ice- or snow-like solid at −259.16 °C. The solid form has a hexagonal crystalline structure and is soft and easily crushed. Hydrogen is an insulator in all of
Document 3:::
Zinagizado is an electrochemical process that gives a ferrous metal material anti-corrosive properties. A constant electric current is applied through a circuit so that metal from the alloy is deposited on the piece to be coated, forming a surface coating. The alloy used is called Zinag (Zn-Al-Ag); this alloy has excellent mechanical and anti-corrosive properties, extending the service life of the coated piece by about 60%.
The deposition of Zinag protects against environmental corrosion and can be used to coat all kinds of steel materials in contact with a corrosive medium. The anti-corrosive property derives from the corrosion resistance of zinc, enhanced by the aluminium and silver additions; the coating is sacrificial with respect to the iron and steel, providing cathodic protection.
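Faraday's law of electrolysis gives a first estimate of how much coating a given current deposits. This is a hedged sketch: the Zinag alloy's exact composition and electrochemistry are not specified here, so pure zinc (M = 65.38 g/mol, n = 2) is used as a stand-in, and the current efficiency is an assumed value.

```python
# First-order estimate of electrodeposited mass via Faraday's law:
# m = (I * t * M) / (n * F), scaled by current efficiency.

F = 96485.0  # Faraday constant, C/mol

def deposited_mass(current_a, time_s, molar_mass, n_electrons,
                   efficiency=1.0):
    """Mass (g) deposited by a constant current."""
    charge = current_a * time_s          # total charge in coulombs
    return efficiency * charge * molar_mass / (n_electrons * F)

# e.g. 2 A for one hour of zinc deposition at an assumed 95% efficiency
print(f"{deposited_mass(2.0, 3600, 65.38, 2, 0.95):.2f} g")  # ~2.32 g
```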
This process is an innovation by Said Robles Casolco and Adrianni Zanatta.
Patent: "Zinagizado as corrosion process for metals by electrolytic method", No. MX/a/2010/009200, IMPI-Mexico.
Document 4:::
Material is a substance or mixture of substances that constitutes an object. Materials can be pure or impure, living or non-living matter. Materials can be classified on the basis of their physical and chemical properties, or on their geological origin or biological function. Materials science is the study of materials, their properties and their applications.
Raw materials can be processed in different ways to influence their properties, by purification, shaping or the introduction of other materials. New materials can be produced from raw materials by synthesis.
In industry, materials are inputs to manufacturing processes to produce products or more complex materials.
Historical elements
Materials chart the history of humanity. The system of the three prehistoric ages (Stone Age, Bronze Age, Iron Age) was succeeded by historical ages: the steel age in the 19th century, the polymer age in the middle of the following century (plastic age) and the silicon age in the second half of the 20th century.
Classification by use
Materials can be broadly categorized in terms of their use, for example:
Building materials are used for construction
Building insulation materials are used to retain heat within buildings
Refractory materials are used for high-temperature applications
Nuclear materials are used for nuclear power and weapons
Aerospace materials are used in aircraft and other aerospace applications
Biomaterials are used for applications interacting with living systems
Material selection is a process to determine which material should be used for a given application.
Classification by structure
The relevant structure of materials has a different length scale depending on the material. The structure and composition of a material can be determined by microscopy or spectroscopy.
Microstructure
In engineering, materials can be categorised according to their microscopic structure:
Plastics: a wide range of synthetic or semi-synthetic materials that use polymers as a main ingred
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Some metals, such as gold and platinum, do not corrode easily because they are very resistant to what?
A. heat
B. precipitation
C. oxidation
D. evaporation
Answer:
|
|
sciq-3206
|
multiple_choice
|
What, in mammalian lungs, increases the surface area for gas exchange?
|
[
"thorax",
"alveoli",
"bronchioles",
"bronchi"
] |
B
|
Relevant Documents:
Document 0:::
Speech science refers to the study of production, transmission and perception of speech. Speech science involves anatomy, in particular the anatomy of the oro-facial region and neuroanatomy, physiology, and acoustics.
Speech production
The production of speech is a highly complex motor task that involves approximately 100 orofacial, laryngeal, pharyngeal, and respiratory muscles. Precise and expeditious timing of these muscles is essential for the production of temporally complex speech sounds, which are characterized by transitions as short as 10 ms between frequency bands and an average speaking rate of approximately 15 sounds per second. Speech production requires airflow from the lungs (respiration) to be phonated through the vocal folds of the larynx (phonation) and resonated in the vocal cavities shaped by the jaw, soft palate, lips, tongue and other articulators (articulation).
Respiration
Respiration is the physical process of gas exchange between an organism and its environment involving four steps (ventilation, distribution, perfusion and diffusion) and two processes (inspiration and expiration). Respiration can be described as the mechanical process of air flowing into and out of the lungs on the principle of Boyle's law, stating that, as the volume of a container increases, the air pressure will decrease. This relatively negative pressure will cause air to enter the container until the pressure is equalized. During inspiration of air, the diaphragm contracts and the lungs expand drawn by pleurae through surface tension and negative pressure. When the lungs expand, air pressure becomes negative compared to atmospheric pressure and air will flow from the area of higher pressure to fill the lungs. Forced inspiration for speech uses accessory muscles to elevate the rib cage and enlarge the thoracic cavity in the vertical and lateral dimensions. During forced expiration for speech, muscles of the trunk and abdomen reduce the size of the thoracic cavity by
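The Boyle's-law relationship invoked above can be checked numerically. This is a minimal sketch assuming ideal-gas behaviour at constant temperature; the lung volumes used are illustrative, not physiological measurements from the source.

```python
# Boyle's law at constant temperature: P1 * V1 = P2 * V2.
# Expanding the thoracic cavity lowers pressure, drawing air in.

def pressure_after_expansion(p1_kpa, v1_l, v2_l):
    """New pressure after an isothermal volume change."""
    return p1_kpa * v1_l / v2_l

# e.g. gas at atmospheric pressure (101.3 kPa) in a cavity expanding
# from 2.4 L to 2.9 L during inspiration (illustrative volumes):
p2 = pressure_after_expansion(101.3, 2.4, 2.9)
print(f"pressure falls to {p2:.1f} kPa, below atmospheric")  # ~83.8 kPa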
Document 1:::
Lung receptors sense irritation or inflammation in the bronchi and alveoli.
Document 2:::
The control of ventilation is the physiological mechanisms involved in the control of breathing, which is the movement of air into and out of the lungs. Ventilation facilitates respiration. Respiration refers to the utilization of oxygen and balancing of carbon dioxide by the body as a whole, or by individual cells in cellular respiration.
The most important function of breathing is the supplying of oxygen to the body and balancing of the carbon dioxide levels. Under most conditions, the partial pressure of carbon dioxide (PCO2), or concentration of carbon dioxide, controls the respiratory rate.
The peripheral chemoreceptors that detect changes in the levels of oxygen and carbon dioxide are located in the arterial aortic bodies and the carotid bodies. Central chemoreceptors are primarily sensitive to changes in the pH of the blood (resulting from changes in the levels of carbon dioxide), and they are located on the medulla oblongata near the medullary respiratory groups of the respiratory center.
Information from the peripheral chemoreceptors is conveyed along nerves to the respiratory groups of the respiratory center. There are four respiratory groups, two in the medulla and two in the pons. The two groups in the pons are known as the pontine respiratory group.
Dorsal respiratory group – in the medulla
Ventral respiratory group – in the medulla
Pneumotaxic center – various nuclei of the pons
Apneustic center – nucleus of the pons
From the respiratory center, the muscles of respiration, in particular the diaphragm, are activated to cause air to move in and out of the lungs.
Control of respiratory rhythm
Ventilatory pattern
Breathing is normally an unconscious, involuntary, automatic process. The pattern of motor stimuli during breathing can be divided into an inhalation stage and an exhalation stage. Inhalation shows a sudden, ramped increase in motor discharge to the respiratory muscles (and the pharyngeal constrictor muscles). Before the end of inh
Document 3:::
In acid-base physiology, the Davenport diagram is a graphical tool, developed by Horace W. Davenport, that allows a clinician or investigator to describe blood bicarbonate concentrations and blood pH following a respiratory and/or metabolic acid-base disturbance. The diagram depicts a three-dimensional surface describing all possible states of chemical equilibria between gaseous carbon dioxide, aqueous bicarbonate and aqueous protons at the physiologically complex interface of the alveoli of the lungs and the alveolar capillaries. Although the surface represented in the diagram is experimentally determined, the Davenport diagram is rarely used in the clinical setting, but allows the investigator to envision the effects of physiological changes on blood acid-base chemistry. For clinical use there are two recent innovations: an acid-base diagram that provides text descriptions for the abnormalities, and a high-altitude version that provides text descriptions appropriate for the altitude.
Derivation
When a sample of blood is exposed to air, either in the alveoli of the lung or in an in vitro laboratory experiment, carbon dioxide in the air rapidly enters into equilibrium with carbon dioxide derivatives and other species in the aqueous solution. Figure 1 illustrates the most important equilibrium reactions of carbon dioxide in blood relating to acid-base physiology:
Note that in this equation, the HB/B- buffer system represents all non-bicarbonate buffers present in the blood, such as hemoglobin in its various protonated and deprotonated states. Because many different non-bicarbonate buffers are present in human blood, the final equilibrium state reached at any given pCO2 is highly complex and cannot be readily predicted using theory alone. By depicting experimental results, the Davenport diagram provides a simple approach to describing the behavior of this complex system.
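A hedged computational sketch of the bicarbonate/CO2 equilibrium that the diagram depicts: the standard Henderson-Hasselbalch equation (pKa = 6.1, CO2 solubility 0.03 mmol/L per mmHg) relates blood pH to bicarbonate and pCO2. Note this deliberately ignores the non-bicarbonate buffers emphasized above, which is exactly why the full system needs the experimentally determined Davenport surface.

```python
import math

def blood_ph(hco3_mmol_l, pco2_mmhg, pka=6.1, s_co2=0.03):
    """pH from the bicarbonate/CO2 pair via Henderson-Hasselbalch."""
    return pka + math.log10(hco3_mmol_l / (s_co2 * pco2_mmhg))

# Normal arterial values: [HCO3-] = 24 mmol/L, pCO2 = 40 mmHg
print(f"pH = {blood_ph(24, 40):.2f}")   # ~7.40

# A respiratory acidosis (pCO2 rises to 60 mmHg, HCO3- unchanged)
print(f"pH = {blood_ph(24, 60):.2f}")   # ~7.22
```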
Figure 2 shows a Davenport diagram as commonly depicted in textbooks and the literature. To un
Document 4:::
The thoracic diaphragm, or simply the diaphragm (; ), is a sheet of internal skeletal muscle in humans and other mammals that extends across the bottom of the thoracic cavity. The diaphragm is the most important muscle of respiration, and separates the thoracic cavity, containing the heart and lungs, from the abdominal cavity: as the diaphragm contracts, the volume of the thoracic cavity increases, creating a negative pressure there, which draws air into the lungs. Its high oxygen consumption is noted by the many mitochondria and capillaries present; more than in any other skeletal muscle.
The term diaphragm in anatomy, created by Gerard of Cremona, can refer to other flat structures such as the urogenital diaphragm or pelvic diaphragm, but "the diaphragm" generally refers to the thoracic diaphragm. In humans, the diaphragm is slightly asymmetric: its right half sits higher (superior) than the left half, since the large liver rests beneath the right half of the diaphragm. There is also speculation that the diaphragm is lower on the other side due to the heart's presence.
Other mammals have diaphragms, and other vertebrates such as amphibians and reptiles have diaphragm-like structures, but important details of the anatomy may vary, such as the position of the lungs in the thoracic cavity.
Structure
The diaphragm is an upward curved, c-shaped structure of muscle and fibrous tissue that separates the thoracic cavity from the abdomen. The superior surface of the dome forms the floor of the thoracic cavity, and the inferior surface the roof of the abdominal cavity.
As a dome, the diaphragm has peripheral attachments to structures that make up the abdominal and chest walls. The muscle fibres from these attachments converge in a central tendon, which forms the crest of the dome. Its peripheral part consists of muscular fibers that take origin from the circumference of the inferior thoracic aperture and converge to be inserted into a central tendon.
The muscle fibres of t
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What, in mammalian lungs, increases the surface area for gas exchange?
A. thorax
B. alveoli
C. bronchioles
D. bronchi
Answer:
|
|
sciq-9764
|
multiple_choice
|
What do we call the solid form of hydrocarbons?
|
[
"coal",
"methane",
"bauxite",
"shale"
] |
A
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Bisnorhopanes (BNH) are a group of demethylated hopanes found in oil shales across the globe and can be used for understanding depositional conditions of the source rock. The most common member, 28,30-bisnorhopane, can be found in high concentrations in petroleum source rocks, most notably the Monterey Shale, as well as in oil and tar samples. 28,30-Bisnorhopane was first identified in samples from the Monterey Shale Formation in 1985. It occurs in abundance throughout the formation and appears in stratigraphically analogous locations along the California coast. Since its identification and analysis, 28,30-bisnorhopane has been discovered in oil shales around the globe, including lacustrine and offshore deposits of Brazil, silicified shales of the Eocene in Gabon, the Kimmeridge Clay Formation in the North Sea, and in Western Australian oil shales.
Chemistry
28,30-bisnorhopane exists in three epimers: 17α,18α,21β(H), 17β,18α,21α(H), and 17β,18α,21β(H). During GC-MS, the three epimers coelute and are nearly indistinguishable. However, mass spectral fragmentation of the 28,30-bisnorhopane is predominantly characterized by m/z 191, 177, and 163. The ratios of the 163/191 fragments can be used to distinguish the epimers, where the βαβ orientation has the highest m/z 163/191 ratio. Further, the D/E ring ratios can be used to create a hierarchy of epimer maturity. From this, it is believed that the ααβ epimer is the first formed diagenetically, supported also by its dominance in younger shales. 28,30-bisnorhopane is generated independently of kerogen; it derives instead from bitumen, occurring unbound as free oil hydrocarbons. As such, as oil generation increases with source maturation, the concentration of 28,30-bisnorhopane decreases. Bisnorhopane may not be a reliable diagnostic for oil maturity due to microbial biodegradation.
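The fragment-ratio criterion above can be sketched as a simple ranking heuristic. The peak intensities below are hypothetical, and real epimer assignment would of course require calibration against standards; this only illustrates the "highest 163/191 ratio" rule stated in the text.

```python
# Heuristic sketch: rank coeluting peaks by their m/z 163/191 ratio;
# under the rule above, the highest-ratio peak is the best candidate
# for the βαβ epimer. Intensities are invented for illustration.

def mz_ratio(intensity_163, intensity_191):
    return intensity_163 / intensity_191

peaks = {"peak_A": (1200, 5100), "peak_B": (2400, 4800), "peak_C": (900, 5500)}
ranked = sorted(peaks, key=lambda p: mz_ratio(*peaks[p]), reverse=True)
print("most likely βαβ epimer:", ranked[0])  # peak_B (ratio 0.50)
```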
Nomenclature
Norhopanes are a family of demethylated hopanes, identical to the methylated hopane structure, minus indicated desmet
Document 2:::
Phyllocladane is a tricyclic diterpane which is generally found in gymnosperm resins. It has a formula of C20H34 and a molecular weight of 274.4840. As a biomarker, it can be used to learn about the gymnosperm input into a hydrocarbon deposit, and about the age of the deposit in general. It indicates a terrigenous origin of the source rock. Diterpanes such as phyllocladane are found in source rocks as early as the middle and late Devonian periods, which indicates that any rock containing them can be no older than approximately 360 Ma. Phyllocladane is commonly found in lignite, and like other resinites derived from gymnosperms, is naturally enriched in 13C. This enrichment is a result of the enzymatic pathways used to synthesize the compound.
The compound can be identified by GC-MS. A peak of m/z 123 is indicative of tricyclic diterpenoids in general, and phyllocladane in particular is further characterized by strong peaks at m/z 231 and m/z 189. The presence of phyllocladane and its abundance relative to other tricyclic diterpanes can be used to differentiate between various oil fields.
Document 3:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 4:::
Clathrate hydrates, or gas hydrates, clathrates, or hydrates, are crystalline water-based solids physically resembling ice, in which small non-polar molecules (typically gases) or polar molecules with large hydrophobic moieties are trapped inside "cages" of hydrogen bonded, frozen water molecules. In other words, clathrate hydrates are clathrate compounds in which the host molecule is water and the guest molecule is typically a gas or liquid. Without the support of the trapped molecules, the lattice structure of hydrate clathrates would collapse into conventional ice crystal structure or liquid water. Most low molecular weight gases, including O2, H2, N2, CO2, CH4, H2S, Ar, Kr, and Xe, as well as some higher hydrocarbons and freons, will form hydrates at suitable temperatures and pressures. Clathrate hydrates are not officially chemical compounds, as the enclathrated guest molecules are never bonded to the lattice. The formation and decomposition of clathrate hydrates are first order phase transitions, not chemical reactions. Their detailed formation and decomposition mechanisms on a molecular level are still not well understood.
Clathrate hydrates were first documented in 1810 by Sir Humphry Davy who found that water was a primary component of what was earlier thought to be solidified chlorine.
Clathrates have been found to occur naturally in large quantities. Around 6.4 trillion (6.4×10¹²) tonnes of methane is trapped in deposits of methane clathrate on the deep ocean floor. Such deposits can be found on the Norwegian continental shelf in the northern headwall flank of the Storegga Slide. Clathrates can also exist as permafrost, as at the Mallik gas hydrate site in the Mackenzie Delta of the northwestern Canadian Arctic. These natural gas hydrates are seen as a potentially vast energy resource and several countries have dedicated national programs to develop this energy resource. Clathrate hydrate has also been of great interest as a technology enabler for many applications like seawater desalina
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do we call the solid form of hydrocarbons?
A. coal
B. methane
C. bauxite
D. shale
Answer:
|
|
scienceQA-7692
|
multiple_choice
|
What do these two changes have in common?
a piece of avocado turning brown
roasting a marshmallow over a campfire
|
[
"Both are caused by heating.",
"Both are chemical changes.",
"Both are caused by cooling.",
"Both are only physical changes."
] |
B
|
Step 1: Think about each change.
A piece of avocado turning brown is a chemical change. The avocado reacts with oxygen in the air to form a different type of matter.
If you scrape off the brown part of the avocado, the inside will still be green. The inside hasn't touched the air. So the chemical change hasn't happened to that part of the avocado.
Roasting a marshmallow is a chemical change. The type of matter on the outside of the marshmallow changes. As a marshmallow is roasted, it turns brown and crispy.
Step 2: Look at each answer choice.
Both are only physical changes.
Both changes are chemical changes. They are not physical changes.
Both are chemical changes.
Both changes are chemical changes. The type of matter before and after each change is different.
Both are caused by heating.
Roasting is caused by heating. But a piece of avocado turning brown is not.
Both are caused by cooling.
Neither change is caused by cooling.
|
Relevant Documents:
Document 0:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
A combustible material is a material that can burn (i.e., sustain a flame) in air under certain conditions. A material is flammable if it ignites easily at ambient temperatures. In other words, a combustible material ignites with some effort and a flammable material catches fire immediately on exposure to flame.
The degree of flammability in air depends largely upon the volatility of the material - this is related to its composition-specific vapour pressure, which is temperature dependent. The quantity of vapour produced can be enhanced by increasing the surface area of the material forming a mist or dust. Take wood as an example. Finely divided wood dust can undergo explosive combustion and produce a blast wave. A piece of paper (made from wood) catches on fire quite easily. A heavy oak desk is much harder to ignite, even though the wood fibre is the same in all three materials.
Common sense (and indeed scientific consensus until the mid-1700s) would seem to suggest that material "disappears" when burned, as only the ash is left. In fact, there is an increase in weight because the flammable material reacts (or combines) chemically with oxygen, which also has mass. The original mass of flammable material plus the mass of the oxygen required for combustion equals the mass of the combustion products (ash, water, carbon dioxide, and other gases). Antoine Lavoisier, one of the pioneers in these early insights, stated that "nothing is lost, nothing is created, everything is transformed", which would later be known as the law of conservation of mass. Lavoisier used the experimental fact that some metals gained mass when they burned to support his ideas.
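The mass-balance point can be verified with simple stoichiometric arithmetic. Methane combustion is used here as a stand-in example (the source discusses wood, whose composition is too variable for a one-line balance); molar masses are standard values.

```python
# Conservation of mass in combustion: CH4 + 2 O2 -> CO2 + 2 H2O.
# Molar masses in g/mol.

reactants = {"CH4": 16.04, "O2 (x2)": 2 * 32.00}
products = {"CO2": 44.01, "H2O (x2)": 2 * 18.02}

mass_in = sum(reactants.values())    # 16.04 + 64.00 = 80.04 g
mass_out = sum(products.values())    # 44.01 + 36.04 = 80.05 g

# Equal to within rounding of the molar masses: nothing is lost,
# everything is transformed (Lavoisier).
print(f"reactants: {mass_in:.2f} g, products: {mass_out:.2f} g")
```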
Definitions
Historically, flammable, inflammable and combustible meant capable of burning. The word "inflammable" came through French from the Latin inflammāre = "to set fire to", where the Latin preposition "in-" means "in" as in "indoctrinate", rather than "not" as in "invisible" and "ineligible".
The word "inflammable" may be er
Document 3:::
Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria.
Introduction
Traditional exam script marking began in Cambridge 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.)
Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental.
The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel
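The scaling step, turning raw pairwise "which is better?" judgements into a ranked distribution, can be sketched in code. The source does not specify the fitting algorithm, so the sketch below uses a Bradley-Terry model fitted by gradient ascent, one common choice for pairwise-comparison data; the adaptive pair-selection part of ACJ is omitted.

```python
import math

def fit_bradley_terry(comparisons, n_items, iters=500, lr=0.1):
    """comparisons: list of (winner, loser) index pairs."""
    theta = [0.0] * n_items              # log-strength per item
    for _ in range(iters):
        grad = [0.0] * n_items
        for w, l in comparisons:
            # P(w beats l) under Bradley-Terry with log-strengths theta
            p_w = 1.0 / (1.0 + math.exp(theta[l] - theta[w]))
            grad[w] += 1.0 - p_w         # push winner up
            grad[l] -= 1.0 - p_w         # push loser down
        theta = [t + lr * g for t, g in zip(theta, grad)]
        mean = sum(theta) / n_items      # fix the arbitrary zero point
        theta = [t - mean for t in theta]
    return theta

# Three scripts judged pairwise: 0 beat 1 twice, 1 beat 2, 0 beat 2.
scores = fit_bradley_terry([(0, 1), (0, 1), (1, 2), (0, 2)], 3)
print(sorted(range(3), key=lambda i: -scores[i]))  # rank order: [0, 1, 2]
```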
Document 4:::
In physics, a dynamical system is said to be mixing if the phase space of the system becomes strongly intertwined, according to at least one of several mathematical definitions. For example, a measure-preserving transformation T is said to be strong mixing if

$$\lim_{n\to\infty} \mu\left(T^{-n}A \cap B\right) = \mu(A)\,\mu(B)$$

whenever A and B are any measurable sets and μ is the associated measure. Other definitions are possible, including weak mixing and topological mixing.
The mathematical definition of mixing is meant to capture the notion of physical mixing. A canonical example is the Cuba libre: suppose one is adding rum (the set A) to a glass of cola. After stirring the glass, the bottom half of the glass (the set B) will contain rum, and it will be in the same proportion as elsewhere in the glass. The mixing is uniform: no matter which region B one looks at, some of A will be in that region. A far more detailed, but still informal description of mixing can be found in the article on mixing (mathematics).
Every mixing transformation is ergodic, but there are ergodic transformations which are not mixing.
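The defining limit can be checked numerically on a standard strong-mixing example. The sketch below assumes the doubling map T(x) = 2x mod 1 on [0,1) with Lebesgue measure (a textbook strong-mixing system) and estimates μ(T⁻ⁿA ∩ B) by Monte Carlo; intervals A and B are arbitrary illustrative choices.

```python
import random

def T_n(x, n):
    """Apply the doubling map n times."""
    for _ in range(n):
        x = (2.0 * x) % 1.0
    return x

A = (0.0, 0.3)   # mu(A) = 0.3
B = (0.5, 0.9)   # mu(B) = 0.4, so mu(A)*mu(B) = 0.12

random.seed(0)
samples = [random.random() for _ in range(100_000)]
for n in (0, 2, 5, 10):
    # x lies in T^-n(A) ∩ B iff x is in B and T^n(x) is in A
    hits = sum(1 for x in samples
               if B[0] <= x < B[1] and A[0] <= T_n(x, n) < A[1])
    print(f"n={n:2d}: estimate {hits/len(samples):.3f}  (target 0.120)")
```

The estimates start at 0 (A and B are disjoint, so n = 0 gives μ(A ∩ B) = 0) and converge toward μ(A)μ(B) as n grows, which is the content of the strong-mixing definition.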
Physical mixing
The mixing of gases or liquids is a complex physical process, governed by a convective diffusion equation that may involve non-Fickian diffusion as in spinodal decomposition. The convective portion of the governing equation contains fluid motion terms that are governed by the Navier–Stokes equations. When fluid properties such as viscosity depend on composition, the governing equations may be coupled. There may also be temperature effects. It is not clear that fluid mixing processes are mixing in the mathematical sense.
Small rigid objects (such as rocks) are sometimes mixed in a rotating drum or tumbler. The 1969 Selective Service draft lottery was carried out by mixing plastic capsules which contained a slip of paper (marked with a day of the year).
See also
Miscibility
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
a piece of avocado turning brown
roasting a marshmallow over a campfire
A. Both are caused by heating.
B. Both are chemical changes.
C. Both are caused by cooling.
D. Both are only physical changes.
Answer:
|
sciq-4066
|
multiple_choice
|
What do ants and termites eat to help them digest wood and leaves?
|
[
"pollen",
"berries",
"fungi",
"yeast"
] |
C
|
Relevant Documents:
Document 0:::
Myrmecophytes (; literally "ant-plant") are plants that live in a mutualistic association with a colony of ants. There are over 100 different genera of myrmecophytes. These plants possess structural adaptations that provide ants with food and/or shelter. These specialized structures include domatia, food bodies, and extrafloral nectaries. In exchange for food and shelter, ants aid the myrmecophyte in pollination, seed dispersal, gathering of essential nutrients, and/or defense. Specifically, domatia adapted to ants may be called myrmecodomatia.
Mutualism
Myrmecophytes share a mutualistic relationship with ants, benefiting both the plants and ants. This association may be either facultative or obligate.
Obligate
In obligate mutualisms, both of the organisms involved are interdependent; they cannot survive on their own. An example of this type of mutualism can be found in the plant genus Macaranga. All species of this genus provide food for ants in various forms, but only the obligate species produce domatia. Some of the most common species of myrmecophytic Macaranga interact with ants in the genus Crematogaster. C. borneensis has been found to be completely dependent on its partner plant, unable to survive without the provided nesting spaces and food bodies. In laboratory tests, the worker ants did not survive away from the plants, and in their natural habitat they were never found anywhere else.
Facultative
Facultative mutualism is a type of relationship where the survival of both parties (plant and ants, in this instance) is not dependent upon the interaction. Both organisms can survive without the other species. Facultative mutualisms most often occur in plants that have extrafloral nectaries but no other specialized structures for the ants. These non-exclusive nectaries allow a variety of animal species to interact with the plant. Facultative relationships can also develop between non-native plant and ant species, where co-evolution
Document 1:::
Termites are a group of detritophagous eusocial insects which consume a wide variety of decaying plant material, generally in the form of wood, leaf litter, and soil humus. They are distinguished by their moniliform antennae and the soft-bodied and typically unpigmented worker caste for which they have been commonly termed "white ants"; however, they are not ants, to which they are distantly related. About 2,972 extant species are currently described, 2,105 of which are members of the family Termitidae.
Termites comprise the infraorder Isoptera, or alternatively the epifamily Termitoidae, within the order Blattodea (along with cockroaches). Termites were once classified in a separate order from cockroaches, but recent phylogenetic studies indicate that they evolved from cockroaches, as they are deeply nested within the group, and the sister group to wood eating cockroaches of the genus Cryptocercus. Previous estimates suggested the divergence took place during the Jurassic or Triassic. More recent estimates suggest that they have an origin during the Late Jurassic, with the first fossil records in the Early Cretaceous.
Similarly to ants and some bees and wasps from the separate order Hymenoptera, most termites have an analogous "worker" and "soldier" caste system consisting of mostly sterile individuals which are physically and behaviorally distinct. Unlike ants, most colonies begin from sexually mature individuals known as the "king" and "queen" that together form a lifelong monogamous pair. Also unlike ants, which undergo a complete metamorphosis, termites undergo an incomplete metamorphosis that proceeds through egg, nymph, and adult stages. Termite colonies are commonly described as superorganisms due to the collective behaviors of the individuals which form a self-governing entity: the colony itself. Their colonies range in size from a few hundred individuals to enormous societies with several million individuals. Most species are rarely seen, having a crypti
Document 2:::
An ant garden is a mutualistic interaction between certain species of arboreal ants and various epiphytic plants. It is a structure made in the tree canopy by the ants that is filled with debris and other organic matter in which epiphytes grow. The ants benefit from this arrangement by having a stable framework on which to build their nest while the plants benefit by obtaining nutrients from the soil and from the moisture retained there.
Description
Epiphytes are common in tropical rain forest and in cloud forest. An epiphyte normally derives its moisture and nutrients from the air, rain, mist and dew. Nitrogenous matter is in short supply and the epiphytes benefit significantly from the nutrients in the ant garden. The ant garden is made from "carton", a mixture of vegetable fibres, leaf debris, refuse, glandular secretions and ant faeces. The ants use this material to build their nests among the branches of the trees, to shelter the hemipteran insects that they tend in order to feed on their honeydew, and to make the pockets of material in which the epiphytes grow.
The ants harvest seeds from the epiphytic plants and deposit them in the carton material. The plants have evolved various traits to encourage ants to disperse their seeds by producing chemical attractants. Eleven unrelated epiphytes that grow in ant gardens have been found to contain methyl salicylate (oil of wintergreen) and it seems likely that this compound is an ant attractant.
Examples
Species of ant that make gardens include Crematogaster carinata, Camponotus femoratus and Solenopsis parabioticus, all of which are parabiotic species which routinely share their nests with unrelated species of ant. Epiphytic plants that they grow include various members of the Araceae, Bromeliaceae, Cactaceae, Gesneriaceae, Moraceae, Piperaceae and Solanaceae. Epiphytic plants in the genus Codonanthopsis, including those formerly placed in Codonanthe, grow almost exclusively in ant gardens, often associated with
Document 3:::
Myrmecotrophy is the ability of plants to obtain nutrients from ants, a form of mutualism. This behaviour helps vegetation colonize harsh environments. In myrmecophytes such as Hydnophytum and Myrmecodia, the dead remains of insects discarded by the ants are absorbed through lenticular warts. The ants in turn benefit from a secure location in which to form their colony. The pitcher plant Nepenthes bicalcarata obtains an estimated 42% of its total foliar nitrogen from ant waste.
Document 4:::
Consumer–resource interactions are the core motif of ecological food chains or food webs, and are an umbrella term for a variety of more specialized types of biological species interactions including prey-predator (see predation), host-parasite (see parasitism), plant-herbivore and victim-exploiter systems. These kinds of interactions have been studied and modeled by population ecologists for nearly a century. Species at the bottom of the food chain, such as algae and other autotrophs, consume non-biological resources, such as minerals and nutrients of various kinds, and they derive their energy from light (photons) or chemical sources. Species higher up in the food chain survive by consuming other species and can be classified by what they eat and how they obtain or find their food.
Classification of consumer types
The standard categorization
Various terms have arisen to define consumers by what they eat, such as meat-eating carnivores, fish-eating piscivores, insect-eating insectivores, plant-eating herbivores, seed-eating granivores, and fruit-eating frugivores; omnivores eat both meat and plants. An extensive classification of consumer categories based on a list of feeding behaviors exists.
The Getz categorization
Another way of categorizing consumers, proposed by South African American ecologist Wayne Getz, is based on a biomass transformation web (BTW) formulation that organizes resources into five components: live and dead animal, live and dead plant, and particulate (i.e. broken down plant and animal) matter. It also distinguishes between consumers that gather their resources by moving across landscapes from those that mine their resources by becoming sessile once they have located a stock of resources large enough for them to feed on during completion of a full life history stage.
In Getz's scheme, words for miners are of Greek etymology and words for gatherers are of Latin etymology. Thus a bestivore, such as a cat, preys on live animal
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do ants and termites eat to help them digest wood and leaves?
A. pollen
B. berries
C. fungi
D. yeast
Answer:
|
|
sciq-5941
|
multiple_choice
|
The human skeleton is an endoskeleton that consists of 206 bones in the adult. It has five main functions: providing support to the body, storing minerals and lipids, producing blood cells, protecting internal organs, and this?
|
[
"Forward movement",
"allowing movement",
"more movement",
"enough movement"
] |
B
|
Relevant Documents:
Document 0:::
Neuro Biomechanics is based upon research from bioengineering, neurosurgery, orthopedic surgery, and biomechanics. It is utilized by neurosurgeons, orthopedic surgeons, and primarily by integrated physical medicine practitioners. Practitioners focus on restoring the biomechanics of the skeletal system in order to measurably improve nervous system function, health, and quality of life, and to reduce pain and slow the progression of degenerative joint and disc disease.
Neuro: of or having to do with the nervous system. Nervous system: An organ system that coordinates the activities of muscles, monitors organs, constructs and processes data received from the senses and initiates actions. The human nervous system coordinates the functions of itself and all organ systems including but not limited to the cardiovascular system, respiratory system, skin, digestive system, immune system, hormonal, metabolic, musculoskeletal, endocrine system, blood and reproductive system. Optimal function of the organism as a whole depends upon the proper function of the nervous system.
Biomechanics: (biology, physics) The branch of biophysics that deals with the mechanics of the human or animal body; especially concerned with muscles and the skeleton. The study of biomechanical influences upon nervous system function and load bearing joints.
Research:
Research on established ideal mechanical models for the human locomotor system.
Panjabi MM, Journal of Biomechanics, 1974. A note on defining body parts configurations
Gracovetsky S. Spine 1986; The Optimum Spine
Yoganandan, Spine 1996
Harrison. Spine 2004 Modeling of the Sagittal Cervical Spine as a Method to Discriminate Hypolordosis: Results of Elliptical and Circular Modeling in 72 Asymptomatic Subjects, 52 Acute Neck Pain Subjects, and 70 Chronic Neck Pain Subjects; Spine 2004
Panjabi et al. Spine 1997 Whiplash produces an S-shaped curve...
Harrison DE, JMPT 2003, Increasing
Document 1:::
Comparative foot morphology involves comparing the form of distal limb structures of a variety of terrestrial vertebrates. Understanding the role that the foot plays for each type of organism must take account of the differences in body type, foot shape, arrangement of structures, loading conditions and other variables. However, similarities also exist among the feet of many different terrestrial vertebrates. The paw of the dog, the hoof of the horse, the manus (forefoot) and pes (hindfoot) of the elephant, and the foot of the human all share some common features of structure, organization and function. Their foot structures function as the load-transmission platform which is essential to balance, standing and types of locomotion (such as walking, trotting, galloping and running).
The discipline of biomimetics applies the information gained by comparing the foot morphology of a variety of terrestrial vertebrates to human-engineering problems. For instance, it may provide insights that make it possible to alter the foot's load transmission in people who wear an external orthosis because of paralysis from spinal-cord injury, or who use a prosthesis following the diabetes-related amputation of a leg. Such knowledge can be incorporated in technology that improves a person's balance when standing; enables them to walk more efficiently, and to exercise; or otherwise enhances their quality of life by improving their mobility.
Structure
Limb and foot structure of representative terrestrial vertebrates:
Variability in scaling and limb coordination
There is considerable variation in the scale and proportions of body and limb, as well as the nature of loading, during standing and locomotion both among and between quadrupeds and bipeds. The anterior-posterior body mass distribution varies considerably among mammalian quadrupeds, which affects limb loading. When standing, many terrestrial quadrupeds support more of their weight on their forelimbs rather than their hi
Document 2:::
The American Society of Biomechanics (ASB) is a scholarly society that focuses on biomechanics across a variety of academic fields. It was founded in 1977 by a group of scientists and clinicians. The ASB holds an annual conference as an arena to disseminate and learn about the most recent progress in the field, to distribute awards to recognize excellent work, and to engage in public outreach to expand the impact of its members.
Conferences
The society hosts an annual conference that takes place in North America (usually USA). These conferences are periodically joint conferences held in conjunction with the International Society of Biomechanics (ISB), the North American Congress on Biomechanics (NACOB), and the World Congress of Biomechanics (WCB). The annual conference, when not partnered with another conference, receives around 700 to 800 abstract submissions per year, with attendees in approximately the same numbers. The first conference was held in 1977.
Often, work presented at these conferences achieves media attention due to the ‘public interest’ nature of the findings or that new devices are introduced there. Examples include:
the effect of tablet reading on cervical spine posture;
the squeak of the basketball shoe;
‘underwear’ to address back-pain;
recovery after exercise;
exoskeleton boots for joint pain during exercise;
how flamingos stand on one leg.
National Biomechanics Day
The ASB is instrumental in promoting National Biomechanics Day (NBD), which has received international recognition.
In New Zealand, Massey University attracted NZ$48,000 of national funding
through the Unlocking Curious Minds programme to promote National Biomechanics Day, with the aim to engage 1,100 students from lower-decile schools in an experiential learning day focused on the science of biomechanics.
It was first held in 2016 on April 7, and consisted of ‘open house’ visits from middle and high school students to biomechanics research and teaching laboratories a
Document 3:::
The study of animal locomotion is a branch of biology that investigates and quantifies how animals move.
Kinematics
Kinematics is the study of how objects move, whether they are mechanical or living. In animal locomotion, kinematics is used to describe the motion of the body and limbs of an animal. The goal is ultimately to understand how the movement of individual limbs relates to the overall movement of an animal within its environment. Below highlights the key kinematic parameters used to quantify body and limb movement for different modes of animal locomotion.
Quantifying locomotion
Walking
Legged locomotion is a dominant form of terrestrial locomotion, that is, movement on land. The motion of limbs is quantified by intralimb and interlimb kinematic parameters. Intralimb kinematic parameters capture movement aspects of an individual limb, whereas interlimb kinematic parameters characterize coordination across limbs. Interlimb kinematic parameters are also referred to as gait parameters. The following are key intralimb and interlimb kinematic parameters of walking:
Characterizing swing and stance transitions
The calculation of the above intra- and interlimb kinematics relies on classifying when the legs of an animal touch and leave the ground. Stance onset is defined as the moment a leg first contacts the ground, whereas swing onset occurs when the leg leaves the ground. Typically, the transition between swing and stance, and vice versa, is determined by first recording the leg's motion with high-speed videography (see the description of high-speed videography below for more details). From the video recordings, a marker (usually placed at the distal tip of the leg) is then tracked manually or in an automated fashion to obtain the position signal of the leg's movement. The position signal associated with each leg is then normalized to that of a marker on the body, transforming the leg position into body-centered coordinates.
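One common heuristic for automating this swing/stance classification is to threshold the speed of the tracked leg-tip marker: the tip is roughly stationary in the ground (camera) frame during stance and moves quickly during swing. Below is a minimal sketch of that heuristic, assuming the position signal has already been extracted from video; the function name and threshold value are hypothetical, not prescribed by the text:

```python
import numpy as np

def classify_stance_swing(tip_x, fps, vel_threshold=5.0):
    """Label each video frame as swing (True) or stance (False).

    tip_x: per-frame leg-tip position (e.g., mm, in the ground/camera frame).
    fps: video frame rate. During stance the tip is roughly stationary,
    so frames where its speed exceeds a threshold are labelled swing.
    The 5 mm/s threshold is illustrative and would be tuned per data set.
    """
    speed = np.abs(np.gradient(np.asarray(tip_x, dtype=float))) * fps
    swing = speed > vel_threshold
    swing_onsets = np.flatnonzero(~swing[:-1] & swing[1:]) + 1   # lift-off frames
    stance_onsets = np.flatnonzero(swing[:-1] & ~swing[1:]) + 1  # touchdown frames
    return swing, swing_onsets, stance_onsets
```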
Document 4:::
Proprioception, also called kinaesthesia (or kinesthesia), is the sense of self-movement, force, and body position.
Proprioception is mediated by proprioceptors, mechanosensory neurons located within muscles, tendons, and joints. Most animals possess multiple subtypes of proprioceptors, which detect distinct kinematic parameters, such as joint position, movement, and load. Although all mobile animals possess proprioceptors, the structure of the sensory organs can vary across species.
Proprioceptive signals are transmitted to the central nervous system, where they are integrated with information from other sensory systems, such as the visual system and the vestibular system, to create an overall representation of body position, movement, and acceleration. In many animals, sensory feedback from proprioceptors is essential for stabilizing body posture and coordinating body movement.
System overview
In vertebrates, limb movement and velocity (muscle length and its rate of change) are encoded by one group of sensory neurons (type Ia sensory fibers), while another type encodes static muscle length (group II neurons). These two types of sensory neurons compose muscle spindles. There is a similar division of encoding in invertebrates; different subgroups of neurons of the chordotonal organ encode limb position and velocity.
To determine the load on a limb, vertebrates use sensory neurons in the Golgi tendon organs: type Ib afferents. These proprioceptors are activated at given muscle forces, which indicate the resistance that muscle is experiencing. Similarly, invertebrates have a mechanism to determine limb load: the Campaniform sensilla. These proprioceptors are active when a limb experiences resistance.
A third role for proprioceptors is to determine when a joint is at a specific position. In vertebrates, this is accomplished by Ruffini endings and Pacinian corpuscles. These proprioceptors are activated when the joint is at a threshold position, usually at the extremes of its range.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The human skeleton is an endoskeleton that consists of 206 bones in the adult. It has five main functions: providing support to the body, storing minerals and lipids, producing blood cells, protecting internal organs, and this?
A. Forward movement
B. allowing movement
C. more movement
D. enough movement
Answer:
|
|
scienceQA-8095
|
multiple_choice
|
What do these two changes have in common?
photosynthesis
cooking an egg
|
[
"Both are only physical changes.",
"Both are chemical changes.",
"Both are caused by cooling.",
"Both are caused by heating."
] |
B
|
Step 1: Think about each change.
Photosynthesis is a chemical change. Plants make sugar using carbon dioxide, water, and energy from sunlight.
Cooking an egg is a chemical change. The heat causes the matter in the egg to change. Cooked egg and raw egg are different types of matter.
Step 2: Look at each answer choice.
Both are only physical changes.
Both changes are chemical changes. They are not physical changes.
Both are chemical changes.
Both changes are chemical changes. The type of matter before and after each change is different.
Both are caused by heating.
Cooking is caused by heating. But photosynthesis is not.
Both are caused by cooling.
Neither change is caused by cooling.
|
Relevant Documents:
Document 0:::
Biological processes are those processes that are vital for an organism to live, and that shape its capacities for interacting with its environment. Biological processes are made of many chemical reactions or other events that are involved in the persistence and transformation of life forms. Metabolism and homeostasis are examples.
Biological processes within an organism can also work as bioindicators. Scientists are able to look at an individual's biological processes to monitor the effects of environmental changes.
Regulation of biological processes occurs when any process is modulated in its frequency, rate or extent. Biological processes are regulated by many means; examples include the control of gene expression, protein modification or interaction with a protein or substrate molecule.
Homeostasis: regulation of the internal environment to maintain a constant state; for example, sweating to reduce temperature
Organization: being structurally composed of one or more cells – the basic units of life
Metabolism: transformation of energy by converting chemicals and energy into cellular components (anabolism) and decomposing organic matter (catabolism). Living things require energy to maintain internal organization (homeostasis) and to produce the other phenomena associated with life.
Growth: maintenance of a higher rate of anabolism than catabolism. A growing organism increases in size in all of its parts, rather than simply accumulating matter.
Response to stimuli: a response can take many forms, from the contraction of a unicellular organism to external chemicals, to complex reactions involving all the senses of multicellular organisms. A response is often expressed by motion; for example, the leaves of a plant turning toward the sun (phototropism), and chemotaxis.
Reproduction: the ability to produce new individual organisms, either asexually from a single parent organism or sexually from two parent organisms.
Interaction between organisms: the processes
Document 1:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferromagnetic materials can become magnetic. The process is reversible.
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
Plants depend on epigenetic processes for proper function. Epigenetics is defined as "the study of changes in gene function that are mitotically and/or meiotically heritable and that do not entail a change in DNA sequence" (Wu et al. 2001). The area of study examines protein interactions with DNA and its associated components, including histones and various other modifications such as methylation, which alter the rate or target of transcription. Epi-alleles and epi-mutants, much like their genetic counterparts, describe changes in phenotypes due to epigenetic mechanisms. Epigenetics in plants has attracted scientific enthusiasm because of its importance in agriculture.
Background and history
In the past, macroscopic observations on plants led to basic understandings of how plants respond to their environments and grow. While these investigations could somewhat correlate cause and effect as a plant develops, they could not truly explain the mechanisms at work without inspection at the molecular level.
Certain studies provided simplistic models with the groundwork for further exploration and eventual explanation through epigenetics. In 1918, Gassner published findings noting that a cold phase is necessary for proper plant growth, and in 1920 Garner and Allard examined the importance of the duration of light exposure to plant growth. Gassner's work shaped the concept of vernalization, in which epigenetic changes following a period of cold lead to the development of flowering (Heo and Sung et al. 2011). Similarly, Garner and Allard's work built awareness of photoperiodism, in which epigenetic modifications cued by the duration of nighttime enable flowering (Sun et al. 2014). These rudimentary observations set a precedent for later molecular evaluation and, eventually, a more complete view of how plants operate.
Modern epigenetic work depends heavily on bioinformatics to gather large quant
Document 4:::
Energy flow is the flow of energy through living things within an ecosystem. All living organisms can be organized into producers and consumers, and those producers and consumers can further be organized into a food chain. Each of the levels within the food chain is a trophic level. In order to more efficiently show the quantity of organisms at each trophic level, these food chains are then organized into trophic pyramids. The arrows in the food chain show that the energy flow is unidirectional, with the head of an arrow indicating the direction of energy flow; energy is lost as heat at each step along the way.
The unidirectional flow of energy and the successive loss of energy as it travels up the food web are patterns in energy flow that are governed by thermodynamics, which is the theory of energy exchange between systems. Trophic dynamics relates to thermodynamics because it deals with the transfer and transformation of energy (originating externally from the sun via solar radiation) to and among organisms.
Energetics and the carbon cycle
The first step in energetics is photosynthesis, wherein water and carbon dioxide from the air are taken in with energy from the sun, and are converted into oxygen and glucose. Cellular respiration is the reverse reaction, wherein oxygen and sugar are taken in and release energy as they are converted back into carbon dioxide and water. The carbon dioxide and water produced by respiration can be recycled back into plants.
Energy loss can be measured either by efficiency (how much energy makes it to the next level), or by biomass (how much living material exists at those levels at one point in time, measured by standing crop). Of all the net primary productivity at the producer trophic level, in general only 10% goes to the next level, the primary consumers, then only 10% of that 10% goes on to the next trophic level, and so on up the food pyramid. Ecological efficiency may be anywhere from 5% to 20% depending on how efficient or inefficient that ecosystem is.
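To make the arithmetic of the 10% rule concrete, here is a minimal sketch; the starting energy and the fixed 10% efficiency are illustrative assumptions (as noted above, real efficiencies span roughly 5–20%):

```python
# Energy available at successive trophic levels under the "10% rule".
producer_energy_kcal = 10_000.0   # illustrative starting value
efficiency = 0.10                 # fixed for the example; real values vary

levels = ["producers", "primary consumers",
          "secondary consumers", "tertiary consumers"]
energy = producer_energy_kcal
for level in levels:
    print(f"{level:20s} {energy:10.1f} kcal")
    energy *= efficiency          # only ~10% passes to the next level
```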
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
photosynthesis
cooking an egg
A. Both are only physical changes.
B. Both are chemical changes.
C. Both are caused by cooling.
D. Both are caused by heating.
Answer:
|
sciq-11388
|
multiple_choice
|
What is the first stage of cellular respiration called?
|
[
"glycolysis",
"reproduction",
"appetite",
"photosynthesis"
] |
A
|
Relevant Documents:
Document 0:::
Cellular respiration is the process by which biological fuels are oxidized in the presence of an inorganic electron acceptor, such as oxygen, to drive the bulk production of adenosine triphosphate (ATP), which contains energy. Cellular respiration may be described as a set of metabolic reactions and processes that take place in the cells of organisms to convert chemical energy from nutrients into ATP, and then release waste products.
Cellular respiration is a vital process that happens in the cells of living organisms, including humans, plants, and animals. It is how cells produce the energy that powers all the activities necessary for life.
The reactions involved in respiration are catabolic reactions, which break large molecules into smaller ones, producing large amounts of energy (ATP). Respiration is one of the key ways a cell releases chemical energy to fuel cellular activity. The overall reaction occurs in a series of biochemical steps, some of which are redox reactions. Although cellular respiration is technically a combustion reaction, it is an unusual one because of the slow, controlled release of energy from the series of reactions.
Nutrients that are commonly used by animal and plant cells in respiration include sugar, amino acids and fatty acids, and the most common oxidizing agent is molecular oxygen (O2). The chemical energy stored in ATP (the bond of its third phosphate group to the rest of the molecule can be broken allowing more stable products to form, thereby releasing energy for use by the cell) can then be used to drive processes requiring energy, including biosynthesis, locomotion or transportation of molecules across cell membranes.
Aerobic respiration
Aerobic respiration requires oxygen (O2) in order to create ATP. Although carbohydrates, fats and proteins are consumed as reactants, aerobic respiration is the preferred method of pyruvate breakdown after glycolysis, and requires that pyruvate be transported to the mitochondria in order to be fully oxidized by the citric acid cycle.
Document 1:::
Cellular waste products are formed as a by-product of cellular respiration, a series of processes and reactions that generate energy for the cell in the form of ATP. Two pathways of cellular respiration that create cellular waste products are aerobic respiration and anaerobic respiration.
Each pathway generates different waste products.
Aerobic respiration
When in the presence of oxygen, cells use aerobic respiration to obtain energy from glucose molecules.
Simplified Theoretical Reaction: C6H12O6 (aq) + 6O2 (g) → 6CO2 (g) + 6H2O (l) + ~ 30ATP
Cells undergoing aerobic respiration produce 6 molecules of carbon dioxide, 6 molecules of water, and up to 30 molecules of ATP (adenosine triphosphate), which is directly used to produce energy, from each molecule of glucose in the presence of surplus oxygen.
In aerobic respiration, oxygen serves as the recipient of electrons from the electron transport chain. Aerobic respiration is thus very efficient because oxygen is a strong oxidant.
Aerobic respiration proceeds in a series of steps, which also increases efficiency - since glucose is broken down gradually and ATP is produced as needed, less energy is wasted as heat. This strategy results in the waste products H2O and CO2 being formed in different amounts at different phases of respiration. CO2 is formed in Pyruvate decarboxylation, H2O is formed in oxidative phosphorylation, and both are formed in the citric acid cycle.
The simple nature of the final products also indicates the efficiency of this method of respiration. All of the energy stored in the carbon-carbon bonds of glucose is released, leaving CO2 and H2O. Although there is energy stored in the bonds of these molecules, this energy is not easily accessible by the cell. All usable energy is efficiently extracted.
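As an aside, the atom bookkeeping behind the simplified reaction above can be checked mechanically. A minimal sketch, with formulas written as element-count dictionaries (illustrative only):

```python
from collections import Counter

# Check that C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O is atom-balanced.
glucose = Counter({"C": 6, "H": 12, "O": 6})
o2 = Counter({"O": 2})
co2 = Counter({"C": 1, "O": 2})
h2o = Counter({"H": 2, "O": 1})

def total_atoms(terms):
    """Sum element counts over (coefficient, formula) pairs."""
    out = Counter()
    for coeff, formula in terms:
        for element, n in formula.items():
            out[element] += coeff * n
    return out

reactants = total_atoms([(1, glucose), (6, o2)])
products = total_atoms([(6, co2), (6, h2o)])
assert reactants == products   # C: 6, H: 12, O: 18 on both sides
```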
Anaerobic respiration
Anaerobic respiration is performed by aerobic organisms when a cell lacks sufficient oxygen for aerobic respiration, as well as by cells called anaerobes that
Document 2:::
Bioenergetics is a field in biochemistry and cell biology that concerns energy flow through living systems. This is an active area of biological research that includes the study of the transformation of energy in living organisms and the study of thousands of different cellular processes such as cellular respiration and the many other metabolic and enzymatic processes that lead to production and utilization of energy in forms such as adenosine triphosphate (ATP) molecules. That is, the goal of bioenergetics is to describe how living organisms acquire and transform energy in order to perform biological work. The study of metabolic pathways is thus essential to bioenergetics.
Overview
Bioenergetics is the part of biochemistry concerned with the energy involved in making and breaking of chemical bonds in the molecules found in biological organisms. It can also be defined as the study of energy relationships and energy transformations and transductions in living organisms. The ability to harness energy from a variety of metabolic pathways is a property of all living organisms. Growth, development, anabolism and catabolism are some of the central processes in the study of biological organisms, because the role of energy is fundamental to such biological processes. Life is dependent on energy transformations; living organisms survive because of exchange of energy between living tissues/ cells and the outside environment. Some organisms, such as autotrophs, can acquire energy from sunlight (through photosynthesis) without needing to consume nutrients and break them down. Other organisms, like heterotrophs, must intake nutrients from food to be able to sustain energy by breaking down chemical bonds in nutrients during metabolic processes such as glycolysis and the citric acid cycle. Importantly, as a direct consequence of the First Law of Thermodynamics, autotrophs and heterotrophs participate in a universal metabolic network—by eating autotrophs (plants), heterotrophs ha
Document 3:::
Reactions
The reactions of the MEP pathway are as follows, taken primarily from Eisenreich and co-workers, excep
Document 4:::
In enzymology, the committed step (also known as the first committed step) is an effectively irreversible enzymatic reaction that occurs at a branch point during the biosynthesis of some molecules.
As the name implies, after this step, the molecules are "committed" to the pathway and will ultimately end up in the pathway's final product. The first committed step should not be confused with the rate-determining step, which is the slowest step in a reaction or pathway. However, it is sometimes the case that the first committed step is in fact the rate-determining step as well.
Regulation
Metabolic pathways require tight regulation so that the proper compounds get produced in the proper amounts. Often, the first committed step is regulated by processes such as feedback inhibition and activation. Such regulation ensures that pathway intermediates do not accumulate, a situation that can be wasteful or even harmful to the cell.
Examples of enzymes that catalyze the first committed steps of metabolic pathways
Phosphofructokinase 1 catalyzes the first committed step of glycolysis.
LpxC catalyzes the first committed step of lipid A biosynthesis.
8-amino-7-oxononanoate synthase catalyzes the first committed step in plant biotin synthesis.
MurA catalyzes the first committed step of peptidoglycan biosynthesis.
Aspartate transcarbamoylase catalyzes the committed step in the pyrimidine biosynthetic pathway in E. coli.
3-deoxy-D-arabino-heptulosonate 7-phosphate synthase catalyses the first committed step of the shikimate pathway, responsible for the synthesis of the aromatic amino acids tyrosine, tryptophan and phenylalanine in plants, bacteria, fungi and some lower eukaryotes.
Citrate synthase catalyzes the addition of acetyl-CoA to oxaloacetate and is the first committed step of the Citric Acid Cycle.
Acetyl-CoA carboxylase catalyzes the irreversible carboxylation of acetyl-CoA to malonyl-CoA in the first committed step of fatty acid biosynthesis.
Glucose-6-phosphate dehy
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the first stage of cellular respiration called?
A. glycolysis
B. reproduction
C. appetite
D. photosynthesis
Answer:
|
|
sciq-10701
|
multiple_choice
|
Lead shielding is used to block what type of rays?
|
[
"alpha",
"ultraviolet",
"toxic",
"gamma"
] |
D
|
Relevant Documents:
Document 0:::
Lead shielding refers to the use of lead as a form of radiation protection to shield people or objects from radiation so as to reduce the effective dose. Lead can effectively attenuate certain kinds of radiation because of its high density and high atomic number; principally, it is effective at stopping gamma rays and x-rays.
Operation
Lead's high density is caused by the combination of its high atomic number and its relatively short bond lengths and small atomic radius. The high atomic number means that more electrons are needed to maintain a neutral charge, and the short bond lengths and small atomic radius mean that many atoms can be packed into a given lead structure.
Because of lead's density and large number of electrons, it is well suited to scattering x-rays and gamma rays. These rays consist of photons, a type of boson, which impart energy to the electrons they encounter. Without a lead shield, the electrons within a person's body would be affected, which could damage their DNA. When the radiation passes through lead, lead's electrons absorb and scatter the energy. Eventually, though, the lead will degrade from the energy to which it is exposed. However, lead is not effective against all types of radiation. High-energy electrons (including beta radiation) incident on lead may create bremsstrahlung radiation, which is potentially more dangerous to tissue than the original radiation. Furthermore, lead is not a particularly effective absorber of neutron radiation.
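For a rough sense of how shield thickness matters, narrow-beam photon shielding is commonly modeled with exponential attenuation, I = I0 · exp(−μx). This model and the coefficient below are background assumptions added for illustration, not values from the text:

```python
import math

def transmitted_fraction(mu_per_cm, thickness_cm):
    """Fraction of a narrow photon beam passing through a shield,
    using the standard exponential attenuation model."""
    return math.exp(-mu_per_cm * thickness_cm)

mu_lead = 0.6  # 1/cm; placeholder coefficient for some gamma energy
for x_cm in (1, 2, 5, 10):
    frac = transmitted_fraction(mu_lead, x_cm)
    print(f"{x_cm:3d} cm of lead -> {frac:.4f} of the beam transmitted")
```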
Types
Lead is used for shielding in x-ray machines, nuclear power plants, labs, medical facilities, military equipment, and other places where radiation may be encountered. There is great variety in the types of shielding available both to protect people and to shield equipment and experiments. In gamma-spectroscopy for example, lead castles are constructed to shield the probe from environmental radiation. Personal shielding includes lead aprons (such as the familiar garment used d
Document 1:::
absorbed dose
Electromagnetic radiation
equivalent dose
hormesis
Ionizing radiation
Louis Harold Gray (British physicist)
rad (unit)
radar
radar astronomy
radar cross section
radar detector
radar gun
radar jamming
(radar reflector) corner reflector
radar warning receiver
(Radarange) microwave oven
radiance
(radiant: see) meteor shower
radiation
Radiation absorption
Radiation acne
Radiation angle
radiant barrier
(radiation belt: see) Van Allen radiation belt
Radiation belt electron
Radiation belt model
Radiation Belt Storm Probes
radiation budget
Radiation burn
Radiation cancer
(radiation contamination) radioactive contamination
Radiation contingency
Radiation damage
Radiation damping
Radiation-dominated era
Radiation dose reconstruction
Radiation dosimeter
Radiation effect
radiant energy
Radiation enteropathy
(radiation exposure) radioactive contamination
Radiation flux
(radiation gauge: see) gauge fixing
radiation hardening
(radiant heat) thermal radiation
radiant heating
radiant intensity
radiation hormesis
radiation impedance
radiation implosion
Radiation-induced lung injury
Radiation Laboratory
radiation length
radiation mode
radiation oncologist
radiation pattern
radiation poisoning (radiation sickness)
radiation pressure
radiation protection (radiation shield) (radiation shielding)
radiation resistance
Radiation Safety Officer
radiation scattering
radiation therapist
radiation therapy (radiotherapy)
(radiation treatment) radiation therapy
(radiation units: see) :Category:Units of radiation dose
(radiation weight factor: see) equivalent dose
radiation zone
radiative cooling
radiative forcing
radiator
radio
(radio amateur: see) amateur radio
(radio antenna) antenna (radio)
radio astronomy
radio beacon
(radio broadcasting: see) broadcasting
radio clock
(radio communications) radio
radio control
radio controlled airplane
radio controlled car
radio-controlled helicopter
radio control
Document 2:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 3:::
Radiation sensitivity is the susceptibility of a material to physical or chemical changes induced by radiation. Examples of radiation-sensitive materials are silver chloride, photoresists and biomaterials. Pine trees are more susceptible to radiation than birch because pine DNA is more complex than that of birch. Examples of radiation-insensitive materials are metals and ionic crystals such as quartz and sapphire. The radiation effect depends on the type of irradiating particles, their energy, and the number of incident particles per unit volume. Radiation effects can be transient or permanent. The persistence of a radiation effect depends on the stability of the induced physical or chemical change. Physical radiation effects that depend on diffusion can be thermally annealed, whereby the original structure of the material is recovered. Chemical radiation effects usually cannot be recovered.
Document 4:::
A lead castle, also called a lead cave or a lead housing, is a structure composed of lead to provide shielding against gamma radiation in a variety of applications in the nuclear industry and other activities which use ionizing radiation.
Applications
Shielding of radioactive materials
Castles are widely used to shield radioactive sources and radioactive materials, either in the laboratory or in the plant environment. The purpose of the castle is to shield people from gamma radiation. Lead will not efficiently attenuate neutrons. If an experiment or pilot plant is to be observed, a viewing window of lead glass may be used to give gamma shielding but allow visibility.
Shielding of radiation detectors
Plant radiation detectors that are operating in a high ambient gamma background are sometimes shielded to prevent the background swamping the detector. Such a detector may be looking for alpha and beta particles, and gamma radiation will affect this.
Laboratory or health physics detectors, even if remote from nuclear operations, may require shielding if very low levels of radiation are to be detected. This is the case with, for instance, a scintillation counter measuring low levels of contamination on a swab or sample.
Construction
The castle can be made from individual bricks; usually with interlocking chevron edges to prevent "shine paths" of direct radiation through the gaps. They can also be made from lead produced in bespoke shapes by machining or casting. Such an example would be the annular ring castle commonly used for shielding scintillation counters.
A typical lead brick weighs about ten kilograms. Lead castles can be made of hundreds of bricks and weigh thousands of kilograms, so the floor must be able to withstand a heavy load. It is best to set up on a floor designed to carry the weight, or in the basement of a building built on a concrete slab. If the castle is not put directly on such a floor, it will require a suitably strong structure
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Lead shielding is used to block what type of rays?
A. alpha
B. ultraviolet
C. toxic
D. gamma
Answer:
|
|
sciq-2764
|
multiple_choice
|
Saltwater is a homogeneous mixture, another term for what?
|
[
"solution",
"structure",
"lipid",
"element"
] |
A
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
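As a toy illustration, the sketch below checks union-closure (the defining closure property of a knowledge space) and accessibility (the extra property that makes the structure an antimatroid) on a hypothetical three-item domain; the family of states is made up for the example:

```python
from itertools import combinations

# Hypothetical feasible knowledge states over the domain Q = {a, b, c}.
family = {frozenset(), frozenset("a"), frozenset("b"),
          frozenset("ab"), frozenset("abc")}

# Knowledge spaces are closed under union.
union_closed = all(s | t in family for s, t in combinations(family, 2))

# Antimatroids are also accessible: every nonempty state can be reached
# by removing a single item while staying inside the family.
accessible = all(any(s - {x} in family for x in s) for s in family if s)

print(union_closed, accessible)  # True True for this toy family
```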
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
Document 2:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam with no computer-based version. ETS administered the exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required the score as part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores were 760 (corresponding to the 99th percentile) and 320 (1st percentile) respectively. The mean score for all test takers from July 2009 to July 2012 was 526, with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 3:::
The Mathematics Subject Classification (MSC) is an alphanumerical classification scheme that has collaboratively been produced by staff of, and based on the coverage of, the two major mathematical reviewing databases, Mathematical Reviews and Zentralblatt MATH. The MSC is used by many mathematics journals, which ask authors of research papers and expository articles to list subject codes from the Mathematics Subject Classification in their papers. The current version is MSC2020.
Structure
The MSC is a hierarchical scheme, with three levels of structure. A classification can be two, three or five characters long, depending on how many levels of the classification scheme are used.
The first level is represented by a two-digit number, the second by a letter, and the third by another two-digit number. For example:
53 is the classification for differential geometry
53A is the classification for classical differential geometry
53A45 is the classification for vector and tensor analysis
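A minimal sketch of splitting such a code into its three levels. It assumes only the plain 2-digit / letter / 2-digit shapes shown above; real MSC codes also use forms such as "53-01" and "53Axx", which this toy parser deliberately rejects:

```python
import re

MSC_RE = re.compile(r"^(\d{2})([A-Z])?(\d{2})?$")

def parse_msc(code):
    """Split an MSC code like '53A45' into its hierarchy levels."""
    m = MSC_RE.match(code)
    if not m:
        raise ValueError(f"unsupported MSC code shape: {code!r}")
    discipline, area, topic = m.groups()
    return {"discipline": discipline, "area": area, "topic": topic}

print(parse_msc("53"))     # {'discipline': '53', 'area': None, 'topic': None}
print(parse_msc("53A45"))  # {'discipline': '53', 'area': 'A', 'topic': '45'}
```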
First level
At the top level, 64 mathematical disciplines are labeled with a unique two-digit number. In addition to the typical areas of mathematical research, there are top-level categories for "History and Biography", "Mathematics Education", and for the overlap with different sciences. Physics (i.e. mathematical physics) is particularly well represented in the classification scheme with a number of different categories including:
Fluid mechanics
Quantum mechanics
Geophysics
Optics and electromagnetic theory
All valid MSC classification codes must have at least the first-level identifier.
Second level
The second-level codes are a single letter from the Latin alphabet. These represent specific areas covered by the first-level discipline. The second-level codes vary from discipline to discipline.
For example, for differential geometry, the top-level code is 53, and the second-level codes are:
A for classical differential geometry
B for local differential geometry
C for glo
Document 4:::
The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas.
The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014.
The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools.
See also
Marine Science
Ministry of Fisheries and Aquatic Resources Development
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Saltwater is a homogeneous mixture, another term for what?
A. solution
B. structure
C. lipid
D. element
Answer:
|
|
sciq-8752
|
multiple_choice
|
Which part of the nerve cell helps transmit nerve impulses?
|
[
"synapses",
"long, threadlike extensions",
"cell walls",
"dendrites"
] |
B
|
Relevant Documents:
Document 0:::
A motor nerve is a nerve that transmits motor signals from the central nervous system (CNS) to the muscles of the body. This is different from the motor neuron, which includes a cell body and branching of dendrites, while the nerve is made up of a bundle of axons. Motor nerves act as efferent nerves, which carry information out from the CNS to muscles, as opposed to afferent nerves (also called sensory nerves), which transfer signals from sensory receptors in the periphery to the CNS. Efferent nerves can also connect to glands or other organs/tissues instead of muscles (and so motor nerves are not equivalent to efferent nerves). In addition, there are nerves that serve as both sensory and motor nerves, called mixed nerves.
Structure and function
Motor nerve fibers transduce signals from the CNS to peripheral neurons of proximal muscle tissue. Motor nerve axon terminals innervate skeletal and smooth muscle, as they are heavily involved in muscle control. Motor nerves tend to be rich in acetylcholine vesicles because the motor nerve is a bundle of motor axons that deliver the signals for movement and motor control. Calcium vesicles reside in the axon terminals of the motor nerve bundles. The high calcium concentration outside of presynaptic motor nerves increases the size of end-plate potentials (EPPs).
Protective tissues
Within motor nerves, each axon is wrapped by the endoneurium, a layer of connective tissue that surrounds the myelin sheath. Bundles of axons are called fascicles, which are wrapped in perineurium. All of the fascicles wrapped in the perineurium are wound together and wrapped by a final layer of connective tissue known as the epineurium. These protective tissues defend nerves from injury and pathogens and help to maintain nerve function. Layers of connective tissue maintain the rate at which nerves conduct action potentials.
Spinal cord exit
Most motor pathways originate in the motor cortex of the brain. Signals run down th
Document 1:::
Non-spiking neurons are neurons that are located in the central and peripheral nervous systems and function as intermediary relays for sensory-motor neurons. They do not exhibit the characteristic spiking behavior of action potential generating neurons.
Non-spiking neural networks are integrated with spiking neural networks, with a synergistic effect: the combined circuit can stimulate a sensory or motor response while also modulating that response.
Discovery
Animal models
An abundance of neurons propagate signals via action potentials, and the mechanics of this kind of transmission are well understood. Spiking neurons generate action potentials by exploiting changes in their membrane potential. While studying these complex spiking networks in animals, researchers discovered neurons that did not exhibit characteristic spiking behavior. These neurons transmit information via graded potentials, as they lack the spike-generating mechanism that spiking neurons possess. This method of transmission strongly affects the fidelity, strength, and lifetime of the signal. Non-spiking neurons were identified as a special kind of interneuron that functions as an intermediary relay for sensory-motor systems. Animal models indicate that these interneurons modulate directional and posture-coordinating behaviors.
Crustaceans and other arthropods such as the crawfish have created many opportunities to learn about the modulatory role these neurons play, as well as their potential to be modulated despite their lack of spiking behavior. Most of the known information about non-spiking neurons is derived from animal models. Studies focus on neuromuscular junctions and the modulation of abdominal motor cells. Modulatory interneurons are neurons
Document 2:::
In neuroscience, nerve conduction velocity (CV) is the speed at which an electrochemical impulse propagates down a neural pathway. Conduction velocities are affected by a wide array of factors, which include age, sex, and various medical conditions. Studies allow for better diagnoses of various neuropathies, especially demyelinating diseases as these conditions result in reduced or non-existent conduction velocities. CV is an important aspect of nerve conduction studies.
Normal conduction velocities
Ultimately, conduction velocities are specific to each individual and depend largely on an axon's diameter and the degree to which that axon is myelinated, but the majority of 'normal' individuals fall within defined ranges.
Nerve impulses are extremely slow compared to the speed of electricity, where the electric field can propagate with a speed on the order of 50–99% of the speed of light; however, nerve conduction is very fast compared to the speed of blood flow, with some myelinated neurons conducting at speeds up to 120 m/s (432 km/h or 275 mph).
Different sensory receptors are innervated by different types of nerve fibers. Proprioceptors are innervated by type Ia, Ib and II sensory fibers, mechanoreceptors by type II and III sensory fibers, and nociceptors and thermoreceptors by type III and IV sensory fibers.
Normal impulses in peripheral nerves of the legs travel at 40–45 m/s, and those in peripheral nerves of the arms at 50–65 m/s.
Largely generalized, normal conduction velocities for any given nerve will be in the range of 50–60 m/s.
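In practice, motor conduction velocity is typically obtained by stimulating the nerve at two sites and dividing the distance between the sites by the difference in response latencies. A minimal sketch of that arithmetic follows; the numbers are made up for illustration, not clinical reference values:

```python
def conduction_velocity(distance_mm, latency_prox_ms, latency_dist_ms):
    """Conduction velocity in m/s from inter-site distance (mm) and the
    response latencies (ms) for proximal and distal stimulation."""
    dt_ms = latency_prox_ms - latency_dist_ms
    if dt_ms <= 0:
        raise ValueError("proximal latency must exceed distal latency")
    return distance_mm / dt_ms   # mm/ms is numerically equal to m/s

v = conduction_velocity(distance_mm=250,
                        latency_prox_ms=7.2, latency_dist_ms=2.6)
print(f"{v:.1f} m/s")   # ~54.3 m/s, within the 'normal' ranges above
```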
Testing methods
Nerve conduction studies
Nerve conduction velocity is just one of many measurements commonly made during a nerve conduction study (NCS). The purpose of these studies is to determine whether nerve damage is present and how severe that damage may be.
Nerve conduction studies are performed as follows:
Two electrodes are attached to the subject's skin over the nerve being tested.
Electrical impulses are sent through one electrode.
Document 3:::
In the nervous system, a synapse is a structure that permits a neuron (or nerve cell) to pass an electrical or chemical signal to another neuron or to the target effector cell.
Synapses are essential to the transmission of nervous impulses from one neuron to another. Neurons are specialized to pass signals to individual target cells, and synapses are the means by which they do so. At a synapse, the plasma membrane of the signal-passing neuron (the presynaptic neuron) comes into close apposition with the membrane of the target (postsynaptic) cell. Both the presynaptic and postsynaptic sites contain extensive arrays of molecular machinery that link the two membranes together and carry out the signaling process. In many synapses, the presynaptic part is located on an axon and the postsynaptic part is located on a dendrite or soma. Astrocytes also exchange information with the synaptic neurons, responding to synaptic activity and, in turn, regulating neurotransmission. Synapses (at least chemical synapses) are stabilized in position by synaptic adhesion molecules (SAMs) projecting from both the pre- and post-synaptic neuron and sticking together where they overlap; SAMs may also assist in the generation and functioning of synapses.
History
Santiago Ramón y Cajal proposed that neurons are not continuous throughout the body, yet still communicate with each other, an idea known as the neuron doctrine. The word "synapse" was introduced in 1897 by the English neurophysiologist Charles Sherrington in Michael Foster's Textbook of Physiology. Sherrington struggled to find a good term that emphasized a union between two separate elements, and the actual term "synapse" was suggested by the English classical scholar Arthur Woollgar Verrall, a friend of Foster. The word was derived from the Greek synapsis, meaning "conjunction", which in turn derives from synaptein, from syn "together" and haptein "to fasten".
However, while the synaptic gap remained a theoretical co
Document 4:::
Cutaneous innervation refers to an area of the skin which is supplied by a specific cutaneous nerve.
Dermatomes are similar; however, a dermatome only specifies the area served by a spinal nerve. In some cases, the dermatome is less specific (when a spinal nerve is the source for more than one cutaneous nerve), and in other cases it is more specific (when a cutaneous nerve is derived from multiple spinal nerves.)
Modern texts are in agreement about which areas of the skin are served by which nerves, but there are minor variations in some of the details. The borders designated by the diagrams in the 1918 edition of Gray's Anatomy are similar, but not identical, to those generally accepted today.
Importance of the peripheral nervous system
The peripheral nervous system (PNS) is divided into the somatic nervous system, the autonomic nervous system, and the enteric nervous system. However, it is the somatic nervous system, responsible for body movement and the reception of external stimuli, which allows one to understand how cutaneous innervation is made possible by the action of specific sensory fibers located on the skin, as well as the distinct pathways they take to the central nervous system. The skin, which is part of the integumentary system, plays an important role in the somatic nervous system because it contains a range of nerve endings that react to heat and cold, touch, pressure, vibration, and tissue injury.
Importance of the central nervous system
The central nervous system (CNS) works with the peripheral nervous system in cutaneous innervation. The CNS is responsible for processing the information it receives from the cutaneous nerves that detect a given stimulus, and then identifying the kind of sensory inputs which project to a specific region of the primary somatosensory cortex.
The role of nerve endings on the surface of the skin
Groups of nerve terminals located in the different layers of the skin are categorized depending on whether the skin
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which part of the nerve cell helps transmit nerve impulses?
A. synapses
B. long, threadlike extensions
C. cell walls
D. dendrites
Answer:
|
|
scienceQA-9035
|
multiple_choice
|
What do these two changes have in common?
water vapor condensing on a bathroom mirror
sediment settling to the bottom of a muddy puddle
|
[
"Both are caused by cooling.",
"Both are caused by heating.",
"Both are only physical changes.",
"Both are chemical changes."
] |
C
|
Step 1: Think about each change.
Water vapor condensing on a bathroom mirror is a change of state. So, it is a physical change. The water changes state from gas in the air to liquid water on the mirror. But the water vapor and the liquid water are both made of water.
Loose matter such as sand and dirt is called sediment. Sediment settling to the bottom of a muddy puddle is a physical change.
The sediment sinks, and the water above becomes clearer. This separates the water from the sediment. But separating a mixture does not form a different type of matter.
Step 2: Look at each answer choice.
Both are only physical changes.
Both changes are physical changes. No new matter is created.
Both are chemical changes.
Both changes are physical changes. They are not chemical changes.
Both are caused by heating.
Neither change is caused by heating.
Both are caused by cooling.
Water vapor condensing is caused by cooling. But sediment settling to the bottom of a muddy puddle is not.
|
Relevant Documents:
Document 0:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
In physics, a dynamical system is said to be mixing if the phase space of the system becomes strongly intertwined, according to at least one of several mathematical definitions. For example, a measure-preserving transformation T is said to be strong mixing if
lim_{n→∞} μ(T⁻ⁿ(A) ∩ B) = μ(A) μ(B)
whenever A and B are any measurable sets and μ is the associated measure. Other definitions are possible, including weak mixing and topological mixing.
The mathematical definition of mixing is meant to capture the notion of physical mixing. A canonical example is the Cuba libre: suppose one is adding rum (the set A) to a glass of cola. After stirring the glass, the bottom half of the glass (the set B) will contain rum, and it will be in equal proportion as it is elsewhere in the glass. The mixing is uniform: no matter which region B one looks at, some of A will be in that region. A far more detailed, but still informal description of mixing can be found in the article on mixing (mathematics).
Every mixing transformation is ergodic, but there are ergodic transformations which are not mixing.
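To make the definition concrete, here is a minimal Monte Carlo sketch (an illustration, not from the source): the doubling map T(x) = 2x mod 1 on [0, 1) with Lebesgue measure is a standard strong-mixing system, and the estimate of μ(T⁻ⁿ(A) ∩ B) approaches μ(A)μ(B) as n grows:
```python
import random

# Doubling map T(x) = 2x mod 1 on [0, 1) with Lebesgue measure (assumed
# example system). Check that mu(T^-n(A) intersect B) -> mu(A) * mu(B)
# for A = [0, 0.5) and B = [0.3, 0.6), whose product of measures is 0.15.

def in_A(x):
    return 0.0 <= x < 0.5       # indicator of A

def in_B(x):
    return 0.3 <= x < 0.6       # indicator of B

def T_iter(x, n):
    for _ in range(n):          # apply the map n times
        x = (2.0 * x) % 1.0
    return x

random.seed(0)
samples = [random.random() for _ in range(200_000)]
for n in (0, 1, 5, 10):
    # x lies in T^-n(A) iff T^n(x) lies in A
    hits = sum(1 for x in samples if in_B(x) and in_A(T_iter(x, n)))
    print(n, hits / len(samples))   # tends toward 0.15 as n grows
```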
Physical mixing
The mixing of gases or liquids is a complex physical process, governed by a convective diffusion equation that may involve non-Fickian diffusion as in spinodal decomposition. The convective portion of the governing equation contains fluid motion terms that are governed by the Navier–Stokes equations. When fluid properties such as viscosity depend on composition, the governing equations may be coupled. There may also be temperature effects. It is not clear that fluid mixing processes are mixing in the mathematical sense.
Small rigid objects (such as rocks) are sometimes mixed in a rotating drum or tumbler. The 1969 Selective Service draft lottery was carried out by mixing plastic capsules which contained a slip of paper (marked with a day of the year).
See also
Miscibility
Document 3:::
At equilibrium, the relationship between water content and equilibrium relative humidity of a material can be displayed graphically by a curve, the so-called moisture sorption isotherm.
For each humidity value, a sorption isotherm indicates the corresponding water content value at a given, constant temperature. If the composition or quality of the material changes, then its sorption behaviour also changes. Because of the complexity of sorption process the isotherms cannot be determined explicitly by calculation, but must be recorded experimentally for each product.
The relationship between water content and water activity (aw) is complex. An increase in aw is usually accompanied by an increase in water content, but in a non-linear fashion. This relationship between water activity and moisture content at a given temperature is called the moisture sorption isotherm. These curves are determined experimentally and constitute the fingerprint of a food system.
BET theory (Brunauer-Emmett-Teller) provides a calculation to describe the physical adsorption of gas molecules on a solid surface. Because of the complexity of the process, these calculations are only moderately successful; however, Stephen Brunauer was able to classify sorption isotherms into five generalized shapes as shown in Figure 2. He found that Type II and Type III isotherms require highly porous materials or desiccants, with first monolayer adsorption, followed by multilayer adsorption and finally leading to capillary condensation, explaining these materials high moisture capacity at high relative humidity.
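The excerpt does not reproduce the BET equation itself; the sketch below uses its usual textbook form, v/vm = c·x / ((1 − x)(1 + (c − 1)x)) with x = p/p0 (analogous to water activity aw for moisture sorption), and illustrative parameters:
```python
# Standard textbook form of the BET isotherm (not quoted in the excerpt
# above; vm and c here are illustrative, hypothetical parameters).
def bet_loading(x, vm=1.0, c=10.0):
    """Amount adsorbed, in units of the monolayer capacity vm,
    at relative pressure (or water activity) x = p/p0, 0 <= x < 1."""
    return vm * c * x / ((1.0 - x) * (1.0 + (c - 1.0) * x))

# The characteristic sigmoid (Type II-like) shape: slow monolayer
# build-up, then a steep multilayer rise as x -> 1.
for x in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"x = {x:.1f}  ->  loading = {bet_loading(x):.2f} * vm")
```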
Care must be used in extracting data from isotherms, as the representation for each axis may vary in its designation. Brunauer provided the vertical axis as moles of gas adsorbed divided by the moles of the dry material, and on the horizontal axis he used the ratio of partial pressure of the gas just over the sample, divided by its partial pressure at saturation. More modern isotherms showing the
Document 4:::
The Stefan flow, occasionally called Stefan's flow, is a transport phenomenon concerning the movement of a chemical species by a flowing fluid (typically in the gas phase) that is induced to flow by the production or removal of the species at an interface. Any process that adds the species of interest to or removes it from the flowing fluid may cause the Stefan flow, but the most common processes include evaporation, condensation, chemical reaction, sublimation, ablation, adsorption, absorption, and desorption. It was named after the Slovenian physicist, mathematician, and poet Josef Stefan for his early work on calculating evaporation rates.
The Stefan flow is distinct from diffusion as described by Fick's law, but diffusion almost always also occurs in multi-species systems that are experiencing the Stefan flow. In systems undergoing one of the species addition or removal processes mentioned previously, the addition or removal generates a mean flow in the flowing fluid as the fluid next to the interface is displaced by the production or removal of additional fluid by the processes occurring at the interface. The transport of the species by this mean flow is the Stefan flow. When concentration gradients of the species are also present, diffusion transports the species relative to the mean flow. The total transport rate of the species is then given by a summation of the Stefan flow and diffusive contributions.
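A minimal numeric sketch of that summation; every value below (concentration, Stefan-flow velocity, diffusivity, gradient) is assumed for illustration and is not from the source:
```python
# Total molar flux of a species = Stefan-flow (convective) term plus
# Fickian diffusive term:  N = c * v - D * dc/dx.  All numbers assumed.
c = 0.8        # local molar concentration, mol/m^3
v = 0.01       # mean (Stefan-flow) velocity, m/s
D = 2.5e-5     # diffusion coefficient, m^2/s (typical gas-phase order)
dcdx = -40.0   # concentration gradient, mol/m^4

stefan = c * v            # transport by the induced mean flow
diffusive = -D * dcdx     # Fick's-law transport relative to the mean flow
print("total flux:", stefan + diffusive, "mol/(m^2 s)")
```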
An example of the Stefan flow occurs when a droplet of liquid evaporates in air. In this case, the vapor/air mixture surrounding the droplet is the flowing fluid, and liquid/vapor boundary of the droplet is the interface. As heat is absorbed by the droplet from the environment, some of the liquid evaporates into vapor at the surface of the droplet, and flows away from the droplet as it is displaced by additional vapor evaporating from the droplet. This process causes the flowing medium to move away from the droplet at some mean speed that is dependent on
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
water vapor condensing on a bathroom mirror
sediment settling to the bottom of a muddy puddle
A. Both are caused by cooling.
B. Both are caused by heating.
C. Both are only physical changes.
D. Both are chemical changes.
Answer:
|
sciq-5470
|
multiple_choice
|
What is the measure of sound intensity levels?
|
[
"centimeters",
"octaves",
"decibels",
"moles"
] |
C
|
Relevant Documents:
Document 0:::
Sound intensity, also known as acoustic intensity, is defined as the power carried by sound waves per unit area in a direction perpendicular to that area. The SI unit of intensity, which includes sound intensity, is the watt per square meter (W/m2). One application is the noise measurement of sound intensity in the air at a listener's location as a sound energy quantity.
Sound intensity is not the same physical quantity as sound pressure. Human hearing is sensitive to sound pressure which is related to sound intensity. In consumer audio electronics, the level differences are called "intensity" differences, but sound intensity is a specifically defined quantity and cannot be sensed by a simple microphone.
Sound intensity level is a logarithmic expression of sound intensity relative to a reference intensity.
Mathematical definition
Sound intensity, denoted I, is defined by
I = p v,
where
p is the sound pressure;
v is the particle velocity.
Both I and v are vectors, which means that both have a direction as well as a magnitude. The direction of sound intensity is the average direction in which energy is flowing.
The average sound intensity during time T is given by
⟨I⟩ = (1/T) ∫ p(t) v(t) dt (integrated from 0 to T).
For a plane wave,
I = 2π² ν² δ² ρ c,
where
ν is the frequency of the sound,
δ is the amplitude of the sound wave particle displacement,
ρ is the density of the medium in which the sound is traveling, and
c is the speed of sound.
Inverse-square law
For a spherical sound wave, the intensity in the radial direction as a function of distance r from the centre of the sphere is given by
I(r) = P / A(r) = P / (4πr²),
where
P is the sound power;
A(r) = 4πr² is the surface area of a sphere of radius r.
Thus sound intensity decreases as 1/r² from the centre of the sphere:
I(r) ∝ 1/r².
This relationship is an inverse-square law.
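As a numeric illustration (the source power P is arbitrary, and I0 = 10−12 W/m2 is the conventional reference used for the level defined in the next subsection): doubling the distance quarters the intensity, a drop of about 6 dB:
```python
import math

# Inverse-square law I(r) = P / (4*pi*r^2) and the corresponding
# intensity level in decibels re I0 = 1e-12 W/m^2 (the usual reference).
def intensity(P, r):
    return P / (4.0 * math.pi * r**2)

def intensity_level_dB(I, I0=1e-12):
    return 10.0 * math.log10(I / I0)

P = 0.1  # sound power, W (illustrative)
for r in (1.0, 2.0, 4.0):
    I = intensity(P, r)
    # each doubling of r divides I by 4, i.e. the level drops ~6 dB
    print(f"r = {r} m: I = {I:.3e} W/m^2, L_I = {intensity_level_dB(I):.1f} dB")
```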
Sound intensity level
Sound intensity level (SIL) or acoustic intensity level is the level (a logarithmic quantity) of the intensity of a sound relative to a reference value.
It is denoted LI, expressed in nepers, bels, or decibels, and defined by
LI = (1/2) ln(I/I0) Np = log10(I/I0) B = 10 log10(I/I0) dB,
where
I is the sound
Document 1:::
The absolute threshold of hearing (ATH), also known as the absolute hearing threshold or auditory threshold, is the minimum sound level of a pure tone that an average human ear with normal hearing can hear with no other sound present. The absolute threshold relates to the sound that can just be heard by the organism. The absolute threshold is not a discrete point and is therefore classed as the point at which a sound elicits a response a specified percentage of the time.
The threshold of hearing is generally reported in reference to the RMS sound pressure of 20 micropascals, i.e. 0 dB SPL, corresponding to a sound intensity of 0.98 pW/m2 at 1 atmosphere and 25 °C. It is approximately the quietest sound a young human with undamaged hearing can detect at 1,000 Hz. The threshold of hearing is frequency-dependent and it has been shown that the ear's sensitivity is best at frequencies between 2 kHz and 5 kHz, where the threshold reaches as low as −9 dB SPL.
Psychophysical methods for measuring thresholds
Measurement of the absolute hearing threshold provides some basic information about our auditory system. The tools used to collect such information are called psychophysical methods. Through these, the perception of a physical stimulus (sound) and our psychological response to the sound is measured.
Several psychophysical methods can measure absolute threshold. These vary, but certain aspects are identical. Firstly, the test defines the stimulus and specifies the manner in which the subject should respond. The test presents the sound to the listener and manipulates the stimulus level in a predetermined pattern. The absolute threshold is defined statistically, often as an average of all obtained hearing thresholds.
Some procedures use a series of trials, with each trial using the 'single-interval "yes"/"no" paradigm'. This means that sound may be present or absent in the single interval, and the listener has to say whether he thought the stimulus was there. When the
Document 2:::
In signal processing, the high frequency content measure is a simple measure, taken across a signal spectrum (usually a STFT spectrum), that can be used to characterize the amount of high-frequency content in the signal. The magnitudes of the spectral bins are added together, but multiplying each magnitude by the bin "position" (proportional to the frequency). Thus if X(k) is a discrete spectrum with N unique points, its high frequency content measure is:
HFC = Σ_{k=0}^{N−1} k·|X(k)|
In contrast to perceptual measures, this is not based on any evidence about its relevance to human hearing. Despite that, it can be useful for some applications, such as onset detection.
The measure has close similarities to the spectral centroid measure, being essentially the same calculation but without normalization according to overall magnitude.
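A minimal sketch of the measure as defined above, applied to one frame each of a low and a high synthetic tone (the sampling rate and frequencies are illustrative):
```python
import numpy as np

# HFC = sum_k k * |X(k)|: the bin index k weights each magnitude, so
# energy in higher-frequency bins contributes more.
def hfc(frame):
    mags = np.abs(np.fft.rfft(frame))   # one-sided spectrum, N unique points
    k = np.arange(len(mags))
    return float(np.sum(k * mags))

fs = 8000                       # sampling rate, Hz (illustrative)
t = np.arange(1024) / fs
low = np.sin(2 * np.pi * 200 * t)     # low-frequency tone
high = np.sin(2 * np.pi * 3000 * t)   # high-frequency tone
print(hfc(low), hfc(high))      # the 3 kHz tone yields a much larger HFC
```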
Document 3:::
Sound energy density or sound density is the sound energy per unit volume. The SI unit of sound energy density is the pascal (Pa), which is 1 kg⋅m−1⋅s−2 in SI base units or 1 joule per cubic metre (J/m3).
Mathematical definition
Sound energy density, denoted w, is defined by
w = p v / c,
where
p is the sound pressure;
v is the particle velocity in the direction of propagation;
c is the speed of sound.
The terms instantaneous energy density, maximum energy density, and peak energy density have meanings analogous to the related terms used for sound pressure. In speaking of average energy density, it is necessary to distinguish between the space average (at a given instant) and the time average (at a given point).
Sound energy density level
The sound energy density level gives the ratio of a sound incidence as a sound energy value in comparison to the reference level of 1 pPa (= 10−12 pascals). It is a logarithmic measure of the ratio of two sound energy densities. The unit of the sound energy density level is the decibel (dB), a non-SI unit accepted for use with the SI Units.
The sound energy density level, L(E), for a given sound energy density, E1, in pascals, is
L(E) = 10 log10(E1/E0) dB,
where E0 is the standard reference sound energy density,
E0 = 1 pPa = 10−12 Pa.
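A minimal numeric sketch of these two formulas; p and v are assumed values, and c is the nominal speed of sound in air:
```python
import math

# w = p*v/c, and its level re E0 = 1 pPa = 1e-12 Pa.
p = 0.02     # sound pressure, Pa (assumed)
v = 5e-5     # particle velocity, m/s (assumed)
c = 343.0    # speed of sound in air, m/s
w = p * v / c                       # sound energy density, Pa (= J/m^3)
E0 = 1e-12                          # reference energy density, 1 pPa
print(f"w = {w:.3e} Pa, L(E) = {10 * math.log10(w / E0):.1f} dB")
```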
See also
Particle velocity level
Sound intensity level
Document 4:::
Auditory events describe the subjective perception, when listening to a certain sound situation. This term was introduced by Jens Blauert (Ruhr-University Bochum) in 1966, in order to distinguish clearly between the physical sound field and the auditory perception of the sound.
Auditory events are the central objects of psychoacoustical investigations.
Focus of these investigations is the relationship between the characteristics of a physical sound field and the corresponding perception of listeners. From this relationship conclusions can be drawn about the processing methods of the human auditory system.
Aspects of auditory event investigations can be:
Is there an auditory event? Is a certain sound noticeable? => Determination of perception thresholds like hearing threshold, auditory masking thresholds etc.
What characteristics does the auditory event have? => Determination of loudness, pitch, sound, harshness etc.
What is the spatial impression of the auditory event? => Determination of sound localization, lateralization, perceived direction etc.
When can differences in auditory events be noticed? How big are the discrimination possibilities of the auditory system? => Determination of just noticeable differences
Relationships between sound field and auditory events
The sound field is described by physical quantities, while auditory events are described by quantities of psychoacoustical perception.
Below you can find a list with physical sound field quantities and the related psychoacoustical quantities of corresponding auditory events.
Mostly there is no simple or proportional relationship between sound field characteristics and auditory events.
For example, the auditory event property loudness depends not only on the physical quantity sound pressure but also on the spectral characteristics of the sound and on the sound history.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the measure of sound intensity levels?
A. centimeters
B. octaves
C. decibels
D. moles
Answer:
|
|
sciq-2600
|
multiple_choice
|
What do we call water that has been used for cleaning, washing, flushing, or manufacturing?
|
[
"wastewater",
"sewage",
"Grey water",
"groundwater"
] |
A
|
Relevant Documents:
Document 0:::
Wet Processing Engineering is one of the major streams in Textile Engineering or Textile manufacturing which refers to the engineering of textile chemical processes and associated applied science. The other three streams in textile engineering are yarn engineering, fabric engineering, and apparel engineering. The processes of this stream are involved or carried out in an aqueous stage. Hence, it is called a wet process which usually covers pre-treatment, dyeing, printing, and finishing.
The wet process is usually done in the manufactured assembly of interlacing fibers, filaments and yarns, having a substantial surface (planar) area in relation to its thickness, and adequate mechanical strength to give it a cohesive structure. In other words, the wet process is done on manufactured fiber, yarn and fabric.
All of these stages require an aqueous medium which is created by water. A massive amount of water is required in these processes per day. It is estimated that, on average, some 50–100 liters of water are used to process just 1 kilogram of textile goods, depending on the process engineering and applications. Water can be of various qualities and attributes. Not all water can be used in textile processes; it must have certain properties, quality, color and attributes to be usable. This is the reason why water is a prime concern in wet processing engineering.
Water
Water consumption and discharge of wastewater are the two major concerns. The textile industry uses a large amount of water in its varied processes especially in wet operations such as pre-treatment, dyeing, and printing. Water is required as a solvent of various dyes and chemicals and it is used in washing or rinsing baths in different steps. Water consumption depends upon the application methods, processes, dyestuffs, equipment/machines and technology which may vary mill to mill and material composition. Longer processing sequences, processing of extra dark colors and reprocessing lead
Document 1:::
Purified water is water that has been mechanically filtered or processed to remove impurities and make it suitable for use. Distilled water was, formerly, the most common form of purified water, but, in recent years, water is more frequently purified by other processes including capacitive deionization, reverse osmosis, carbon filtering, microfiltration, ultrafiltration, ultraviolet oxidation, or electrodeionization. Combinations of a number of these processes have come into use to produce ultrapure water of such high purity that its trace contaminants are measured in parts per billion (ppb) or parts per trillion (ppt).
Purified water has many uses, largely in the production of medications, in science and engineering laboratories and industries, and is produced in a range of purities. It is also used in the commercial beverage industry as the primary ingredient of any given trademarked bottling formula, in order to maintain product consistency. It can be produced on-site for immediate use or purchased in containers. Purified water in colloquial English can also refer to water that has been treated ("rendered potable") to neutralize, but not necessarily remove contaminants considered harmful to humans or animals.
Parameters of water purity
Purified water is usually produced by the purification of drinking water or ground water. The impurities that may need to be removed are:
inorganic ions (typically monitored as electrical conductivity or resistivity or specific tests)
organic compounds (typically monitored as TOC or by specific tests)
bacteria (monitored by total viable counts or epifluorescence)
endotoxins and nucleases (monitored by LAL or specific enzyme tests)
particulates (typically controlled by filtration)
gases (typically managed by degassing when required)
Purification methods
Distillation
Distilled water is produced by a process of distillation. Distillation involves boiling the water and then condensing the vapor into a clean container, leaving sol
Document 2:::
Ultrapure water (UPW), high-purity water or highly purified water (HPW) is water that has been purified to uncommonly stringent specifications. Ultrapure water is a term commonly used in manufacturing to emphasize the fact that the water is treated to the highest levels of purity for all contaminant types, including: organic and inorganic compounds; dissolved and particulate matter; volatile and non-volatile; reactive, and inert; hydrophilic and hydrophobic; and dissolved gases.
UPW and the commonly used term deionized (DI) water are not the same. In addition to the fact that UPW has organic particles and dissolved gases removed, a typical UPW system has three stages: a pretreatment stage to produce purified water, a primary stage to further purify the water, and a polishing stage, the most expensive part of the treatment process.
A number of organizations and groups develop and publish standards associated with the production of UPW. For microelectronics and power, they include Semiconductor Equipment and Materials International (SEMI) (microelectronics and photovoltaic), American Society for Testing and Materials International (ASTM International) (semiconductor, power), Electric Power Research Institute (EPRI) (power), American Society of Mechanical Engineers (ASME) (power), and International Association for the Properties of Water and Steam (IAPWS) (power). Pharmaceutical plants follow water quality standards as developed by pharmacopeias, of which three examples are the United States Pharmacopeia, European Pharmacopeia, and Japanese Pharmacopeia.
The most widely used requirements for UPW quality are documented by ASTM D5127 "Standard Guide for Ultra-Pure Water Used in the Electronics and Semiconductor Industries" and SEMI F63 "Guide for ultrapure water used in semiconductor processing".
Ultra pure water is also used as boiler feedwater in the UK AGR fleet.
Sources and control
Bacteria, particles, organic, and inorganic sources of contamination vary depend
Document 3:::
Membrane bioreactors are combinations of some membrane processes like microfiltration or ultrafiltration with a biological wastewater treatment process, the activated sludge process. These technologies are now widely used for municipal and industrial wastewater treatment. The two basic membrane bioreactor configurations are the submerged membrane bioreactor and the side stream membrane bioreactor. In the submerged configuration, the membrane is located inside the biological reactor and submerged in the wastewater, while in a side stream membrane bioreactor, the membrane is located outside the reactor as an additional step after biological treatment.
Overview
Water scarcity has prompted efforts to reuse waste water once it has been properly treated, known as "water reclamation" (also called wastewater reuse, water reuse, or water recycling). Among the treatment technologies available to reclaim wastewater, membrane processes stand out for their capacity to retain solids and salts and even to disinfect water, producing water suitable for reuse in irrigation and other applications.
A semipermeable membrane is a material that allows the selective flow of certain substances.
In the case of water purification or regeneration, the aim is to allow the water to flow through the membrane whilst retaining undesirable particles on the originating side. By varying the type of membrane, it is possible to get better pollutant retention of different kinds. Some of the required characteristics in a membrane for wastewater treatment are chemical and mechanical resistance for five years of operation and capacity to operate stably over a wide pH range.
There are two main types of membrane materials available on the market: organic-based polymeric membranes and ceramic membranes. Polymeric membranes are the most commonly used materials in water and wastewater treatment. In particular, polyvinylidene difluoride (PVDF) is the most prevalent material due to its long lifetime and chemica
Document 4:::
WASH (or Watsan, WaSH) is an acronym that stands for "water, sanitation and hygiene". It is used widely by non-governmental organizations and aid agencies in developing countries. The purposes of providing access to WASH services include achieving public health gains, improving human dignity in the case of sanitation, implementing the human right to water and sanitation, reducing the burden of collecting drinking water for women, reducing risks of violence against women, improving education and health outcomes at schools and health facilities, and reducing water pollution. Access to WASH services is also an important component of water security. Universal, affordable and sustainable access to WASH is a key issue within international development and is the focus of the first two targets of Sustainable Development Goal 6 (SDG 6). Targets 6.1 and 6.2 aim at equitable and accessible water and sanitation for all. In 2017, it was estimated that 2.3 billion people live without basic sanitation facilities and 844 million people live without access to safe and clean drinking water.
The WASH-attributable burden of disease and injuries has been studied in depth. Typical diseases and conditions associated with lack of WASH include diarrhea, malnutrition and stunting, in addition to neglected tropical diseases. Lack of WASH poses additional health risks for women, for example during pregnancy, or in connection with menstrual hygiene management. Chronic diarrhea can have long-term negative effects on children, in terms of both physical and cognitive development. Still, collecting precise scientific evidence regarding health outcomes that result from improved access to WASH is difficult due to a range of complicating factors. Scholars suggest a need for longer-term studies of technology efficacy, greater analysis of sanitation interventions, and studies of combined effects from multiple interventions in order to better analyze WASH health outcomes.
Access to WASH needs to be pro
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do we call water that has been used for cleaning, washing, flushing, or manufacturing?
A. wastewater
B. sewage
C. Grey water
D. groundwater
Answer:
|
|
sciq-1323
|
multiple_choice
|
What are the three fundamental phases of matter?
|
[
"fast, slow, normal",
"air, water, and land",
"solid, liquid, and gas",
"big, small, and medium"
] |
C
|
Relevant Documents:
Document 0:::
States of matter are distinguished by changes in the properties of matter associated with external factors like pressure and temperature. States are usually distinguished by a discontinuity in one of those properties: for example, raising the temperature of ice produces a discontinuity at 0°C, as energy goes into a phase transition, rather than temperature increase. The three classical states of matter are solid, liquid and gas. In the 20th century, however, increased understanding of the more exotic properties of matter resulted in the identification of many additional states of matter, none of which are observed in normal conditions.
Low-energy states of matter
Classical states
Solid: A solid holds a definite shape and volume without a container. The particles are held very close to each other.
Amorphous solid: A solid in which there is no long-range order of the positions of the atoms.
Crystalline solid: A solid in which atoms, molecules, or ions are packed in regular order.
Plastic crystal: A molecular solid with long-range positional order but with constituent molecules retaining rotational freedom.
Quasicrystal: A solid in which the positions of the atoms have long-range order, but this is not in a repeating pattern.
Liquid: A mostly non-compressible fluid. Able to conform to the shape of its container but retains a (nearly) constant volume independent of pressure.
Liquid crystal: Properties intermediate between liquids and crystals. Generally, able to flow like a liquid but exhibiting long-range order.
Gas: A compressible fluid. Not only will a gas take the shape of its container but it will also expand to fill the container.
Modern states
Plasma: Free charged particles, usually in equal numbers, such as ions and electrons. Unlike gases, plasma may self-generate magnetic fields and electric currents and respond strongly and collectively to electromagnetic forces. Plasma is very uncommon on Earth (except for the ionosphere), although it is the mo
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Solid is one of the four fundamental states of matter along with liquid, gas, and plasma. The molecules in a solid are closely packed together and contain the least amount of kinetic energy. A solid is characterized by structural rigidity (as in rigid bodies) and resistance to a force applied to the surface. Unlike a liquid, a solid object does not flow to take on the shape of its container, nor does it expand to fill the entire available volume like a gas. The atoms in a solid are bound to each other, either in a regular geometric lattice (crystalline solids, which include metals and ordinary ice), or irregularly (an amorphous solid such as common window glass). Solids are difficult to compress, whereas gases can be compressed with little pressure because the molecules in a gas are loosely packed.
The branch of physics that deals with solids is called solid-state physics, and is the main branch of condensed matter physics (which also includes liquids). Materials science is primarily concerned with the physical and chemical properties of solids. Solid-state chemistry is especially concerned with the synthesis of novel materials, as well as the science of identification and chemical composition.
Microscopic description
The atoms, molecules or ions that make up solids may be arranged in an orderly repeating pattern, or irregularly. Materials whose constituents are arranged in a regular pattern are known as crystals. In some cases, the regular ordering can continue unbroken over a large scale, for example diamonds, where each diamond is a single crystal. Solid objects that are large enough to see and handle are rarely composed of a single crystal, but instead are made of a large number of single crystals, known as crystallites, whose size can vary from a few nanometers to several meters. Such materials are called polycrystalline. Almost all common metals, and many ceramics, are polycrystalline.
In other materials, there is no long-range order in the
Document 3:::
This is a list of topics that are included in high school physics curricula or textbooks.
Mathematical Background
SI Units
Scalar (physics)
Euclidean vector
Motion graphs and derivatives
Pythagorean theorem
Trigonometry
Motion and forces
Motion
Force
Linear motion
Linear motion
Displacement
Speed
Velocity
Acceleration
Center of mass
Mass
Momentum
Newton's laws of motion
Work (physics)
Free body diagram
Rotational motion
Angular momentum (Introduction)
Angular velocity
Centrifugal force
Centripetal force
Circular motion
Tangential velocity
Torque
Conservation of energy and momentum
Energy
Conservation of energy
Elastic collision
Inelastic collision
Inertia
Moment of inertia
Momentum
Kinetic energy
Potential energy
Rotational energy
Electricity and magnetism
Ampère's circuital law
Capacitor
Coulomb's law
Diode
Direct current
Electric charge
Electric current
Alternating current
Electric field
Electric potential energy
Electron
Faraday's law of induction
Ion
Inductor
Joule heating
Lenz's law
Magnetic field
Ohm's law
Resistor
Transistor
Transformer
Voltage
Heat
Entropy
First law of thermodynamics
Heat
Heat transfer
Second law of thermodynamics
Temperature
Thermal energy
Thermodynamic cycle
Volume (thermodynamics)
Work (thermodynamics)
Waves
Wave
Longitudinal wave
Transverse waves
Transverse wave
Standing Waves
Wavelength
Frequency
Light
Light ray
Speed of light
Sound
Speed of sound
Radio waves
Harmonic oscillator
Hooke's law
Reflection
Refraction
Snell's law
Refractive index
Total internal reflection
Diffraction
Interference (wave propagation)
Polarization (waves)
Vibrating string
Doppler effect
Gravity
Gravitational potential
Newton's law of universal gravitation
Newtonian constant of gravitation
See also
Outline of physics
Physics education
Document 4:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are the three fundamental phases of matter?
A. fast, slow, normal
B. air, water, and land
C. solid, liquid, and gas
D. big, small, and medium
Answer:
|
|
sciq-9625
|
multiple_choice
|
What is the name for the amount of water vapor in the air?
|
[
"heat",
"humidity",
"ambient",
"viscosity"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Humidity is the concentration of water vapor present in the air. Water vapor, the gaseous state of water, is generally invisible to the human eye. Humidity indicates the likelihood for precipitation, dew, or fog to be present.
Humidity depends on the temperature and pressure of the system of interest. The same amount of water vapor results in higher relative humidity in cool air than warm air. A related parameter is the dew point. The amount of water vapor needed to achieve saturation increases as the temperature increases. As the temperature of a parcel of air decreases it will eventually reach the saturation point without adding or losing water mass. The amount of water vapor contained within a parcel of air can vary significantly. For example, a parcel of air near saturation may contain 28 g of water per cubic metre of air at , but only 8 g of water per cubic metre of air at .
Three primary measurements of humidity are widely employed: absolute, relative, and specific. Absolute humidity is expressed as either mass of water vapor per volume of moist air (in grams per cubic meter) or as mass of water vapor per mass of dry air (usually in grams per kilogram). Relative humidity, often expressed as a percentage, indicates a present state of absolute humidity relative to a maximum humidity given the same temperature. Specific humidity is the ratio of water vapor mass to total moist air parcel mass.
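As a minimal numeric illustration of relative humidity (a hypothetical parcel is assumed to hold 8 g/m³ of water vapor at a temperature where saturation corresponds to the 28 g/m³ figure quoted above):
```python
# Relative humidity = actual water-vapor content / content at saturation,
# at the same temperature. The 8 and 28 g/m^3 figures echo the text
# above; pairing them in one parcel is a hypothetical illustration.
absolute = 8.0       # g/m^3, actual content of the parcel (assumed)
saturation = 28.0    # g/m^3, content at saturation for this temperature
rh = 100.0 * absolute / saturation
print(f"relative humidity = {rh:.1f} %")   # about 28.6 %
```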
Humidity plays an important role for surface life. For animal life dependent on perspiration (sweating) to regulate internal body temperature, high humidity impairs heat exchange efficiency by reducing the rate of moisture evaporation from skin surfaces. This effect can be calculated using a heat index table, also known as a humidex.
The notion of air "holding" water vapor or being "saturated" by it is often mentioned in connection with the concept of relative humidity. This, however, is misleading—the amount of water vapor that enters (or can enter) a given space at a g
Document 2:::
In atmospheric science, equivalent temperature is the temperature of air in a parcel from which all the water vapor has been extracted by an adiabatic process.
Air contains water vapor that has been evaporated into it from liquid sources (lakes, sea, etc.). The energy needed to do that has been taken from the air. Taking a volume of air at temperature T and mixing ratio r, drying it by condensation will restore energy to the airmass. This will depend on the latent heat release as:
Te = T + (L/cp) r
where:
L: latent heat of evaporation (2400 kJ/kg at 25°C to 2600 kJ/kg at −40°C)
cp: specific heat at constant pressure for air (≈ 1004 J/(kg·K))
Tables exist for exact values of the last two coefficients.
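A minimal numeric sketch of the formula, using the constants quoted above; the temperature and mixing ratio are assumed for illustration:
```python
# Te = T + (L/cp) * r, with L and cp as quoted in the excerpt.
L = 2.5e6      # latent heat of evaporation, J/kg (within the quoted range)
cp = 1004.0    # specific heat of air at constant pressure, J/(kg*K)
T = 298.15     # air temperature, K (25 degC, assumed)
r = 0.010      # mixing ratio, kg water vapor per kg dry air (assumed)

Te = T + (L / cp) * r
print(f"Te = {Te:.1f} K ({Te - 273.15:.1f} degC)")   # ~323.1 K, ~49.9 degC
```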
See also
Wet-bulb temperature
Potential temperature
Atmospheric thermodynamics
Equivalent potential temperature
Bibliography
M Robitzsch, Aequivalenttemperatur und Aequivalentthemometer, Meteorologische Zeitschrift, 1928, pp. 313-315.
M K Yau and R.R. Rogers, Short Course in Cloud Physics, Third Edition, published by Butterworth-Heinemann, January 1, 1989, 304 pages.
J.V. Iribarne and W.L. Godson, Atmospheric Thermodynamics, published by D. Reidel Publishing Company, Dordrecht, Holland, 1973, 222 pages
Atmospheric thermodynamics
Atmospheric temperature
Meteorological quantities
Document 3:::
The gas composition of any gas can be characterised by listing the pure substances it contains, and stating for each substance its proportion of the gas mixture's molecule count.
Gas composition of air
To give a familiar example, air has a composition (in mole percent) of:
Nitrogen (N2): 78.084
Oxygen (O2): 20.9476
Argon (Ar): 0.934
Carbon Dioxide (CO2): 0.0314
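As a quick consistency check (the component molar masses are standard values, not given in the excerpt), the mole-fraction-weighted average of this composition reproduces the dry-air molar mass quoted by the standards below:
```python
# Molar mass of dry air as the mole-fraction-weighted average
# M = sum(x_i * M_i) over the composition listed above.
components = {
    # name: (mole percent from the table, molar mass in g/mol)
    "N2":  (78.084,  28.0134),
    "O2":  (20.9476, 31.9988),
    "Ar":  (0.934,   39.948),
    "CO2": (0.0314,  44.0095),
}
M = sum(pct / 100.0 * m for pct, m in components.values())
print(f"molar mass of dry air ~ {M:.3f} g/mol")  # ~28.964, near the quoted 28.96546
```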
Standard Dry Air is the agreed-upon gas composition for air from which all water vapour has been removed. There are various standards bodies which publish documents that define a dry air gas composition. Each standard provides a list of constituent concentrations, a gas density at standard conditions and a molar mass.
It is extremely unlikely that the actual composition of any specific sample of air will completely agree with any definition for standard dry air. While the various definitions for standard dry air all attempt to provide realistic information about the constituents of air, the definitions are important in and of themselves because they establish a standard which can be cited in legal contracts and publications documenting measurement calculation methodologies or equations of state.
The standards below are two examples of commonly used and cited publications that provide a composition for standard dry air:
ISO TR 29922-2017 provides a definition for standard dry air which specifies an air molar mass of 28.96546 ± 0.00017 kg·kmol-1.
GPA 2145:2009 is published by the Gas Processors Association. It provides a molar mass for air of 28.9625 g/mol, and provides a composition for standard dry air as a footnote.
Document 4:::
Vapour density is the density of a vapour in relation to that of hydrogen. It may be defined as the mass of a certain volume of a substance divided by the mass of the same volume of hydrogen.
vapour density = mass of n molecules of gas / mass of n molecules of hydrogen gas
vapour density = molar mass of gas / molar mass of H2
vapour density = molar mass of gas / 2.016
vapour density = (1/2.016) × molar mass
(and thus: molar mass = ~2 × vapour density)
For example, the vapour density of a mixture of NO2 and N2O4 is 38.3. Vapour density is a dimensionless quantity.
Alternative definition
In many web sources, particularly in relation to safety considerations at commercial and industrial facilities in the U.S., vapour density is defined with respect to air, not hydrogen. Air is given a vapour density of one. For this use, air has a molecular weight of 28.97 atomic mass units, and all other gas and vapour molecular weights are divided by this number to derive their vapour density. For example, acetone has a vapour density of 2 in relation to air. That means acetone vapour is twice as heavy as air. This can be seen by dividing the molecular weight of acetone, 58.1, by that of air, 28.97, which gives 2.
With this definition, the vapour density would indicate whether a gas is denser (greater than one) or less dense (less than one) than air. The density has implications for container storage and personnel safety—if a container can release a dense gas, its vapour could sink and, if flammable, collect until it is at a concentration sufficient for ignition. Even if not flammable, it could collect in the lower floor or level of a confined space and displace air, possibly presenting an asphyxiation hazard to individuals entering the lower part of that space.
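A minimal sketch covering both definitions, checked against the acetone figures from the text:
```python
# Vapour density relative to hydrogen (molar mass / 2.016) and, in the
# alternative definition, relative to air (molar mass / 28.97).
def vd_h2(molar_mass):
    return molar_mass / 2.016

def vd_air(molar_mass):
    return molar_mass / 28.97

M_acetone = 58.1   # g/mol, from the text
print(f"acetone: {vd_h2(M_acetone):.1f} vs H2, {vd_air(M_acetone):.1f} vs air")
# the second value is ~2.0, matching the example above
```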
See also
Relative density (also known as specific gravity)
Victor Meyer apparatus
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the name for the amount of water vapor in the air?
A. heat
B. humidity
C. ambient
D. viscosity
Answer:
|
|
sciq-8089
|
multiple_choice
|
What instrument made it possible to study pressure quantitatively?
|
[
"thermometer",
"seismograph",
"barometer",
"anemometer"
] |
C
|
Relevant Documents:
Document 0:::
Instrument mechanics in engineering are tradesmen who specialize in installing, troubleshooting, and repairing instrumentation, automation and control systems. The term "Instrument Mechanic" came about because it was a combination of light mechanical and specialised instrumentation skills. The term is still is used in certain industries; predominantly in industrial process control.
History
Instrumentation has existed for hundreds of years in one form or another; the oldest manometer was invented by Evangelista Torricelli in 1643, and the thermometer has been credited to many scientists of about the same period. Over that time, small and large scale industrial plants and manufacturing processes have always needed accurate and reliable process measurements. Originally the demand would only be for measurement instruments, but as process complexity grew, automatic control became more common.
The huge growth in process control instrumentation was boosted by the use of pneumatic controllers, which were used widely after 1930 when Clesson E Mason of the Foxboro Company invented a wide-band pneumatic controller by combining the nozzle and flapper high-gain pneumatic amplifier with negative feedback in a completely mechanical device. The repair and calibration of these devices required both fine mechanical skills and an understanding of the control operation. Likewise the use of control valves with positioners appeared, which required a similar combination of skills.
World War II also brought about a revolution in the use of instrumentation. Further advanced processes requires tighter control than people could provide, and advanced instruments were required to provide measurements in modern processes. Also, the war left industry with a substantially reduced workforce. Industrial instrumentation solved both problems, leading to a rise in its use. Pipe fitters had to learn more about instrumentation and control theory, and a new trade was born.
Today, instrument mechanics
Document 1:::
The sphygmograph was a mechanical device used to measure blood pressure in the mid-19th century. It was developed in 1854 by German physiologist Karl von Vierordt (1818–1884). It is considered the first external, non-intrusive device used to estimate blood pressure.
The device was a system of levers hooked to a scale-pan in which weights were placed to determine the amount of external pressure needed to stop blood flow in the radial artery. Although the instrument was cumbersome and its measurements imprecise, the basic concept of Vierordt's sphygmograph eventually led to the blood pressure cuff used today.
In 1863, Étienne-Jules Marey (1830–1904) improved the device by making it portable. Also he included a specialized instrument to be placed above the radial artery that was able to magnify pulse waves and record them on paper with an attached pen.
In 1872, Frederick Akbar Mahomed published a description of a modified sphygmograph. This modified version made the sphygmograph quantitative, so that it was able to measure arterial blood pressure.
In 1880, Samuel von Basch (1837–1905) invented the sphygmomanometer, which was then improved by Scipione Riva-Rocci (1863–1937) in the 1890s. In 1901 Harvey Williams Cushing improved it further, and Heinrich von Recklinghausen (1867–1942) used a wider cuff, and so it became the first accurate and practical instrument for measuring blood pressure.
Document 2:::
Temperature measurement (also known as thermometry) describes the process of measuring a current local temperature for immediate or later evaluation. Datasets consisting of repeated standardized measurements can be used to assess temperature trends.
History
Attempts at standardized temperature measurement prior to the 17th century were crude at best. For instance in 170 AD, physician Claudius Galenus mixed equal portions of ice and boiling water to create a "neutral" temperature standard. The modern scientific field has its origins in the works by Florentine scientists in the 1600s including Galileo constructing devices able to measure relative change in temperature, but subject also to confounding with atmospheric pressure changes. These early devices were called thermoscopes. The first sealed thermometer was constructed in 1654 by the Grand Duke of Tuscany, Ferdinand II. The development of today's thermometers and temperature scales began in the early 18th century, when Gabriel Fahrenheit produced a mercury thermometer and scale, both developed by Ole Christensen Rømer. Fahrenheit's scale is still in use, alongside the Celsius and Kelvin scales.
Technologies
Many methods have been developed for measuring temperature. Most of these rely on measuring some physical property of a working material that varies with temperature. One of the most common devices for measuring temperature is the glass thermometer. This consists of a glass tube filled with mercury or some other liquid, which acts as the working fluid. Temperature increase causes the fluid to expand, so the temperature can be determined by measuring the volume of the fluid. Such thermometers are usually calibrated so that one can read the temperature simply by observing the level of the fluid in the thermometer. Another type of thermometer that is not really used much in practice, but is important from a theoretical standpoint, is the gas thermometer.
Other important devices for measuring temperature inc
Document 3:::
Instrumentation is a collective term for measuring instruments, used for indicating, measuring and recording physical quantities. It is also a field of study about the art and science about making measurement instruments, involving the related areas of metrology, automation, and control theory.
The term has its origins in the art and science of scientific instrument-making.
Instrumentation can refer to devices as simple as direct-reading thermometers, or as complex as multi-sensor components of industrial control systems. Today, instruments can be found in laboratories, refineries, factories and vehicles, as well as in everyday household use (e.g., smoke detectors and thermostats)
Measurement parameters
Instrumentation is used to measure many parameters (physical values), including:
Pressure, either differential or static
Flow
Temperature
Levels of liquids, etc.
Density
Viscosity
ionising radiation
Frequency
Current
Voltage
Inductance
Capacitance
Resistivity
Chemical composition
Chemical properties
Position
Vibration
Weight
History
The history of instrumentation can be divided into several phases.
Pre-industrial
Elements of industrial instrumentation have long histories. Scales for comparing weights and simple pointers to indicate position are ancient technologies. Some of the earliest measurements were of time. One of the oldest water clocks was found in the tomb of the ancient Egyptian pharaoh Amenhotep I, buried around 1500 BCE. Improvements were incorporated in the clocks. By 270 BCE they had the rudiments of an automatic control system device.
In 1663 Christopher Wren presented the Royal Society with a design for a "weather clock". A drawing shows meteorological sensors moving pens over paper driven by clockwork. Such devices did not become standard in meteorology for two centuries. The concept has remained virtually unchanged as evidenced by pneumatic chart recorders, where a pressurized bellows displaces a pen. Integrating sensors, displays, recorder
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What instrument made it possible to study pressure quantitatively?
A. thermometer
B. seismograph
C. barometer
D. anemometer
Answer:
|
|
sciq-10885
|
multiple_choice
|
An electron in an atom is completely described by four of what?
|
[
"photosynthesis numbers",
"quantum numbers",
"decay numbers",
"prime numbers"
] |
B
|
Relevant Documents:
Document 0:::
In quantum mechanics, the principal quantum number (symbolized n) is one of four quantum numbers assigned to each electron in an atom to describe that electron's state. Its values are natural numbers (from 1) making it a discrete variable.
Apart from the principal quantum number, the other quantum numbers for bound electrons are the azimuthal quantum number ℓ, the magnetic quantum number mℓ, and the spin quantum number s.
Overview and history
As n increases, the electron is also at a higher energy and is, therefore, less tightly bound to the nucleus. For higher n the electron is farther from the nucleus, on average. For each value of n there are n accepted ℓ (azimuthal) values ranging from 0 to n − 1 inclusively, hence higher-n electron states are more numerous. Accounting for two states of spin, each n-shell can accommodate up to 2n² electrons.
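A minimal sketch (an illustration, not from the source) that enumerates the allowed (ℓ, mℓ, ms) combinations for each n and confirms the 2n² capacity:
```python
# For a given n: l = 0 .. n-1, m_l = -l .. +l, m_s = +-1/2,
# giving sum over l of 2*(2l + 1) = 2n^2 single-electron states.
def shell_states(n):
    states = []
    for l in range(n):
        for m_l in range(-l, l + 1):
            for m_s in (-0.5, +0.5):
                states.append((n, l, m_l, m_s))
    return states

for n in (1, 2, 3, 4):
    print(n, len(shell_states(n)), 2 * n**2)   # counts agree: 2, 8, 18, 32
```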
In a simplistic one-electron model described below, the total energy of an electron is a negative inverse quadratic function of the principal quantum number n, leading to degenerate energy levels for each n > 1. In more complex systems—those having forces other than the nucleus–electron Coulomb force—these levels split. For multielectron atoms this splitting results in "subshells" parametrized by ℓ. Description of energy levels based on n alone gradually becomes inadequate for atomic numbers starting from 5 (boron) and fails completely on potassium (Z = 19) and afterwards.
The principal quantum number was first created for use in the semiclassical Bohr model of the atom, distinguishing between different energy levels. With the development of modern quantum mechanics, the simple Bohr model was replaced with a more complex theory of atomic orbitals. However, the modern theory still requires the principal quantum number.
Derivation
There is a set of quantum numbers associated with the energy states of the atom. The four quantum numbers n, ℓ, m, and s specify the complete and unique quantum state of a single electron in a
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
In atomic physics and quantum chemistry, the electron configuration is the distribution of electrons of an atom or molecule (or other physical structure) in atomic or molecular orbitals. For example, the electron configuration of the neon atom is 1s2 2s2 2p6, meaning that the 1s, 2s and 2p subshells are occupied by 2, 2 and 6 electrons respectively.
Electronic configurations describe each electron as moving independently in an orbital, in an average field created by all other orbitals. Mathematically, configurations are described by Slater determinants or configuration state functions.
According to the laws of quantum mechanics, for systems with only one electron, a level of energy is associated with each electron configuration and in certain conditions, electrons are able to move from one configuration to another by the emission or absorption of a quantum of energy, in the form of a photon.
Knowledge of the electron configuration of different atoms is useful in understanding the structure of the periodic table of elements. This is also useful for describing the chemical bonds that hold atoms together, and for understanding the chemical formulas of compounds and the geometries of molecules. In bulk materials, this same idea helps explain the peculiar properties of lasers and semiconductors.
Shells and subshells
Electron configuration was first conceived under the Bohr model of the atom, and it is still common to speak of shells and subshells despite the advances in understanding of the quantum-mechanical nature of electrons.
An electron shell is the set of allowed states that share the same principal quantum number, n (the number before the letter in the orbital label), that electrons may occupy. An atom's nth electron shell can accommodate 2n2 electrons. For example, the first shell can accommodate 2 electrons, the second shell 8 electrons, the third shell 18 electrons and so on. The factor of two arises because the allowed states are doubled due to electron spin—each
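To make the shell and subshell bookkeeping concrete, here is a hedged Python sketch of the idealized aufbau (Madelung) filling order. The names SUBSHELL_LETTER, madelung_order, and configuration are illustrative; real atoms show exceptions (e.g. chromium and copper), so this demonstrates the rule rather than a definitive predictor:

```python
# Idealized aufbau/Madelung filling order; exceptions exist in real atoms.
SUBSHELL_LETTER = "spdfghik"  # letters for l = 0, 1, 2, ...

def madelung_order(max_n: int):
    """Subshells sorted by n + l, with ties broken by smaller n."""
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def configuration(electrons: int, max_n: int = 7) -> str:
    parts = []
    for n, l in madelung_order(max_n):
        if electrons <= 0:
            break
        fill = min(electrons, 2 * (2 * l + 1))  # subshell capacity
        parts.append(f"{n}{SUBSHELL_LETTER[l]}{fill}")
        electrons -= fill
    return " ".join(parts)

print(configuration(10))  # neon: 1s2 2s2 2p6
```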
Document 3:::
Secondary electrons are electrons generated as ionization products. They are called 'secondary' because they are generated by other radiation (the primary radiation). This radiation can be in the form of ions, electrons, or photons with sufficiently high energy, i.e. exceeding the ionization potential. Photoelectrons can be considered an example of secondary electrons where the primary radiation are photons; in some discussions photoelectrons with higher energy (>50 eV) are still considered "primary" while the electrons freed by the photoelectrons are "secondary".
Applications
Secondary electrons are also the main means of viewing images in the scanning electron microscope (SEM). The range of secondary electrons depends on the energy. Plotting the inelastic mean free path as a function of energy often shows characteristics of the "universal curve" familiar to electron spectroscopists and surface analysts. This distance is on the order of a few nanometers in metals and tens of nanometers in insulators. This small distance allows such fine resolution to be achieved in the SEM.
For SiO2, for a primary electron energy of 100 eV, the secondary electron range is up to 20 nm from the point of incidence.
See also
Delta ray
Everhart-Thornley detector
Document 4:::
In atomic theory and quantum mechanics, an atomic orbital is a function describing the location and wave-like behavior of an electron in an atom. This function can be used to calculate the probability of finding any electron of an atom in any specific region around the atom's nucleus. The term atomic orbital may also refer to the physical region or space where the electron can be calculated to be present, as predicted by the particular mathematical form of the orbital.
Each orbital in an atom is characterized by a set of values of the three quantum numbers n, ℓ, and mℓ, which respectively correspond to the electron's energy, its angular momentum, and an angular momentum vector component (the magnetic quantum number). As an alternative to the magnetic quantum number, the orbitals are often labeled by the associated harmonic polynomials (e.g., xy, x² − y²). Each such orbital can be occupied by a maximum of two electrons, each with its own projection of spin ms. The simple names s orbital, p orbital, d orbital, and f orbital refer to orbitals with angular momentum quantum number ℓ = 0, 1, 2, and 3 respectively. These names, together with the value of n, are used to describe the electron configurations of atoms. They are derived from the description by early spectroscopists of certain series of alkali metal spectroscopic lines as sharp, principal, diffuse, and fundamental. Orbitals for ℓ > 3 continue alphabetically (g, h, i, k, ...), omitting j because some languages do not distinguish between the letters "i" and "j".
Atomic orbitals are the basic building blocks of the atomic orbital model (or electron cloud or wave mechanics model), a modern framework for visualizing the submicroscopic behavior of electrons in matter. In this model the electron cloud of an atom may be seen as being built up (in approximation) in an electron configuration that is a product of simpler hydrogen-like atomic orbitals. The repeating periodicity of blocks of 2, 6, 10, and 14 elements within sections of the periodic ta
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
An electron in an atom is completely described by four of what?
A. photosynthesis numbers
B. quantum numbers
C. decay numbers
D. prime numbers
Answer:
|
|
sciq-4040
|
multiple_choice
|
What type of numbers specify the arrangement of electrons in orbitals?
|
[
"fusion numbers",
"quantum numbers",
"stream numbers",
"ionic numbers"
] |
B
|
Relevant Documents:
Document 0:::
Atomic and molecular orbitals
Before atomic orbitals were understood, spectroscopists discovered various distinctive series of spectral lines in atomic spectra, which they identified by letters. These letters were later associated with the azimuthal quantum number, ℓ. The letters, "s", "p", "d", and "f", for the first four values of ℓ were chosen to be the first letters of properties of the spectral series observed in alkali metals. Other letters for subsequent values of ℓ were assigned in alphabetical order, omitting the letter "j" because some languages do not distinguish between the letters "i" and "j":
letter  name         ℓ
s       sharp        0
p       principal    1
d       diffuse      2
f       fundamental  3
g                    4
h                    5
i                    6
k                    7
l                    8
m                    9
n                    10
o                    11
q                    12
r                    13
t                    14
u                    15
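The lettering rule in the table (s, p, d, f, then alphabetical, skipping "j" and any letter already used) can be expressed in a few lines of Python; this sketch is only a restatement of the convention above:

```python
def subshell_letters(count: int) -> list[str]:
    """Generate the first `count` spectroscopic subshell letters."""
    letters = ["s", "p", "d", "f"]
    c = "g"
    while len(letters) < count:
        if c != "j" and c not in letters:  # skip "j" and reused letters
            letters.append(c)
        c = chr(ord(c) + 1)
    return letters

print(subshell_letters(16))
# ['s','p','d','f','g','h','i','k','l','m','n','o','q','r','t','u']
```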
Document 1:::
In quantum mechanics, the azimuthal quantum number is a quantum number for an atomic orbital that determines its orbital angular momentum and describes the shape of the orbital. The azimuthal quantum number is the second of a set of quantum numbers that describe the unique quantum state of an electron (the others being the principal quantum number n, the magnetic quantum number mℓ, and the spin quantum number ms). It is also known as the orbital angular momentum quantum number, orbital quantum number, subsidiary quantum number, or second quantum number, and is symbolized as ℓ (pronounced ell).
Derivation
Connected with the energy states of the atom's electrons are four quantum numbers: n, ℓ, mℓ, and ms. These specify the complete, unique quantum state of a single electron in an atom, and make up its wavefunction or orbital. When solving to obtain the wave function, the Schrödinger equation reduces to three equations that lead to the first three quantum numbers. Therefore, the equations for the first three quantum numbers are all interrelated. The azimuthal quantum number arose in the solution of the polar part of the wave equation as shown below, reliant on the spherical coordinate system, which generally works best with models having some degree of spherical symmetry.
An atomic electron's angular momentum, L, is related to its quantum number ℓ by the following equation:
L² Ψ = ħ² ℓ(ℓ + 1) Ψ
where ħ is the reduced Planck constant, L² is the orbital angular momentum operator and Ψ is the wavefunction of the electron. The quantum number ℓ is always a non-negative integer: 0, 1, 2, 3, etc. L has no real meaning except in its use as the angular momentum operator. When referring to angular momentum, it is better to simply use the quantum number ℓ.
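A quick numerical illustration of the magnitude |L| = ħ·sqrt(ℓ(ℓ + 1)) implied by the eigenvalue equation above, using the CODATA value of ħ:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s (CODATA)

for l in range(4):
    L = HBAR * math.sqrt(l * (l + 1))  # magnitude of orbital angular momentum
    print(f"l = {l}: |L| = {L:.3e} J*s")
```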
Atomic orbitals have distinctive shapes denoted by letters: s, p, and d (a convention originating in spectroscopy) describe the shape of the atomic orbital.
Their wavefunctions take the form of spherical harmonics, and
Document 2:::
In atomic theory and quantum mechanics, an atomic orbital is a function describing the location and wave-like behavior of an electron in an atom. This function can be used to calculate the probability of finding any electron of an atom in any specific region around the atom's nucleus. The term atomic orbital may also refer to the physical region or space where the electron can be calculated to be present, as predicted by the particular mathematical form of the orbital.
Each orbital in an atom is characterized by a set of values of the three quantum numbers n, ℓ, and mℓ, which respectively correspond to the electron's energy, its angular momentum, and an angular momentum vector component (the magnetic quantum number). As an alternative to the magnetic quantum number, the orbitals are often labeled by the associated harmonic polynomials (e.g., xy, x² − y²). Each such orbital can be occupied by a maximum of two electrons, each with its own projection of spin ms. The simple names s orbital, p orbital, d orbital, and f orbital refer to orbitals with angular momentum quantum number ℓ = 0, 1, 2, and 3 respectively. These names, together with the value of n, are used to describe the electron configurations of atoms. They are derived from the description by early spectroscopists of certain series of alkali metal spectroscopic lines as sharp, principal, diffuse, and fundamental. Orbitals for ℓ > 3 continue alphabetically (g, h, i, k, ...), omitting j because some languages do not distinguish between the letters "i" and "j".
Atomic orbitals are the basic building blocks of the atomic orbital model (or electron cloud or wave mechanics model), a modern framework for visualizing the submicroscopic behavior of electrons in matter. In this model the electron cloud of an atom may be seen as being built up (in approximation) in an electron configuration that is a product of simpler hydrogen-like atomic orbitals. The repeating periodicity of blocks of 2, 6, 10, and 14 elements within sections of the periodic ta
Document 3:::
In atomic physics and quantum chemistry, the electron configuration is the distribution of electrons of an atom or molecule (or other physical structure) in atomic or molecular orbitals. For example, the electron configuration of the neon atom is 1s2 2s2 2p6, meaning that the 1s, 2s and 2p subshells are occupied by 2, 2 and 6 electrons respectively.
Electronic configurations describe each electron as moving independently in an orbital, in an average field created by all other orbitals. Mathematically, configurations are described by Slater determinants or configuration state functions.
According to the laws of quantum mechanics, for systems with only one electron, a level of energy is associated with each electron configuration and in certain conditions, electrons are able to move from one configuration to another by the emission or absorption of a quantum of energy, in the form of a photon.
Knowledge of the electron configuration of different atoms is useful in understanding the structure of the periodic table of elements. This is also useful for describing the chemical bonds that hold atoms together, and for understanding the chemical formulas of compounds and the geometries of molecules. In bulk materials, this same idea helps explain the peculiar properties of lasers and semiconductors.
Shells and subshells
Electron configuration was first conceived under the Bohr model of the atom, and it is still common to speak of shells and subshells despite the advances in understanding of the quantum-mechanical nature of electrons.
An electron shell is the set of allowed states that share the same principal quantum number, n (the number before the letter in the orbital label), that electrons may occupy. An atom's nth electron shell can accommodate 2n2 electrons. For example, the first shell can accommodate 2 electrons, the second shell 8 electrons, the third shell 18 electrons and so on. The factor of two arises because the allowed states are doubled due to electron spin—each
Document 4:::
In quantum mechanics, the principal quantum number (symbolized n) is one of four quantum numbers assigned to each electron in an atom to describe that electron's state. Its values are natural numbers (from 1) making it a discrete variable.
Apart from the principal quantum number, the other quantum numbers for bound electrons are the azimuthal quantum number ℓ, the magnetic quantum number ml, and the spin quantum number s.
Overview and history
As n increases, the electron is also at a higher energy and is, therefore, less tightly bound to the nucleus. For higher n the electron is farther from the nucleus, on average. For each value of n there are n accepted ℓ (azimuthal) values ranging from 0 to n − 1 inclusively, hence higher-n electron states are more numerous. Accounting for two states of spin, each n-shell can accommodate up to 2n2 electrons.
In a simplistic one-electron model described below, the total energy of an electron is a negative inverse quadratic function of the principal quantum number n, leading to degenerate energy levels for each n > 1. In more complex systems—those having forces other than the nucleus–electron Coulomb force—these levels split. For multielectron atoms this splitting results in "subshells" parametrized by ℓ. Description of energy levels based on n alone gradually becomes inadequate for atomic numbers starting from 5 (boron) and fails completely on potassium (Z = 19) and afterwards.
The principal quantum number was first created for use in the semiclassical Bohr model of the atom, distinguishing between different energy levels. With the development of modern quantum mechanics, the simple Bohr model was replaced with a more complex theory of atomic orbitals. However, the modern theory still requires the principal quantum number.
Derivation
There is a set of quantum numbers associated with the energy states of the atom. The four quantum numbers n, ℓ, m, and s specify the complete and unique quantum state of a single electron in a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of numbers specify the arrangement of electrons in orbitals?
A. fusion numbers
B. quantum numbers
C. stream numbers
D. ionic numbers
Answer:
|
|
sciq-830
|
multiple_choice
|
What is the type of cancer in which bone marrow produces abnormal white blood cells that cannot fight infections?
|
[
"pneumonia",
"leukemia",
"lymphedema",
"melanoma"
] |
B
|
Relevant Documents:
Document 0:::
White blood cells, also called leukocytes or immunocytes, are cells of the immune system that are involved in protecting the body against both infectious disease and foreign invaders. White blood cells include three main subtypes: granulocytes, lymphocytes and monocytes.
All white blood cells are produced and derived from multipotent cells in the bone marrow known as hematopoietic stem cells. Leukocytes are found throughout the body, including the blood and lymphatic system. All white blood cells have nuclei, which distinguishes them from the other blood cells, the anucleated red blood cells (RBCs) and platelets. The different white blood cells are usually classified by cell lineage (myeloid cells or lymphoid cells). White blood cells are part of the body's immune system. They help the body fight infection and other diseases. Types of white blood cells are granulocytes (neutrophils, eosinophils, and basophils), and agranulocytes (monocytes, and lymphocytes (T cells and B cells)). Myeloid cells (myelocytes) include neutrophils, eosinophils, mast cells, basophils, and monocytes. Monocytes are further subdivided into dendritic cells and macrophages. Monocytes, macrophages, and neutrophils are phagocytic. Lymphoid cells (lymphocytes) include T cells (subdivided into helper T cells, memory T cells, cytotoxic T cells), B cells (subdivided into plasma cells and memory B cells), and natural killer cells. Historically, white blood cells were classified by their physical characteristics (granulocytes and agranulocytes), but this classification system is less frequently used now. Produced in the bone marrow, white blood cells defend the body against infections and disease. An excess of white blood cells is usually due to infection or inflammation. Less commonly, a high white blood cell count could indicate certain blood cancers or bone marrow disorders.
The number of leukocytes in the blood is often an indicator of disease, and thus the white blood
Document 1:::
In haematology, atypical localization of immature precursors (ALIP) refers to the finding of atypically localized precursors (myeloblasts and promyelocytes) on bone marrow biopsy. In healthy humans, precursors are rare, are found localized near the endosteum, and consist of one to two cells. In some cases of myelodysplastic syndromes (MDS), immature precursors may be located in the intertrabecular region and occasionally aggregate as clusters of 3–5 cells. The presence of ALIPs is associated with a worse prognosis in MDS. Recently, in bone marrow sections of patients with acute myeloid leukemia (AML), cells similar to ALIPs were defined as ALIP-like clusters. The presence of ALIP-like clusters in AML patients in remission was reported to be associated with early relapse of the disease.
Document 2:::
Megakaryocyte–erythroid progenitor cells, among other blood cells, are generated as a result of hematopoiesis, which occurs in the bone marrow. Hematopoietic stem cells can differentiate into one of two progenitor cells: the common lymphoid progenitor and the common myeloid progenitor. MEPs derive from the common myeloid progenitor lineage. Megakaryocyte/erythrocyte progenitor cells must commit to becoming either platelet-producing megakaryocytes via megakaryopoiesis or erythrocyte-producing erythroblasts via erythropoiesis. Most of the blood cells produced in the bone marrow during hematopoiesis come from megakaryocyte/erythrocyte progenitor cells.
Document 3:::
Bone marrow is a semi-solid tissue found within the spongy (also known as cancellous) portions of bones. In birds and mammals, bone marrow is the primary site of new blood cell production (or haematopoiesis). It is composed of hematopoietic cells, marrow adipose tissue, and supportive stromal cells. In adult humans, bone marrow is primarily located in the ribs, vertebrae, sternum, and bones of the pelvis. Bone marrow comprises approximately 5% of total body mass in healthy adult humans, such that a man weighing 73 kg (161 lbs) will have around 3.7 kg (8 lbs) of bone marrow.
Human marrow produces approximately 500 billion blood cells per day, which join the systemic circulation via permeable vasculature sinusoids within the medullary cavity. All types of hematopoietic cells, including both myeloid and lymphoid lineages, are created in bone marrow; however, lymphoid cells must migrate to other lymphoid organs (e.g. thymus) in order to complete maturation.
Bone marrow transplants can be conducted to treat severe diseases of the bone marrow, including certain forms of cancer such as leukemia. Several types of stem cells are related to bone marrow. Hematopoietic stem cells in the bone marrow can give rise to hematopoietic lineage cells, and mesenchymal stem cells, which can be isolated from the primary culture of bone marrow stroma, can give rise to bone, adipose, and cartilage tissue.
Structure
The composition of marrow is dynamic, as the mixture of cellular and non-cellular components (connective tissue) shifts with age and in response to systemic factors. In humans, marrow is colloquially characterized as "red" or "yellow" marrow (, , respectively) depending on the prevalence of hematopoietic cells vs fat cells. While the precise mechanisms underlying marrow regulation are not understood, compositional changes occur according to stereotypical patterns. For example, a newborn baby's bones exclusively contain hematopoietically active "red" marrow, and there is a pro
Document 4:::
Hematopoietic stem cells (HSCs) have high regenerative potentials and are capable of differentiating into all blood and immune system cells. Despite this impressive capacity, HSCs have only a limited ability to produce more multipotent stem cells. This limited self-renewal potential is protected through maintenance of a quiescent state in HSCs. Stem cells maintained in this quiescent state are known as long term HSCs (LT-HSCs). During quiescence, HSCs maintain a low level of metabolic activity and do not divide. LT-HSCs can be signaled to proliferate, producing either myeloid or lymphoid progenitors. Production of these progenitors does not come without a cost: when grown under laboratory conditions that induce proliferation, HSCs lose their ability to divide and produce new progenitors. Therefore, understanding the pathways that maintain proliferative or quiescent states in HSCs could reveal novel pathways to improve existing therapeutics involving HSCs.
Background
All adult stem cells can undergo two types of division: symmetric and asymmetric. When a cell undergoes symmetric division, it can either produce two differentiated cells or two new stem cells. When a cell undergoes asymmetric division, it produces one stem and one differentiated cell. Production of new stem cells is necessary to maintain this population within the body. Like all cells, hematopoietic stem cells undergo metabolic shifts to meet their bioenergetic needs throughout development. These metabolic shifts play an important role in signaling, generating biomass, and protecting the cell from damage. Metabolic shifts also guide development in HSCs and are one key factor in determining if an HSC will remain quiescent, symmetrically divide, or asymmetrically divide. As mentioned above, quiescent cells maintain a low level of oxidative phosphorylation and primarily rely on glycolysis to generate energy. Fatty acid beta-oxidation has been shown to influence fate decisions in HSCs. In contrast, proliferat
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the type of cancer in which bone marrow produces abnormal white blood cells that cannot fight infections?
A. pneumonia
B. leukemia
C. lymphedema
D. melanoma
Answer:
|
|
sciq-7135
|
multiple_choice
|
What happens to the reaction rate over the course of a reaction?
|
[
"speeds up",
"reverses",
"slows down",
"stays the same"
] |
C
|
Relevant Documents:
Document 0:::
An elementary reaction is a chemical reaction in which one or more chemical species react directly to form products in a single reaction step and with a single transition state. In practice, a reaction is assumed to be elementary if no reaction intermediates have been detected or need to be postulated to describe the reaction on a molecular scale. An apparently elementary reaction may be in fact a stepwise reaction, i.e. a complicated sequence of chemical reactions, with reaction intermediates of variable lifetimes.
In a unimolecular elementary reaction, a molecule A dissociates or isomerises to form the product(s):
A → products
At constant temperature, the rate of such a reaction is proportional to the concentration of the species A:
rate = k[A]
In a bimolecular elementary reaction, two atoms, molecules, ions or radicals, A and B, react together to form the product(s):
A + B → products
The rate of such a reaction, at constant temperature, is proportional to the product of the concentrations of the species A and B:
rate = k[A][B]
The rate expression for an elementary bimolecular reaction is sometimes referred to as the Law of Mass Action as it was first proposed by Guldberg and Waage in 1864. An example of this type of reaction is a cycloaddition reaction.
This rate expression can be derived from first principles by using collision theory for ideal gases. For the case of dilute fluids, equivalent results have been obtained from simple probabilistic arguments.
According to collision theory the probability of three chemical species reacting simultaneously with each other in a termolecular elementary reaction is negligible. Hence such termolecular reactions are commonly referred to as non-elementary reactions and can be broken down into a more fundamental set of bimolecular reactions, in agreement with the law of mass action. It is not always possible to derive overall reaction schemes, but solutions based on rate equations are often possible in terms of steady-state or Michaelis-Menten approximations.
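The mass-action rate expressions above also explain why reaction rates typically fall as a reaction proceeds: the concentrations they multiply are being consumed. A small Euler-integration sketch (the rate constant and initial concentrations are invented for illustration):

```python
# Integrate the bimolecular rate law rate = k [A][B] with a simple Euler step.
k = 0.5          # rate constant, L mol^-1 s^-1 (illustrative)
A, B = 1.0, 0.8  # initial concentrations, mol/L (illustrative)
dt = 0.01        # time step, s

for step in range(5000):          # 50 s of simulated time
    rate = k * A * B              # law of mass action for A + B -> products
    A -= rate * dt
    B -= rate * dt

print(f"after 50 s: [A] = {A:.3f}, [B] = {B:.3f}")
```

Because rate = k[A][B] shrinks as A and B are depleted, the concentrations approach their limiting values ever more slowly, which is the behavior the rate law predicts.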
Document 1:::
The Hatta number (Ha) was developed by Shirôji Hatta, who taught at Tohoku University. It is a dimensionless parameter that compares the rate of reaction in a liquid film to the rate of diffusion through the film. For a second order reaction (rA = k2·CA·CB), the maximum rate of reaction assumes that the liquid film is saturated with gas at the interfacial concentration CA,i; thus, the maximum rate of reaction is k2·CA,i·CB,bulk·δL.
For a reaction mth order in A and nth order in B:
Ha² = (2 / (m + 1)) · km,n · CA,i^(m−1) · CB,bulk^n · DA / kL²
It is an important parameter used in Chemical Reaction Engineering.
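A small calculator for the general Hatta expression reconstructed above; all numerical inputs below are invented for illustration, and the function name is not from any source:

```python
import math

def hatta(m, n, k_mn, C_Ai, C_Bbulk, D_A, k_L):
    """Ha^2 = (2/(m+1)) * k_mn * C_Ai**(m-1) * C_Bbulk**n * D_A / k_L**2"""
    ha2 = (2.0 / (m + 1)) * k_mn * C_Ai ** (m - 1) * C_Bbulk ** n * D_A / k_L ** 2
    return math.sqrt(ha2)

# Second-order case (m = n = 1) reduces to Ha = sqrt(k2 * C_Bbulk * D_A) / k_L
print(hatta(1, 1, k_mn=1e3, C_Ai=0.01, C_Bbulk=0.1, D_A=1e-9, k_L=1e-4))
```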
Document 2:::
Grote–Hynes theory is a theory of reaction rate in a solution phase. This rate theory was developed by James T. Hynes with his graduate student Richard F. Grote in 1980.
The theory is based on the generalized Langevin equation (GLE). This theory introduced the concept of frequency-dependent friction for chemical rate processes in the solution phase. Because it includes a frequency-dependent friction instead of a constant friction, the theory successfully predicts the rate constant even when the reaction barrier is large and of high frequency, where diffusion over the barrier starts to decouple from the viscosity of the medium. This was the weakness of Kramers' rate theory, which underestimated the rate of reactions with a large, high-frequency barrier.
Document 3:::
In biochemistry, a rate-limiting step is a step that controls the rate of a series of biochemical reactions. The phrase is, however, a misunderstanding of how a sequence of enzyme-catalyzed reaction steps operates. Rather than a single step controlling the rate, it has been discovered that multiple steps control the rate. Moreover, each controlling step controls the rate to varying degrees.
Blackman (1905) stated as an axiom: "when a process is conditioned as to its rapidity by a number of separate factors, the rate of the process is limited by the pace of the slowest factor." This implies that it should be possible, by studying the behavior of a complicated system such as a metabolic pathway, to characterize a single factor or reaction (namely the slowest), which plays the role of a master or rate-limiting step. In other words, the study of flux control can be simplified to the study of a single enzyme since, by definition, there can only be one 'rate-limiting' step. Since its conception, the 'rate-limiting' step has played a significant role in suggesting how metabolic pathways are controlled. Unfortunately, the notion of a 'rate-limiting' step is erroneous, at least under steady-state conditions. Modern biochemistry textbooks have begun to play down the concept. For example, the seventh edition of Lehninger Principles of Biochemistry explicitly states: "It has now become clear that, in most pathways, the control of flux is distributed among several enzymes, and the extent to which each contributes to the control varies with metabolic circumstances". However, the concept is still incorrectly used in research articles.
Historical perspective
From the 1920s to the 1950s, there were a number of authors who discussed the concept of rate-limiting steps, also known as master reactions. Several authors have stated that the concept of the 'rate-limiting' step is incorrect. Burton (1936) was one of the first to point out that: "In the steady state of reaction chain
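To see how control can be distributed rather than concentrated in one "master" step, consider a toy two-enzyme pathway (this example is a sketch, not from the sources cited above). With rates v1 = e1(S − X) and v2 = e2·X for enzyme activities e1, e2, fixed substrate S, and intermediate X, the steady state gives flux J = e1·e2·S/(e1 + e2), and the scaled flux control coefficients sum to 1:

```python
def flux(e1, e2, S=1.0):
    """Steady-state flux J = e1*e2*S/(e1 + e2) of the toy pathway."""
    return e1 * e2 * S / (e1 + e2)

def control_coeff(J_fn, e, h=1e-6):
    """Scaled sensitivity C = (dJ/J) / (de/e), estimated numerically."""
    J0, J1 = J_fn(e), J_fn(e + h)
    return (J1 - J0) / h * e / J0

e1, e2 = 1.0, 3.0
C1 = control_coeff(lambda e: flux(e, e2), e1)  # control held by enzyme 1
C2 = control_coeff(lambda e: flux(e1, e), e2)  # control held by enzyme 2
print(round(C1, 3), round(C2, 3), round(C1 + C2, 3))  # 0.75 0.25 1.0
```

Neither coefficient is 1 or 0: control is shared, with the slower enzyme holding more of it, which matches the distributed-control picture described above.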
Document 4:::
Conversion and its related terms yield and selectivity are important terms in chemical reaction engineering. They are described as ratios of how much of a reactant has reacted (X — conversion, normally between zero and one), how much of a desired product was formed (Y — yield, normally also between zero and one) and how much desired product was formed in ratio to the undesired product(s) (S — selectivity).
There are conflicting definitions in the literature for selectivity and yield, so each author's intended definition should be verified.
Conversion can be defined for (semi-)batch and continuous reactors and as instantaneous and overall conversion.
Assumptions
The following assumptions are made:
The following chemical reaction takes place:
a A → b B,
where a and b are the stoichiometric coefficients. For multiple parallel reactions, the definitions can also be applied, either per reaction or using the limiting reaction.
Batch reaction assumes all reactants are added at the beginning.
Semi-Batch reaction assumes some reactants are added at the beginning and the rest fed during the batch.
Continuous reaction assumes reactants are fed and products leave the reactor continuously and in steady state.
Conversion
Conversion can be separated into instantaneous conversion and overall conversion. For continuous processes the two are the same, for batch and semi-batch there are important differences. Furthermore, for multiple reactants, conversion can be defined overall or per reactant.
Instantaneous conversion
Semi-batch
In this setting there are different definitions. One definition regards the instantaneous conversion as the ratio of the instantaneously converted amount to
the amount fed at any point in time:
Xinst = −ṅi,reaction / ṅi,feed
with ṅi as the change of moles with time of species i.
This ratio can become larger than 1. It can be used to indicate whether reservoirs are built
up and it is ideally close to 1. When the feed stops, its value is not defined.
In semi-batch polymerisation,
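A minimal batch-reactor bookkeeping sketch for the three definitions above, using one common convention (recall from the passage that yield and selectivity definitions vary between authors); the numbers and the assumed 1:1 stoichiometry A → P are invented:

```python
n_A0, n_A = 10.0, 4.0   # initial and final moles of reactant A
n_P = 4.5               # moles of desired product P formed
n_byproduct = 1.5       # moles of undesired product formed

X = (n_A0 - n_A) / n_A0           # conversion: fraction of A reacted
Y = n_P / n_A0                    # yield: desired product per A fed
S = n_P / (n_P + n_byproduct)     # selectivity: desired vs. all products

print(f"X = {X:.2f}, Y = {Y:.2f}, S = {S:.2f}")  # X = 0.60, Y = 0.45, S = 0.75
```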
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What happens to the reaction rate over the course of a reaction?
A. speeds up
B. reverses
C. slows down
D. stays the same
Answer:
|
|
sciq-1646
|
multiple_choice
|
Evolution occurs by what process whereby better-adapted members pass along their traits, according to Darwin?
|
[
"organic selection",
"natural selection",
"natural change",
"spontaneous variation"
] |
B
|
Relevant Documents:
Document 0:::
Tinbergen's four questions, named after 20th century biologist Nikolaas Tinbergen, are complementary categories of explanations for animal behaviour. These are also commonly referred to as levels of analysis. It suggests that an integrative understanding of behaviour must include ultimate (evolutionary) explanations, in particular:
behavioural adaptive functions
phylogenetic history; and the proximate explanations
underlying physiological mechanisms
ontogenetic/developmental history.
Four categories of questions and explanations
When asked about the purpose of sight in humans and animals, even elementary-school children can answer that animals have vision to help them find food and avoid danger (function/adaptation). Biologists have three additional explanations: sight is caused by a particular series of evolutionary steps (phylogeny), the mechanics of the eye (mechanism/causation), and even the process of an individual's development (ontogeny).
This schema constitutes a basic framework of the overlapping behavioural fields of ethology, behavioural ecology, comparative psychology, sociobiology, evolutionary psychology, and anthropology. Julian Huxley identified the first three questions. Niko Tinbergen gave only the fourth question, as Huxley's questions failed to distinguish between survival value and evolutionary history; Tinbergen's fourth question helped resolve this problem.
Evolutionary (ultimate) explanations
First question: Function (adaptation)
Darwin's theory of evolution by natural selection is the only scientific explanation for why an animal's behaviour is usually well adapted for survival and reproduction in its environment. However, claiming that a particular mechanism is well suited to the present environment is different from claiming that this mechanism was selected for in the past due to its history of being adaptive.
The literature conceptualizes the relationship between function and evolution in two ways. On the one hand, function
Document 1:::
In biology, evolution is the process of change in all forms of life over generations, and evolutionary biology is the study of how evolution occurs. Biological populations evolve through genetic changes that correspond to changes in the organisms' observable traits. Genetic changes include mutations, which are caused by damage or replication errors in organisms' DNA. As the genetic variation of a population drifts randomly over generations, natural selection gradually leads traits to become more or less common based on the relative reproductive success of organisms with those traits.
The age of the Earth is about 4.5 billion years. The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago. Evolution does not attempt to explain the origin of life (covered instead by abiogenesis), but it does explain how early lifeforms evolved into the complex ecosystem that we see today. Based on the similarities between all present-day organisms, all life on Earth is assumed to have originated through common descent from a last universal ancestor from which all known species have diverged through the process of evolution.
All individuals have hereditary material in the form of genes received from their parents, which they pass on to any offspring. Among offspring there are variations of genes due to the introduction of new genes via random changes called mutations or via reshuffling of existing genes during sexual reproduction. The offspring differs from the parent in minor random ways. If those differences are helpful, the offspring is more likely to survive and reproduce. This means that more offspring in the next generation will have that helpful difference and individuals will not have equal chances of reproductive success. In this way, traits that result in organisms being better adapted to their living conditions become more common in descendant populations. These differences accumulate resulting in changes within the population. This proce
Document 2:::
Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, the genetic variations affect the phenotypes (physical characteristics) of an organism. These changes in the phenotypes will be an advantage to some organisms, which will then be passed onto their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology.
The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis.
Subfields
Evolution is the central unifying concept in biology. Biology can be divided into various ways. One way is by the level of biological organization, from molecular to cell, organism to population. Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what was once seen as the major divisions of life. A third way is by approaches, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolution
Document 3:::
Developmental systems theory (DST) is an overarching theoretical perspective on biological development, heredity, and evolution. It emphasizes the shared contributions of genes, environment, and epigenetic factors on developmental processes. DST, unlike conventional scientific theories, is not directly used to help make predictions for testing experimental results; instead, it is seen as a collection of philosophical, psychological, and scientific models of development and evolution. As a whole, these models argue the inadequacy of the modern evolutionary synthesis on the roles of genes and natural selection as the principal explanation of living structures. Developmental systems theory embraces a large range of positions that expand biological explanations of organismal development and hold modern evolutionary theory as a misconception of the nature of living processes.
Overview
All versions of developmental systems theory espouse the view that:
All biological processes (including both evolution and development) operate by continually assembling new structures.
Each such structure transcends the structures from which it arose and has its own systematic characteristics, information, functions and laws.
Conversely, each such structure is ultimately irreducible to any lower (or higher) level of structure, and can be described and explained only on its own terms.
Furthermore, the major processes through which life as a whole operates, including evolution, heredity and the development of particular organisms, can only be accounted for by incorporating many more layers of structure and process than the conventional concepts of ‘gene’ and ‘environment’ normally allow for.
In other words, although it does not claim that all structures are equal, development systems theory is fundamentally opposed to reductionism of all kinds. In short, developmental systems theory intends to formulate a perspective which does not presume the causal (or ontological) priority of any p
Document 4:::
Biological Evolution: Facts and Theories was a five-day conference held in March 2009 by the Pontifical Gregorian University in Rome, marking the 150th anniversary of the publication of the Origin of Species. The conference was sponsored by the Catholic Church.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Evolution occurs by what process whereby better-adapted members pass along their traits, according to Darwin?
A. organic selection
B. natural selection
C. natural change
D. spontaneous variation
Answer:
|
|
sciq-8810
|
multiple_choice
|
What is the common term for animals in the phylum Porifera?
|
[
"sharks",
"corals",
"crustaceans",
"sponges"
] |
D
|
Relevant Documents:
Document 0:::
Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology.
Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago.
Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad
Document 1:::
Parazoa (from Greek Παρα-, para, "next to", and ζωα, zoa, "animals") are a taxon of sub-kingdom rank located at the base of the phylogenetic tree of the animal kingdom, in opposition to the sub-kingdom Eumetazoa; they group together the most primitive forms, characterized by lacking proper tissues or having tissues that are only partially differentiated. They generally comprise a single phylum, Porifera, whose members lack muscles, nerves and internal organs and in many cases resemble a cell colony rather than a true multicellular organism. All other animals are eumetazoans, which do have differentiated tissues.
On occasion, Parazoa reunites Porifera with Archaeocyatha, a group of extinct sponges sometimes considered a separate phylum. In other cases, Placozoa is included, depending on the authors.
Porifera and Archaeocyatha
Porifera and Archaeocyatha show similarities, such as a benthic and sessile habit and the presence of pores, with differences such as the presence of internal walls and septa in Archaeocyatha. They have been considered separate phyla; however, a growing consensus holds that Archaeocyatha were in fact sponges that can be classified within Porifera.
Porifera and Placozoa
Some authors include in Parazoa the sponge phylum Porifera and the phylum Placozoa (comprising only the species Trichoplax adhaerens) on the basis of shared primitive characteristics: both are simple, lack true tissues and organs, reproduce both asexually and sexually, and are invariably aquatic. As animals, they form a group that various studies place at the base of the phylogenetic tree, albeit as a paraphyletic assemblage. Of this group, only the sponges (phylum Porifera) and Trichoplax (phylum Placozoa) survive.
Parazoa do not show any body symmetry (they are asymmetric); all other groups of animals show some kind of symmetry. There are currently 5000 species, 150 of which are freshwater. The larvae are planktonic and th
Document 2:::
Form classification is the classification of organisms based on their morphology, which does not necessarily reflect their biological relationships. Form classification, generally restricted to palaeontology, reflects uncertainty; the goal of science is to move "form taxa" to biological taxa whose affinity is known.
Form taxonomy is restricted to fossils that preserve too few characters for a conclusive taxonomic definition or assessment of their biological affinity, but whose study is made easier if a binomial name is available by which to identify them. The term "form classification" is preferred to "form taxonomy"; taxonomy suggests that the classification implies a biological affinity, whereas form classification is about giving a name to a group of morphologically-similar organisms that may not be related.
A "parataxon" (not to be confused with parataxonomy), or "sciotaxon" (Gr. "shadow taxon"), is a classification based on incomplete data: for instance, the larval stage of an organism that cannot be matched up with an adult. It reflects a paucity of data that makes biological classification impossible. A sciotaxon is defined as a taxon thought to be equivalent to a true taxon (orthotaxon), but whose identity cannot be established because the two candidate taxa are preserved in different ways and thus cannot be compared directly.
Examples
In zoology
Form taxa are groupings that are based on common overall forms. Early attempts at classification of labyrinthodonts was based on skull shape (the heavily armoured skulls often being the only preserved part). The amount of convergent evolution in the many groups lead to a number of polyphyletic taxa. Such groups are united by a common mode of life, often one that is generalist, in consequence acquiring generally similar body shapes by convergent evolution. Ediacaran biota — whether they are the precursors of the Cambrian explosion of the fossil record, or are unrelated to any modern phylum — can currently on
Document 3:::
Animals are multicellular eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, are able to move, reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million in total. Animals range in size from 8.5 millionths of a metre to long and have complex interactions with each other and their environments, forming intricate food webs. The study of animals is called zoology.
Animals may be listed or indexed by many criteria, including taxonomy, status as endangered species, their geographical location, and their portrayal and/or naming in human culture.
By common name
List of animal names (male, female, young, and group)
By aspect
List of common household pests
List of animal sounds
List of animals by number of neurons
By domestication
List of domesticated animals
By eating behaviour
List of herbivorous animals
List of omnivores
List of carnivores
By endangered status
IUCN Red List endangered species (Animalia)
United States Fish and Wildlife Service list of endangered species
By extinction
List of extinct animals
List of extinct birds
List of extinct mammals
List of extinct cetaceans
List of extinct butterflies
By region
Lists of amphibians by region
Lists of birds by region
Lists of mammals by region
Lists of reptiles by region
By individual (real or fictional)
Real
Lists of snakes
List of individual cats
List of oldest cats
List of giant squids
List of individual elephants
List of historical horses
List of leading Thoroughbred racehorses
List of individual apes
List of individual bears
List of giant pandas
List of individual birds
List of individual bovines
List of individual cetaceans
List of individual dogs
List of oldest dogs
List of individual monkeys
List of individual pigs
List of w
Document 4:::
History of Animals (, Ton peri ta zoia historion, "Inquiries on Animals"; , "History of Animals") is one of the major texts on biology by the ancient Greek philosopher Aristotle, who had studied at Plato's Academy in Athens. It was written in the fourth century BC; Aristotle died in 322 BC.
Generally seen as a pioneering work of zoology, Aristotle frames his text by explaining that he is investigating the what (the existing facts about animals) prior to establishing the why (the causes of these characteristics). The book is thus an attempt to apply philosophy to part of the natural world. Throughout the work, Aristotle seeks to identify differences, both between individuals and between groups. A group is established when it is seen that all members have the same set of distinguishing features; for example, that all birds have feathers, wings, and beaks. This relationship between the birds and their features is recognized as a universal.
The History of Animals contains many accurate eye-witness observations, in particular of the marine biology around the island of Lesbos, such as that the octopus had colour-changing abilities and a sperm-transferring tentacle, that the young of a dogfish grow inside their mother's body, or that the male of a river catfish guards the eggs after the female has left. Some of these were long considered fanciful before being rediscovered in the nineteenth century. Aristotle has been accused of making errors, but some are due to misinterpretation of his text, and others may have been based on genuine observation. He did however make somewhat uncritical use of evidence from other people, such as travellers and beekeepers.
The History of Animals had a powerful influence on zoology for some two thousand years. It continued to be a primary source of knowledge until zoologists in the sixteenth century, such as Conrad Gessner, all influenced by Aristotle, wrote their own studies of the subject.
Context
Aristotle (384–322 BC) studied at Plat
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the common term for animals in the phylum Porifera?
A. sharks
B. corals
C. crustaceans
D. sponges
Answer:
|
|
sciq-10404
|
multiple_choice
|
What is the term for water that falls from clouds?
|
[
"lightning",
"wind",
"precipitation",
"atmosphere"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and they remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
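To make the feasibility structure concrete, here is a minimal sketch of my own (not from the source): it checks whether a hypothetical family of states contains the empty set and the full domain and is closed under union, the defining closure property behind knowledge spaces.

    from itertools import combinations

    def is_knowledge_space(states, domain):
        # A knowledge space must contain the empty state and the full
        # domain, and the union of any two feasible states must itself
        # be a feasible state.
        states = {frozenset(s) for s in states}
        if frozenset() not in states or frozenset(domain) not in states:
            return False
        return all(a | b in states for a, b in combinations(states, 2))

    # Hypothetical domain of three skills; skill "a" is a prerequisite
    # for both "b" and "c", so every non-empty state contains "a".
    domain = {"a", "b", "c"}
    states = [set(), {"a"}, {"a", "b"}, {"a", "c"}, {"a", "b", "c"}]
    print(is_knowledge_space(states, domain))  # True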
Document 2:::
In aviation, ceiling is a measurement of the height of the base of the lowest clouds (not to be confused with cloud base which has a specific definition) that cover more than half of the sky (more than 4 oktas) relative to the ground. Ceiling is not specifically reported as part of the METAR (METeorological Aviation Report) used for flight planning by pilots worldwide, but can be deduced from the lowest height with broken (BKN) or overcast (OVC) reported. A ceiling listed as "unlimited" means either that the sky is mostly free of cloud cover, or that the cloud is high enough not to impede Visual Flight Rules (VFR) operation.
Definitions
ICAO: The height above the ground or water of the base of the lowest layer of cloud below 6000 meters (20,000 feet) covering more than half the sky.
United Kingdom: The vertical distance from the elevation of an aerodrome to the lowest part of any cloud visible from the aerodrome which is sufficient to obscure more than half of the sky.
United States: The height above the Earth's surface of the lowest layer of clouds or obscuring phenomena that is reported as broken, overcast, or obscuration, and not classified as thin or partial.
See also
Cloud base
Document 3:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 4:::
Evapotranspiration (ET) is the combination of processes that move water from the Earth's surface into the atmosphere. It covers both water evaporation (movement of water to the air directly from soil, canopies, and water bodies) and transpiration (evaporation that occurs through the stomata, or openings, in plant leaves). Evapotranspiration is an important part of the local water cycle and climate, and measurement of it plays a key role in agricultural irrigation and water resource management.
Definition of evapotranspiration
Evapotranspiration is a combination of evaporation and transpiration, measured in order to better understand crop water requirements, irrigation scheduling, and watershed management. The two key components of evapotranspiration are:
Evaporation: the movement of water directly to the air from sources such as the soil and water bodies. It can be affected by factors including heat, humidity, solar radiation and wind speed.
Transpiration: the movement of water from root systems, through a plant, and its exit into the air as water vapor. This exit occurs through stomata in the plant. The rate of transpiration can be influenced by factors including plant type, soil type, weather conditions and water content, and also cultivation practices.
Evapotranspiration is typically measured in millimeters of water (i.e. volume of water moved per unit area of the Earth's surface) in a set unit of time. Globally, it is estimated that on average between three-fifths and three-quarters of land precipitation is returned to the atmosphere via evapotranspiration.
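Where only temperature records are available, evapotranspiration in mm/day is often estimated rather than measured. As one concrete illustration (my own addition; the text above does not prescribe a method), here is a minimal sketch of the Hargreaves-Samani (1985) temperature-based estimate of reference evapotranspiration, where Ra is extraterrestrial radiation expressed as an equivalent evaporation depth in mm/day:

    import math

    def hargreaves_et0(t_mean, t_max, t_min, ra_mm_per_day):
        # Hargreaves-Samani (1985) reference evapotranspiration, mm/day.
        # Temperatures in degrees Celsius; ra_mm_per_day is assumed to be
        # extraterrestrial radiation already converted to mm/day.
        return 0.0023 * (t_mean + 17.8) * math.sqrt(t_max - t_min) * ra_mm_per_day

    # Hypothetical summer day: mean 25 C, range 18-32 C, Ra about 16.5 mm/day.
    print(round(hargreaves_et0(25.0, 32.0, 18.0, 16.5), 2))  # ~6.08 mm/day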
Evapotranspiration does not, in general, account for other mechanisms which are involved in returning water to the atmosphere, though some of these, such as snow and ice sublimation in regions of high elevation or high latitude, can make a large contribution to atmospheric moisture even under standard conditions.
Factors that impact evapotranspiration levels
Primary factors
Because evaporation and transpiration
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for water that falls from clouds?
A. lightning
B. wind
C. precipitation
D. atmosphere
Answer:
|
|
sciq-10960
|
multiple_choice
|
Which type of cell division halves the number of chromosomes?
|
[
"budding",
"fragmentation",
"mitosis",
"meiosis"
] |
D
|
Relevant Documents:
Document 0:::
An asymmetric cell division produces two daughter cells with different cellular fates. This is in contrast to symmetric cell divisions which give rise to daughter cells of equivalent fates. Notably, stem cells divide asymmetrically to give rise to two distinct daughter cells: one copy of the original stem cell as well as a second daughter programmed to differentiate into a non-stem cell fate. (In times of growth or regeneration, stem cells can also divide symmetrically, to produce two identical copies of the original cell.)
In principle, there are two mechanisms by which distinct properties may be conferred on the daughters of a dividing cell. In one, the daughter cells are initially equivalent but a difference is induced by signaling between the cells, from surrounding cells, or from the precursor cell. This mechanism is known as extrinsic asymmetric cell division. In the second mechanism, the prospective daughter cells are inherently different at the time of division of the mother cell. Because this latter mechanism does not depend on interactions of cells with each other or with their environment, it must rely on intrinsic asymmetry. The term asymmetric cell division usually refers to such intrinsic asymmetric divisions.
Intrinsic asymmetry
In order for asymmetric division to take place the mother cell must be polarized, and the mitotic spindle must be aligned with the axis of polarity. The cell biology of these events has been most studied in three animal models: the mouse, the nematode Caenorhabditis elegans, and the fruit fly Drosophila melanogaster. A later focus has been on development in Spiralia.
In C. elegans development
In C. elegans, a series of asymmetric cell divisions in the early embryo are critical in setting up the anterior/posterior, dorsal/ventral, and left/right axes of the body plan. After fertilization, events are already occurring in the zygote to allow for the first asymmetric cell division. This first division produces two distin
Document 1:::
Cytogenetics is essentially a branch of genetics, but is also a part of cell biology/cytology (a subdivision of human anatomy) that is concerned with how the chromosomes relate to cell behaviour, particularly to their behaviour during mitosis and meiosis. Techniques used include karyotyping, analysis of G-banded chromosomes, other cytogenetic banding techniques, as well as molecular cytogenetics such as fluorescence in situ hybridization (FISH) and comparative genomic hybridization (CGH).
History
Beginnings
Chromosomes were first observed in plant cells by Carl Nägeli in 1842. Their behavior in animal (salamander) cells was described by Walther Flemming, the discoverer of mitosis, in 1882. The name was coined by another German anatomist, von Waldeyer in 1888.
The next stage took place after the development of genetics in the early 20th century, when it was appreciated that the set of chromosomes (the karyotype) was the carrier of the genes. Levitsky seems to have been the first to define the karyotype as the phenotypic appearance of the somatic chromosomes, in contrast to their genic contents. Investigation into the human karyotype took many years to settle the most basic question: how many chromosomes does a normal diploid human cell contain? In 1912, Hans von Winiwarter reported 47 chromosomes in spermatogonia and 48 in oogonia, concluding an XX/XO sex determination mechanism. Painter in 1922 was not certain whether the diploid number of humans was 46 or 48, at first favoring 46. He revised his opinion later from 46 to 48, and he correctly insisted on humans having an XX/XY system of sex-determination. Considering their techniques, these results were quite remarkable. In science books, the number of human chromosomes remained at 48 for over thirty years. New techniques were needed to correct this error. Joe Hin Tjio working in Albert Levan's lab was responsible for finding the approach:
Using cells in culture
Pre-treating cells in a hypotonic solution, whi
Document 2:::
Aneuploidy is the presence of an abnormal number of chromosomes in a cell, for example a human cell having 45 or 47 chromosomes instead of the usual 46. It does not include a difference of one or more complete sets of chromosomes. A cell with any number of complete chromosome sets is called a euploid cell.
An extra or missing chromosome is a common cause of some genetic disorders. Some cancer cells also have abnormal numbers of chromosomes. About 68% of human solid tumors are aneuploid. Aneuploidy originates during cell division when the chromosomes do not separate properly between the two cells (nondisjunction). Most cases of aneuploidy in the autosomes result in miscarriage, and the most common extra autosomal chromosomes among live births are 21, 18 and 13. Chromosome abnormalities are detected in 1 of 160 live human births. Autosomal aneuploidy is more dangerous than sex chromosome aneuploidy, as autosomal aneuploidy is almost always lethal to embryos that cease developing because of it.
Chromosomes
Most cells in the human body have 23 pairs of chromosomes, or a total of 46 chromosomes. (The sperm and egg, or gametes, each have 23 unpaired chromosomes; red blood cells in bone marrow initially have a nucleus, but the mature red blood cells active in circulation lose it and therefore carry no chromosomes.)
One copy of each pair is inherited from the mother and the other copy is inherited from the father. The first 22 pairs of chromosomes (called autosomes) are numbered from 1 to 22, from largest to smallest. The 23rd pair of chromosomes are the sex chromosomes. Typical females have two X chromosomes, while typical males have one X chromosome and one Y chromosome. The characteristics of the chromosomes in a cell as they are seen under a light microscope are called the karyotype.
During meiosis, when germ cells divide to create sperm and egg (gametes), each half should have the same number of chromosomes. But sometim
Document 3:::
A kinetochore (, ) is a disc-shaped protein structure associated with duplicated chromatids in eukaryotic cells where the spindle fibers attach during cell division to pull sister chromatids apart. The kinetochore assembles on the centromere and links the chromosome to microtubule polymers from the mitotic spindle during mitosis and meiosis. The term kinetochore was first used in a footnote in a 1934 Cytology book by Lester W. Sharp and commonly accepted in 1936. Sharp's footnote reads: "The convenient term kinetochore (= movement place) has been suggested to the author by J. A. Moore", likely referring to John Alexander Moore who had joined Columbia University as a freshman in 1932.
Monocentric organisms, including vertebrates, fungi, and most plants, have a single centromeric region on each chromosome which assembles a single, localized kinetochore. Holocentric organisms, such as nematodes and some plants, assemble a kinetochore along the entire length of a chromosome.
Kinetochores start, control, and supervise the striking movements of chromosomes during cell division. During mitosis, which occurs after the amount of DNA is doubled in each chromosome (while maintaining the same number of chromosomes) in S phase, two sister chromatids are held together by a centromere. Each chromatid has its own kinetochore, which face in opposite directions and attach to opposite poles of the mitotic spindle apparatus. Following the transition from metaphase to anaphase, the sister chromatids separate from each other, and the individual kinetochores on each chromatid drive their movement to the spindle poles that will define the two new daughter cells. The kinetochore is therefore essential for the chromosome segregation that is classically associated with mitosis and meiosis.
Structure of Kinetochore
The kinetochore contains two regions:
an inner kinetochore, which is tightly associated with the centromere DNA and assembled in a specialized form of chromatin that persists t
Document 4:::
Chromosome segregation is the process in eukaryotes by which two sister chromatids formed as a consequence of DNA replication, or paired homologous chromosomes, separate from each other and migrate to opposite poles of the nucleus. This segregation process occurs during both mitosis and meiosis. Chromosome segregation also occurs in prokaryotes. However, in contrast to eukaryotic chromosome segregation, replication and segregation are not temporally separated. Instead segregation occurs progressively following replication.
Mitotic chromatid segregation
During mitosis chromosome segregation occurs routinely as a step in cell division (see mitosis diagram). As indicated in the mitosis diagram, mitosis is preceded by a round of DNA replication, so that each chromosome forms two copies called chromatids. These chromatids separate to opposite poles, a process facilitated by a protein complex referred to as cohesin. Upon proper segregation, a complete set of chromatids ends up in each of two nuclei, and when cell division is completed, each DNA copy previously referred to as a chromatid is now called a chromosome.
Meiotic chromosome and chromatid segregation
Chromosome segregation occurs at two separate stages during meiosis called anaphase I and anaphase II (see meiosis diagram). In a diploid cell there are two sets of homologous chromosomes of different parental origin (e.g. a paternal and a maternal set). During the phase of meiosis labeled “interphase s” in the meiosis diagram there is a round of DNA replication, so that each of the chromosomes initially present is now composed of two copies called chromatids. These chromosomes (paired chromatids) then pair with the homologous chromosome (also paired chromatids) present in the same nucleus (see prophase I in the meiosis diagram). The process of alignment of paired homologous chromosomes is called synapsis (see Synapsis). During synapsis, genetic recombination usually occurs. Some of the recombination even
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which type of cell division halves the number of chromosomes?
A. budding
B. fragmentation
C. mitosis
D. meiosis
Answer:
|
|
scienceQA-9638
|
multiple_choice
|
Select the vertebrate.
|
[
"red-spotted purple butterfly",
"bess beetle",
"domestic cat",
"earthworm"
] |
C
|
A bess beetle is an insect. Like other insects, a bess beetle is an invertebrate. It does not have a backbone. It has an exoskeleton.
A domestic cat is a mammal. Like other mammals, a domestic cat is a vertebrate. It has a backbone.
An earthworm is a worm. Like other worms, an earthworm is an invertebrate. It does not have a backbone. It has a soft body.
A red-spotted purple butterfly is an insect. Like other insects, a red-spotted purple butterfly is an invertebrate. It does not have a backbone. It has an exoskeleton.
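The explanation above is a two-step lookup: map each animal to its group, then map the group to the backbone trait. A minimal sketch of that reasoning in code (illustrative only; the tables cover just the animals mentioned here):

    # Map each animal group to whether its members have a backbone.
    HAS_BACKBONE = {"mammal": True, "insect": False, "worm": False}

    ANIMAL_GROUP = {
        "domestic cat": "mammal",
        "bess beetle": "insect",
        "red-spotted purple butterfly": "insect",
        "earthworm": "worm",
    }

    def is_vertebrate(animal):
        return HAS_BACKBONE[ANIMAL_GROUP[animal]]

    print([a for a in ANIMAL_GROUP if is_vertebrate(a)])  # ['domestic cat']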
|
Relevant Documents:
Document 0:::
Vertebrate zoology is the biological discipline that consists of the study of vertebrate animals, i.e., animals with a backbone, such as fish, amphibians, reptiles, birds and mammals. Many natural history museums have departments named Vertebrate Zoology. In some cases whole museums bear this name, e.g. the Museum of Vertebrate Zoology at the University of California, Berkeley.
Subdivisions
This subdivision of zoology has many further subdivisions, including:
Ichthyology - the study of fishes.
Mammalogy - the study of mammals.
Chiropterology - the study of bats.
Primatology - the study of primates.
Ornithology - the study of birds.
Herpetology - the study of reptiles.
Batrachology - the study of amphibians.
These divisions are sometimes further divided into more specific specialties.
Document 1:::
Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from 8.5 micrometres to 33.6 metres (110 ft). They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology.
Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago.
Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad
Document 2:::
Animals are multicellular eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, are able to move, reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million in total. Animals range in size from 8.5 millionths of a metre to 33.6 metres (110 ft) long and have complex interactions with each other and their environments, forming intricate food webs. The study of animals is called zoology.
Animals may be listed or indexed by many criteria, including taxonomy, status as endangered species, their geographical location, and their portrayal and/or naming in human culture.
By common name
List of animal names (male, female, young, and group)
By aspect
List of common household pests
List of animal sounds
List of animals by number of neurons
By domestication
List of domesticated animals
By eating behaviour
List of herbivorous animals
List of omnivores
List of carnivores
By endangered status
IUCN Red List endangered species (Animalia)
United States Fish and Wildlife Service list of endangered species
By extinction
List of extinct animals
List of extinct birds
List of extinct mammals
List of extinct cetaceans
List of extinct butterflies
By region
Lists of amphibians by region
Lists of birds by region
Lists of mammals by region
Lists of reptiles by region
By individual (real or fictional)
Real
Lists of snakes
List of individual cats
List of oldest cats
List of giant squids
List of individual elephants
List of historical horses
List of leading Thoroughbred racehorses
List of individual apes
List of individual bears
List of giant pandas
List of individual birds
List of individual bovines
List of individual cetaceans
List of individual dogs
List of oldest dogs
List of individual monkeys
List of individual pigs
List of w
Document 3:::
Roshd Biological Education is a quarterly science educational magazine covering recent developments in biology and biology education for a Persian-speaking audience of biology teachers. Founded in 1985, it is published by The Teaching Aids Publication Bureau, Organization for Educational Planning and Research, Ministry of Education, Iran. Roshd Biological Education has an editorial board composed of Iranian biologists, experts in biology education, science journalists and biology teachers.
It is read by both biology teachers and students, as a way of launching innovations and new trends in biology education, and helping biology teachers to teach biology in better and more effective ways.
Magazine layout
As of Autumn 2012, the magazine is laid out as follows:
Editorial—often offering a point of view from the editor in chief on an educational and/or biological topic.
Explore—New research methods and results in biology and/or education.
World—Reports on biology education worldwide.
In Brief—Summaries of research news and discoveries.
Trends—showing how new technology is altering the way we live our lives.
Point of View—Offering personal commentaries on contemporary topics.
Essay or Interview—often with a pioneering biology and/or education researcher or an influential science education leader.
Muslim Biologists—Short histories of Muslim Biologists.
Environment—An article on Iranian environment and its problems.
News and Reports—Offering short news and reports events on biology education.
In Brief—Short articles explaining interesting facts.
Questions and Answers—Questions about biology concepts and their answers.
Book and periodical Reviews—About new publication on biology and/or education.
Reactions—Letter to the editors.
Editorial staff
Mohammad Karamudini, editor in chief
History
Roshd Biological Education started in 1985 together with many other magazines covering other sciences and the arts. The first editor was Dr. Nouri-Dalooi, th
Document 4:::
This is a list of scientific journals which cover the field of zoology.
A
Acta Entomologica Musei Nationalis Pragae
Acta Zoologica Academiae Scientiarum Hungaricae
Acta Zoologica Bulgarica
Acta Zoológica Mexicana
Acta Zoologica: Morphology and Evolution
African Entomology
African Invertebrates
African Journal of Herpetology
African Zoology
Alces
American Journal of Primatology
Animal Biology, formerly Netherlands Journal of Zoology
Animal Cognition
Arctic
Australian Journal of Zoology
Australian Mammalogy
B
Bulgarian Journal of Agricultural Science
Bulletin of the American Museum of Natural History
C
Canadian Journal of Zoology
Caribbean Herpetology
Central European Journal of Biology
Contributions to Zoology
Copeia
Crustaceana
E
Environmental Biology of Fishes
F
Frontiers in Zoology
H
Herpetological Monographs
I
Integrative and Comparative Biology, formerly American Zoologist
International Journal of Acarology
International Journal of Primatology
J
M
Malacologia
N
North-Western Journal of Zoology
P
Physiological and Biochemical Zoology
R
Raffles Bulletin of Zoology
Rangifer
Russian Journal of Nematology
V
The Veliger
W
Worm Runner's Digest
Z
See also
List of biology journals
List of ornithology journals
List of entomology journals
Lists of academic journals
Zoology-related lists
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Select the vertebrate.
A. red-spotted purple butterfly
B. bess beetle
C. domestic cat
D. earthworm
Answer:
|
ai2_arc-329
|
multiple_choice
|
Which of the following is an example of a chemical change?
|
[
"clouds forming",
"sugar dissolving",
"water freezing",
"a candle burning"
] |
D
|
Relevant Documents:
Document 0:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferromagnetic materials can become magnetic. The process is reve
Document 1:::
Desiccation () is the state of extreme dryness, or the process of extreme drying. A desiccant is a hygroscopic (attracts and holds water) substance that induces or sustains such a state in its local vicinity in a moderately sealed container.
Industry
Desiccation is widely employed in the oil and gas industry. These materials are obtained in a hydrated state, but the water content leads to corrosion or is incompatible with downstream processing. Removal of water is achieved by cryogenic condensation, absorption into glycols, and absorption onto desiccants such as silica gel.
Laboratory
A desiccator is a heavy glass or plastic container, now somewhat antiquated, used in practical chemistry for drying or keeping small amounts of materials very dry. The material is placed on a shelf, and a drying agent or desiccant, such as dry silica gel or anhydrous sodium hydroxide, is placed below the shelf.
Often some sort of humidity indicator is included in the desiccator to show, by color changes, the level of humidity. These indicators are in the form of indicator plugs or indicator cards. The active chemical is cobalt chloride (CoCl2). Anhydrous cobalt chloride is blue. When it bonds with two water molecules, (CoCl2•2H2O), it turns purple. Further hydration results in the pink hexaaquacobalt(II) chloride complex [Co(H2O)6]2+.
Biology and ecology
In biology and ecology, desiccation refers to the drying out of a living organism, such as when aquatic animals are taken out of water, slugs are exposed to salt, or when plants are exposed to sunlight or drought. Ecologists frequently study and assess various organisms' susceptibility to desiccation. For example, in one study the investigators found that Caenorhabditis elegans dauer is a true anhydrobiote that can withstand extreme desiccation and that the basis of this ability is founded in the metabolism of trehalose.
DNA damage and repair
Several bacterial species have been shown to accumulate DNA damages upon desicc
Document 2:::
A breakthrough curve in adsorption is the course of the effluent adsorptive concentration at the outlet of a fixed bed adsorber. Breakthrough curves are important for adsorptive separation technologies and for the characterization of porous materials.
Importance
Since almost all adsorptive separation processes are dynamic - meaning they run under flow - porous materials intended for such applications must also have their separation performance tested under flow. Because separation processes operate on mixtures of different components, measuring several breakthrough curves yields thermodynamic mixture equilibria - mixture sorption isotherms - that are hardly accessible with static manometric sorption characterization. This enables the determination of sorption selectivities in the gaseous and liquid phases.
The determination of breakthrough curves is the foundation of many other processes, like the pressure swing adsorption. Within this process, the loading of one adsorber is equivalent to a breakthrough experiment.
Measurement
A fixed bed of porous materials (e.g. activated carbons and zeolites) is pressurized and purged with a carrier gas. After becoming stationary one or more adsorptives are added to the carrier gas, resulting in a step-wise change of the inlet concentration. This is in contrast to chromatographic separation processes, where pulse-wise changes of the inlet concentrations are used. The course of the adsorptive concentrations at the outlet of the fixed bed are monitored.
Results
Integration of the area above the entire breakthrough curve gives the maximum loading of the adsorptive material. Additionally, the duration of the breakthrough experiment until a certain threshold of the adsorptive concentration at the outlet can be measured, which enables the calculation of a technically usable sorption capacity. Up to this time, the quality of the product stream can be maintained. The shape of the breakthrough curves contains informat
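The integration described under Results can be sketched numerically. A minimal example of my own (synthetic sigmoidal curve and made-up flow, concentration, and bed-mass values, not real measurements), integrating the area above the breakthrough curve with the trapezoidal rule:

    import numpy as np

    # Synthetic breakthrough curve: outlet concentration relative to the
    # constant inlet concentration, c(t)/c_in, rising from 0 to 1.
    t = np.linspace(0.0, 100.0, 201)                 # time, min
    c_rel = 1.0 / (1.0 + np.exp(-(t - 50.0) / 5.0))  # sigmoidal breakthrough

    flow = 0.5        # volumetric flow rate, L/min (assumed)
    c_in = 2.0        # inlet concentration, g/L (assumed)
    mass_bed = 100.0  # adsorbent mass, g (assumed)

    # Area above the curve = integral of (1 - c/c_in) dt, evaluated with
    # the trapezoidal rule; multiplied by flow * c_in it gives the total
    # mass adsorbed until the bed is saturated.
    above = 1.0 - c_rel
    integral = np.sum(0.5 * (above[:-1] + above[1:]) * np.diff(t))
    loading = flow * c_in * integral / mass_bed
    print(f"equilibrium loading ~ {loading:.2f} g adsorbate per g adsorbent")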
Document 3:::
In materials science, liquefaction is a process that generates a liquid from a solid or a gas or that generates a non-liquid phase which behaves in accordance with fluid dynamics.
It occurs both naturally and artificially. As an example of the latter, a "major commercial application of liquefaction is the liquefaction of air to allow separation of the constituents, such as oxygen, nitrogen, and the noble gases." Another is the conversion of solid coal into a liquid form usable as a substitute for liquid fuels.
Geology
In geology, soil liquefaction refers to the process by which water-saturated, unconsolidated sediments are transformed into a substance that acts like a liquid, often in an earthquake. Soil liquefaction was blamed for building collapses in the city of Palu, Indonesia in October 2018.
In a related phenomenon, liquefaction of bulk materials in cargo ships may cause a dangerous shift in the load.
Physics and chemistry
In physics and chemistry, the phase transitions from solid and gas to liquid (melting and condensation, respectively) may be referred to as liquefaction. The melting point (sometimes called liquefaction point) is the temperature and pressure at which a solid becomes a liquid. In commercial and industrial situations, the process of condensing a gas to liquid is sometimes referred to as liquefaction of gases.
Coal
Coal liquefaction is the production of liquid fuels from coal using a variety of industrial processes.
Dissolution
Liquefaction is also used in commercial and industrial settings to refer to mechanical dissolution of a solid by mixing, grinding or blending with a liquid.
Food preparation
In kitchen or laboratory settings, solids may be chopped into smaller parts sometimes in combination with a liquid, for example in food preparation or laboratory use. This may be done with a blender, or liquidiser in British English.
Irradiation
Liquefaction of silica and silicate glasses occurs on electron beam irradiation of nanos
Document 4:::
At equilibrium, the relationship between water content and equilibrium relative humidity of a material can be displayed graphically by a curve, the so-called moisture sorption isotherm.
For each humidity value, a sorption isotherm indicates the corresponding water content value at a given, constant temperature. If the composition or quality of the material changes, then its sorption behaviour also changes. Because of the complexity of the sorption process, the isotherms cannot be determined explicitly by calculation but must be recorded experimentally for each product.
The relationship between water content and water activity (aw) is complex. An increase in aw is usually accompanied by an increase in water content, but in a non-linear fashion. This relationship between water activity and moisture content at a given temperature is called the moisture sorption isotherm. These curves are determined experimentally and constitute the fingerprint of a food system.
BET theory (Brunauer-Emmett-Teller) provides a calculation to describe the physical adsorption of gas molecules on a solid surface. Because of the complexity of the process, these calculations are only moderately successful; however, Stephen Brunauer was able to classify sorption isotherms into five generalized shapes as shown in Figure 2. He found that Type II and Type III isotherms require highly porous materials or desiccants, with monolayer adsorption first, followed by multilayer adsorption and finally capillary condensation, explaining these materials' high moisture capacity at high relative humidity.
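For reference, the BET isotherm underlying this classification can be evaluated directly. A minimal sketch using the standard two-parameter BET form (the parameter values are made up for illustration):

    def bet_loading(x, v_m, c):
        # Standard BET multilayer adsorption isotherm.
        # x: relative pressure p/p0 (0 <= x < 1)
        # v_m: monolayer capacity; c: BET energy constant.
        return v_m * c * x / ((1.0 - x) * (1.0 - x + c * x))

    # Hypothetical monolayer capacity 1.0 and BET constant 100: loading
    # rises steeply near saturation as multilayers build up.
    for x in (0.05, 0.2, 0.5, 0.9):
        print(x, round(bet_loading(x, v_m=1.0, c=100.0), 2))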
Care must be used in extracting data from isotherms, as the representation for each axis may vary in its designation. Brunauer provided the vertical axis as moles of gas adsorbed divided by the moles of the dry material, and on the horizontal axis he used the ratio of partial pressure of the gas just over the sample, divided by its partial pressure at saturation. More modern isotherms showing the
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which of the following is an example of a chemical change?
A. clouds forming
B. sugar dissolving
C. water freezing
D. a candle burning
Answer:
|
|
sciq-4393
|
multiple_choice
|
What science is the study of the occurrence, distribution, and determinants of health and disease in a population?
|
[
"epidemiology",
"physiology",
"histology",
"toxicology"
] |
A
|
Relevant Documents:
Document 0:::
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research.
Americas
Human Biology major at Stanford University, Palo Alto (since 1970)
Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government.
Human and Social Biology (Caribbean)
Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment.
Human Biology Program at University of Toronto
The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia
BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002)
BSc (honours) Human Biology at AIIMS (New
Document 1:::
The mathematical sciences are a group of areas of study that includes, in addition to mathematics, those academic disciplines that are primarily mathematical in nature but may not be universally considered subfields of mathematics proper.
Statistics, for example, is mathematical in its methods but grew out of bureaucratic and scientific observations, which merged with inverse probability and then grew through applications in some areas of physics, biometrics, and the social sciences to become its own separate, though closely allied, field. Theoretical astronomy, theoretical physics, theoretical and applied mechanics, continuum mechanics, mathematical chemistry, actuarial science, computer science, computational science, data science, operations research, quantitative biology, control theory, econometrics, geophysics and mathematical geosciences are likewise other fields often considered part of the mathematical sciences.
Some institutions offer degrees in mathematical sciences (e.g. the United States Military Academy, Stanford University, and University of Khartoum) or applied mathematical sciences (for example, the University of Rhode Island).
See also
Document 2:::
This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines.
Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mind: neuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, it has provided information on certain diseases which has overall aided in the understanding of human health.
Basic life science branches
Biology – scientific study of life
Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans
Astrobiology – the study of the formation and presence of life in the universe
Bacteriology – study of bacteria
Biotechnology – study of combination of both the living organism and technology
Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level
Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge
Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum
Document 3:::
Statistics is the mathematical science involving the collection, analysis and interpretation of data. A number of specialties have evolved to apply statistical theory and methods to various disciplines. Certain topics have "statistical" in their name but relate to manipulations of probability distributions rather than to statistical analysis.
Actuarial science is the discipline that applies mathematical and statistical methods to assess risk in the insurance and finance industries.
Astrostatistics is the discipline that applies statistical analysis to the understanding of astronomical data.
Biostatistics is a branch of biology that studies biological phenomena and observations by means of statistical analysis, and includes medical statistics.
Business analytics is a rapidly developing business process that applies statistical methods to data sets (often very large) to develop new insights and understanding of business performance & opportunities
Chemometrics is the science of relating measurements made on a chemical system or process to the state of the system via application of mathematical or statistical methods.
Demography is the statistical study of all populations. It can be a very general science that can be applied to any kind of dynamic population, that is, one that changes over time or space.
Econometrics is a branch of economics that applies statistical methods to the empirical study of economic theories and relationships.
Environmental statistics is the application of statistical methods to environmental science. Weather, climate, air and water quality are included, as are studies of plant and animal populations.
Epidemiology is the study of factors affecting the health and illness of populations, and serves as the foundation and logic of interventions made in the interest of public health and preventive medicine.
Forensic statistics is the application of probability models and statistical techniques to scientific evidence, such as DNA evidence, and the law. In
Document 4:::
Biomedicine (also referred to as Western medicine, mainstream medicine or conventional medicine) is a branch of medical science that applies biological and physiological principles to clinical practice. Biomedicine stresses standardized, evidence-based treatment validated through biological research, with treatment administered via formally trained doctors, nurses, and other such licensed practitioners.
Biomedicine also can relate to many other categories in health and biological related fields. It has been the dominant system of medicine in the Western world for more than a century.
It includes many biomedical disciplines and areas of specialty that typically contain the "bio-" prefix such as molecular biology, biochemistry, biotechnology, cell biology, embryology, nanobiotechnology, biological engineering, laboratory medical biology, cytogenetics, genetics, gene therapy, bioinformatics, biostatistics, systems biology, neuroscience, microbiology, virology, immunology, parasitology, physiology, pathology, anatomy, toxicology, and many others that generally concern life sciences as applied to medicine.
Overview
Biomedicine is the cornerstone of modern health care and laboratory diagnostics. It concerns a wide range of scientific and technological approaches: from in vitro diagnostics to in vitro fertilisation, from the molecular mechanisms of cystic fibrosis to the population dynamics of the HIV virus, from the understanding of molecular interactions to the study of carcinogenesis, from a single-nucleotide polymorphism (SNP) to gene therapy.
Biomedicine is based on molecular biology and combines all issues of developing molecular medicine into large-scale structural and functional relationships of the human genome, transcriptome, proteome, physiome and metabolome with the particular point of view of devising new technologies for prediction, diagnosis and therapy.
Biomedicine involves the study of (patho-) physiological processes with methods from biology and
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What science is the study of the occurrence, distribution, and determinants of health and disease in a population?
A. epidemiology
B. physiology
C. histology
D. toxicology
Answer:
|
|
sciq-767
|
multiple_choice
|
Which cycle is named after the scientist Melvin Calvin?
|
[
"melvin cycle",
"calvin cycle",
"krebs cycle",
"melcal cycle"
] |
B
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 1:::
The School of Biological Sciences is one of the academic units of the University of California, Irvine (UCI). The school is divided into four departments: developmental and cell biology, ecology and evolutionary biology, molecular biology and biochemistry, and neurobiology and behavior. With over 3,700 students it is among the four largest schools in the university; in 2013 it contained 19.4 percent of the student population.
It is consistently ranked in the top one hundred in U.S. News & World Report's yearly list of best graduate schools.
History
The School of Biological Sciences first opened in 1965 at the University of California, Irvine and was one of the first schools founded when the university campus opened. The school's founding Dean, Edward A. Steinhaus, had four founding department chairs and started out with 17 professors.
On March 12, 2014, the School was officially renamed after UCI professor and donor Francisco J. Ayala by then-Chancellor Michael V. Drake. Ayala had previously pledged to donate $10 million to the School of Biological Sciences in 2011. The school reverted to its previous name in June 2018, after a university investigation confirmed that Ayala had sexually harassed at least four women colleagues and graduate students.
Notes
External links
University of California, Irvine
Biology education
Science education in the United States
Science and technology in Greater Los Angeles
University subdivisions in California
Educational institutions established in 1965
1965 establishments in California
Document 2:::
The pseudo Stirling cycle, also known as the adiabatic Stirling cycle, is a thermodynamic cycle with an adiabatic working volume and isothermal heater and cooler, in contrast to the ideal Stirling cycle with an isothermal working space. The working fluid has no bearing on the maximum thermal efficiencies of the pseudo Stirling cycle.
Practical Stirling engines usually follow an adiabatic Stirling cycle, as the ideal Stirling cycle cannot be implemented in practice.
The nomenclature can cause confusion: practical engines and the ideal cycle are both named Stirling, and the qualifier (ideal or adiabatic) is often omitted.
History
The pseudo Stirling cycle was designed to address predictive shortcomings in the ideal isothermal Stirling cycle. Specifically, the ideal cycle does not give usable figures or criteria for judging the performance of real-world Stirling engines.
See also
Stirling engine
Stirling cycle
Document 3:::
The Siemens cycle is a technique used to cool or liquefy gases. A gas is compressed, leading to an increase in its temperature due to the directly proportional relationship between temperature and pressure (as stated by Gay-Lussac's law). The compressed gas is then cooled by a heat exchanger and decompressed, resulting in a (possibly condensed) gas that is colder than the original at the same pressure.
Carl Wilhelm Siemens patented the Siemens cycle in 1857.
In the Siemens cycle the gas is:
1. Heated – by compressing the gas, adding the external energy the gas needs to run through the cycle
2. Cooled – by immersing the gas in a cooler environment, losing some of its heat (and energy)
3. Cooled – through a heat exchanger with the returning gas from the next (and last) stage
4. Cooled further – by expanding the gas and doing work, removing heat (and energy)
The gas, which is now at its coolest in the current cycle, is recycled and sent back to be –
5. Heated – when serving as the coolant for stage 3, and then
6. Resent to stage one to start the next cycle, where it is slightly reheated by compression.
In each cycle the net cooling is more than the heat added at the beginning of the cycle. As the gas passes more cycles and becomes cooler, reaching lower temperatures at the expanding cylinder (stage 4 of the Siemens cycle) becomes more difficult.
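The cooling in stage 4 follows the reversible adiabatic relation for an ideal gas, T2 = T1 (p2/p1)^((gamma - 1)/gamma). A minimal numeric sketch (ideal-gas assumption; the pressures and starting temperature are illustrative, not from the source):

    def adiabatic_temperature(t1_kelvin, p1, p2, gamma=1.4):
        # Reversible adiabatic expansion/compression of an ideal gas:
        # T2 = T1 * (p2 / p1) ** ((gamma - 1) / gamma)
        return t1_kelvin * (p2 / p1) ** ((gamma - 1.0) / gamma)

    # Expanding air (gamma ~ 1.4) from 10 bar to 1 bar, starting at 300 K:
    print(round(adiabatic_temperature(300.0, p1=10.0, p2=1.0), 1))  # ~155.4 K

Each pass through the expansion stage thus removes a large fraction of the absolute temperature, which is also why, as noted above, reaching ever lower temperatures at the expanding cylinder becomes progressively harder.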
See also
Adiabatic process
Gas compressor
Hampson–Linde cycle
Regenerative cooling
Timeline of low-temperature technology
Document 4:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which cycle is named after the scientist Melvin Calvin?
A. melvin cycle
B. calvin cycle
C. krebs cycle
D. melcal cycle
Answer:
|
|
sciq-276
|
multiple_choice
|
The mass of atoms is based on the number of protons and neutrons in what?
|
[
"nucleus",
"molecules",
"electrons",
"components"
] |
A
|
Relevant Documents:
Document 0:::
The atomic mass ($m_a$ or $m$) is the mass of an atom. Although the SI unit of mass is the kilogram (symbol: kg), atomic mass is often expressed in the non-SI unit dalton (symbol: Da) – equivalently, unified atomic mass unit (u). 1 Da is defined as $\tfrac{1}{12}$ of the mass of a free carbon-12 atom at rest in its ground state. The protons and neutrons of the nucleus account for nearly all of the total mass of atoms, with the electrons and nuclear binding energy making minor contributions. Thus, the numeric value of the atomic mass when expressed in daltons has nearly the same value as the mass number. Conversion between mass in kilograms and mass in daltons can be done using the atomic mass constant $m_u$.
The formula used for conversion is:
$1\ \mathrm{Da} = m_u = \frac{M(^{12}\mathrm{C})}{12\,N_A} = \frac{M_u}{N_A}$
where $M_u$ is the molar mass constant, $N_A$ is the Avogadro constant, and $M(^{12}\mathrm{C})$ is the experimentally determined molar mass of carbon-12.
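A minimal numeric sketch of the conversion (my own illustration; the constant is the CODATA 2018 value of the atomic mass constant):

    M_U_KG = 1.66053906660e-27  # atomic mass constant m_u in kg (CODATA 2018)

    def daltons_to_kg(mass_da):
        # Convert an atomic or molecular mass from daltons to kilograms.
        return mass_da * M_U_KG

    # Carbon-12 is exactly 12 Da by definition:
    print(daltons_to_kg(12.0))  # ~1.9927e-26 kg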
The relative isotopic mass (see section below) can be obtained by dividing the atomic mass $m_a$ of an isotope by the atomic mass constant $m_u$, yielding a dimensionless value. Thus, the atomic mass of a carbon-12 atom is 12 Da by definition, but the relative isotopic mass of a carbon-12 atom is simply 12. The sum of relative isotopic masses of all atoms in a molecule is the relative molecular mass.
The atomic mass of an isotope and the relative isotopic mass refers to a certain specific isotope of an element. Because substances are usually not isotopically pure, it is convenient to use the elemental atomic mass which is the average (mean) atomic mass of an element, weighted by the abundance of the isotopes. The dimensionless (standard) atomic weight is the weighted mean relative isotopic mass of a (typical naturally occurring) mixture of isotopes.
The atomic mass of atoms, ions, or atomic nuclei is slightly less than the sum of the masses of their constituent protons, neutrons, and electrons, due to binding energy mass loss (per $E = mc^2$).
Relative isotopic mass
Relative isotopic mass (a property of a single atom) is not to be confused w
Document 1:::
The mass recorded by a mass spectrometer can refer to different physical quantities depending on the characteristics of the instrument and the manner in which the mass spectrum is displayed.
Units
The dalton (symbol: Da) is the standard unit that is used for indicating mass on an atomic or molecular scale (atomic mass). The unified atomic mass unit (symbol: u) is equivalent to the dalton. One dalton is approximately the mass of a single proton or neutron. The unified atomic mass unit has a value of approximately 1.66053906660 × 10⁻²⁷ kg. The amu without the "unified" prefix is an obsolete unit based on oxygen, which was replaced in 1961.
Molecular mass
The molecular mass (abbreviated Mr) of a substance, formerly also called molecular weight and abbreviated as MW, is the mass of one molecule of that substance, relative to the unified atomic mass unit u (equal to 1/12 the mass of one atom of 12C). Due to this relativity, the molecular mass of a substance is commonly referred to as the relative molecular mass, and abbreviated to Mr.
Average mass
The average mass of a molecule is obtained by summing the average atomic masses of the constituent elements. For example, the average mass of natural water with formula H2O is 1.00794 + 1.00794 + 15.9994 = 18.01528 Da.
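The same water example as a short sketch in code, using the atomic masses quoted in the sentence above:

```python
# Average molecular mass: sum of the average atomic masses of the constituents.
average_atomic_mass = {"H": 1.00794, "O": 15.9994}  # Da, as quoted above

water = {"H": 2, "O": 1}  # H2O
m_avg = sum(average_atomic_mass[el] * n for el, n in water.items())
print(f"Average mass of H2O: {m_avg:.5f} Da")  # 18.01528 Da
```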
Mass number
The mass number, also called the nucleon number, is the number of protons and neutrons in an atomic nucleus. The mass number is unique for each isotope of an element and is written either after the element name or as a superscript to the left of an element's symbol. For example, carbon-12 (12C) has 6 protons and 6 neutrons.
Nominal mass
The nominal mass for an element is the mass number of its most abundant naturally occurring stable isotope, and for an ion or molecule, the nominal mass is the sum of the nominal masses of the constituent atoms. Isotope abundances are tabulated by IUPAC: for example carbon has two stable isotopes 12C at 98.9% natural abundance and 13C at 1.1% natural abundance, thus the nominal mass of carbon is 12.
Document 2:::
The subatomic scale is the domain of physical size that encompasses objects smaller than an atom. It is the scale at which the atomic constituents, such as the nucleus containing protons and neutrons, and the electrons in their orbitals, become apparent.
The subatomic scale includes the many thousands of times smaller subnuclear scale, which is the scale of physical size at which constituents of the protons and neutrons - particularly quarks - become apparent.
See also
Astronomical scale: the opposite end of the spectrum
Subatomic particles
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 4:::
In particle physics, the electron mass (symbol: m_e) is the mass of a stationary electron, also known as the invariant mass of the electron. It is one of the fundamental constants of physics. It has a value of about 9.109×10⁻³¹ kg, or about 5.486×10⁻⁴ Da, which has an energy-equivalent of about 8.187×10⁻¹⁴ J, or about 0.511 MeV.
Terminology
The term "rest mass" is sometimes used because in special relativity the mass of an object can be said to increase in a frame of reference that is moving relative to that object (or if the object is moving in a given frame of reference). Most practical measurements are carried out on moving electrons. If the electron is moving at a relativistic velocity, any measurement must use the correct expression for mass. Such correction becomes substantial for electrons accelerated by voltages of over .
For example, the relativistic expression for the total energy, E, of an electron moving at speed v is

E = γ m_e c²,

where c is the speed of light and γ is the Lorentz factor,

γ = 1 / √(1 − v²/c²).

Here m_e is the "rest mass", or more simply just the "mass", of the electron. This quantity is frame invariant and velocity independent. However, some texts group the Lorentz factor with the mass factor to define a new quantity called the relativistic mass, m_rel = γ m_e.
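As a rough numerical illustration of the formula above (a minimal sketch; the chosen speed is arbitrary and the constants are rounded CODATA values):

```python
import math

c = 2.99792458e8     # speed of light, m/s
m_e = 9.1093837e-31  # electron rest mass, kg

def total_energy(v: float) -> float:
    """Relativistic total energy E = gamma * m_e * c**2, in joules."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)  # Lorentz factor
    return gamma * m_e * c ** 2

print(total_energy(0.0))      # rest energy, ~8.187e-14 J (~0.511 MeV)
print(total_energy(0.5 * c))  # gamma ~= 1.155, so ~9.45e-14 J
```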
Determination
Since the electron mass determines a number of observed effects in atomic physics, there are potentially many ways to determine its mass from an experiment, if the values of other physical constants are already considered known.
Historically, the mass of the electron was determined directly from combining two measurements. The mass-to-charge ratio of the electron was first estimated by Arthur Schuster in 1890 by measuring the deflection of "cathode rays" due to a known magnetic field in a cathode ray tube. Seven years later J. J. Thomson showed that cathode rays consist of streams of particles, to be called electrons, and made more precise measurements of their mass-to-charge ratio again using a cathode ray tube.
The second measurement was of the charge of the electron. This was measured directly by Robert Millikan in his oil-drop experiment of 1909.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The mass of atoms is based on the number of protons and neutrons in what?
A. nucleus
B. molecules
C. electrons
D. components
Answer:
|
|
sciq-875
|
multiple_choice
|
What gas is released into the atmosphere when fossil fuels are burned?
|
[
"co2",
"helium",
"glucose",
"hydrogen"
] |
A
|
Relevant Documents:
Document 0:::
Volcanic gases are gases given off by active (or, at times, by dormant) volcanoes. These include gases trapped in cavities (vesicles) in volcanic rocks, dissolved or dissociated gases in magma and lava, or gases emanating from lava, from volcanic craters or vents. Volcanic gases can also be emitted through groundwater heated by volcanic action.
The sources of volcanic gases on Earth include:
primordial and recycled constituents from the Earth's mantle,
assimilated constituents from the Earth's crust,
groundwater and the Earth's atmosphere.
Substances that may become gaseous or give off gases when heated are termed volatile substances.
Composition
The principal components of volcanic gases are water vapor (H2O), carbon dioxide (CO2), sulfur either as sulfur dioxide (SO2) (high-temperature volcanic gases) or hydrogen sulfide (H2S) (low-temperature volcanic gases), nitrogen, argon, helium, neon, methane, carbon monoxide and hydrogen. Other compounds detected in volcanic gases are oxygen (meteoric), hydrogen chloride, hydrogen fluoride, hydrogen bromide, sulfur hexafluoride, carbonyl sulfide, and organic compounds. Exotic trace compounds include mercury, halocarbons (including CFCs), and halogen oxide radicals.
The abundance of gases varies considerably from volcano to volcano, with volcanic activity and with tectonic setting. Water vapour is consistently the most abundant volcanic gas, normally comprising more than 60% of total emissions. Carbon dioxide typically accounts for 10 to 40% of emissions.
Volcanoes located at convergent plate boundaries emit more water vapor and chlorine than volcanoes at hot spots or divergent plate boundaries. This is caused by the addition of seawater into magmas formed at subduction zones. Convergent plate boundary volcanoes also have higher H2O/H2, H2O/CO2, CO2/He and N2/He ratios than hot spot or divergent plate boundary volcanoes.
Magmatic gases and high-temperature volcanic gases
Magma contains dissolved volatile components.
Document 1:::
Carbon dioxide is a chemical compound with the chemical formula CO2. It is made up of molecules that each have one carbon atom covalently double bonded to two oxygen atoms. It is found in the gas state at room temperature, and as the source of available carbon in the carbon cycle, atmospheric CO2 is the primary carbon source for life on Earth. In the air, carbon dioxide is transparent to visible light but absorbs infrared radiation, acting as a greenhouse gas. Carbon dioxide is soluble in water and is found in groundwater, lakes, ice caps, and seawater. When carbon dioxide dissolves in water, it forms carbonate and mainly bicarbonate (HCO3−), which causes ocean acidification as atmospheric levels increase.
It is a trace gas in Earth's atmosphere at 421 parts per million (ppm), or about 0.04% (as of May 2022), having risen from pre-industrial levels of 280 ppm, or about 0.028%. Burning fossil fuels is the primary cause of these increased concentrations and also the primary cause of climate change.
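A quick arithmetic check of the concentrations quoted above (a sketch using only the figures in this paragraph):

```python
# ppm to percent: 1 ppm = 1e-4 percent.
pre_industrial_ppm = 280
current_ppm = 421  # as of May 2022

print(f"{current_ppm} ppm = {current_ppm * 1e-4:.3f}%")              # 0.042%
print(f"Relative rise: {current_ppm / pre_industrial_ppm - 1:.1%}")  # ~50.4%
```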
Its concentration in Earth's pre-industrial atmosphere since late in the Precambrian was regulated by organisms and geological phenomena. Plants, algae and cyanobacteria use energy from sunlight to synthesize carbohydrates from carbon dioxide and water in a process called photosynthesis, which produces oxygen as a waste product. In turn, oxygen is consumed and CO2 is released as waste by all aerobic organisms when they metabolize organic compounds to produce energy by respiration. CO2 is released from organic materials when they decay or combust, such as in forest fires. Since plants require CO2 for photosynthesis, and humans and animals depend on plants for food, CO2 is necessary for the survival of life on Earth.
Carbon dioxide is 53% more dense than dry air, but is long lived and thoroughly mixes in the atmosphere. About half of excess CO2 emissions to the atmosphere are absorbed by land and ocean carbon sinks. These sinks can become saturated and are volatile, as decay and wildfires result i
Document 2:::
Butane or n-butane is an alkane with the formula C4H10. Butane is a highly flammable, colorless, easily liquefied gas that quickly vaporizes at room temperature and pressure. The name butane comes from the root but- (from butyric acid, named after the Greek word for butter) and the suffix -ane. It was discovered in crude petroleum in 1864 by Edmund Ronalds, who was the first to describe its properties, and commercialized by Walter O. Snelling in the early 1910s.
Butane is one of a group of liquefied petroleum gases (LP gases). The others include propane, propylene, butadiene, butylene, isobutylene, and mixtures thereof. Butane burns more cleanly than both gasoline and coal.
History
The first synthesis of butane was accidentally achieved by British chemist Edward Frankland in 1849 from ethyl iodide and zinc, but he did not realize that the ethyl radical had dimerized, and he misidentified the substance.
The discoverer of butane called it "hydride of butyl", but already in the 1860s more names were in use: "butyl hydride", "hydride of tetryl" and "tetryl hydride", "diethyl" or "ethyl ethylide" and others. August Wilhelm von Hofmann in his 1866 systemic nomenclature proposed the name "quartane", and the modern name was introduced to English from German around 1874.
Butane did not have much practical use until the 1910s, when W. Snelling identified butane and propane as components in gasoline and found that, if they were cooled, they could be stored in a volume-reduced liquified state in pressurized containers.
Density
The density of butane is highly dependent on temperature and pressure in the reservoir. For example, the density of liquid propane is 571.8±1 kg/m3 (for pressures up to 2 MPa and temperature 27±0.2 °C), while the density of liquid butane is 625.5±0.7 kg/m3 (for pressures up to 2 MPa and temperature −13±0.2 °C).
Isomers
Rotation about the central C−C bond produces two different conformations (trans and gauche) for n-butane.
Reactions
When oxygen is plentiful, butane burns to form carbon dioxide and water vapour.
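For reference, the complete combustion described above balances as follows (a standard stoichiometry check, not a quotation from the source):

```latex
\[ 2\,\mathrm{C_4H_{10}} + 13\,\mathrm{O_2} \longrightarrow 8\,\mathrm{CO_2} + 10\,\mathrm{H_2O} \]
```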
Document 3:::
Endothermic gas is a gas that inhibits or reverses oxidation on the surfaces it is in contact with. This gas is the product of incomplete combustion in a controlled environment. An example mixture is hydrogen gas (H2), nitrogen gas (N2), and carbon monoxide (CO). The hydrogen and carbon monoxide are reducing agents, so they work together to shield surfaces from oxidation.
Endothermic gas is often used as a carrier gas for gas carburizing and carbonitriding. An endothermic gas generator can be used to supply the heat needed to drive the endothermic reaction.
Synthesised in the catalytic retort(s) of endothermic generators, the gas in the endothermic atmosphere is combined with an additive gas (natural gas, propane (C3H8) or air) and is then used to improve the surface chemistry of the work positioned in the furnace.
Purposes
There are two common purposes of the atmospheres in the heat treating industry:
Protect the processed material from surface reactions (chemically inert)
Allow surface of processed material to change (chemically reactive)
Principal components of an endothermic gas generator
Principal components of endothermic gas generators:
Heating chamber for supplying heat by electric heating elements or combustion,
Vertical cylindrical retorts,
Tiny, porous ceramic pieces that are saturated with nickel, which acts as a catalyst for the reaction,
Cooling heat exchanger to cool the products of the reaction as quickly as possible to a temperature that stops any further reaction,
Control system to keep the reaction temperature consistent and to adjust the gas ratio, providing the desired dew point.
Chemical composition
Chemistry of endothermic gas generators:
N2 (nitrogen) → 45.1% (volume)
CO (carbon monoxide) → 19.6% (volume)
CO2 (carbon dioxide) → 0.4% (volume)
H2 (hydrogen) → 34.6% (volume)
CH4 (methane) → 0.3% (volume)
Dew point → +20/+50
Gas ratio → 2.6:1
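A trivial sanity check that the quoted volume fractions account for the whole mixture (values copied from the list above):

```python
# Volume fractions of the endothermic generator gas, in percent.
composition = {"N2": 45.1, "CO": 19.6, "CO2": 0.4, "H2": 34.6, "CH4": 0.3}

total = sum(composition.values())
print(f"Total: {total:.1f}% by volume")  # 100.0%
```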
Applications
Document 4:::
Biogas is a gaseous renewable energy source produced from raw materials such as agricultural waste, manure, municipal waste, plant material, sewage, green waste, wastewater, and food waste. Biogas is produced by anaerobic digestion with anaerobic organisms or methanogens inside an anaerobic digester, biodigester or a bioreactor.
The gas composition is primarily methane (CH4) and carbon dioxide (CO2) and may have small amounts of hydrogen sulfide (H2S), moisture and siloxanes. The gases methane and hydrogen can be combusted or oxidized with oxygen. This energy release allows biogas to be used as a fuel; it can be used in fuel cells and for heating purposes, such as cooking. It can also be used in a gas engine to convert the energy in the gas into electricity and heat.
After removal of carbon dioxide and hydrogen sulfide it can be compressed in the same way as natural gas and used to power motor vehicles. In the United Kingdom, for example, biogas is estimated to have the potential to replace around 17% of vehicle fuel. It qualifies for renewable energy subsidies in some parts of the world. Biogas can be cleaned and upgraded to natural gas standards, when it becomes bio-methane. Biogas is considered to be a renewable resource because its production-and-use cycle is continuous, and it generates no net carbon dioxide. From a carbon perspective, as much carbon dioxide is absorbed from the atmosphere in the growth of the primary bio-resource as is released, when the material is ultimately converted to energy.
Production
Biogas is produced by microorganisms, such as methanogens and sulfate-reducing bacteria, performing anaerobic respiration. Biogas can refer to gas produced naturally and industrially.
Natural
In soil, methane is produced in anaerobic environments by methanogens, but is mostly consumed in aerobic zones by methanotrophs. Methane emissions result when the balance favors methanogens. Wetland soils are the main natural source of methane. Other sources include ocea
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What gas is released into the atmosphere when fossil fuels are burned?
A. CO2
B. helium
C. glucose
D. hydrogen
Answer:
|
|
ai2_arc-15
|
multiple_choice
|
A ship leaks a large amount of oil near a coastal area. Which statement describes how the oil most likely will affect the coastal habitat?
|
[
"Fish reproduction rates will increase.",
"Water birds will be unable to use their wings.",
"Water plants will be exposed to more sunlight.",
"Coastal plants will have access to more nutrients."
] |
B
|
Relevant Documents:
Document 0:::
Aquatic science is the study of the various bodies of water that make up our planet including oceanic and freshwater environments. Aquatic scientists study the movement of water, the chemistry of water, aquatic organisms, aquatic ecosystems, the movement of materials in and out of aquatic ecosystems, and the use of water by humans, among other things. Aquatic scientists examine current processes as well as historic processes, and the water bodies that they study can range from tiny areas measured in millimeters to full oceans. Moreover, aquatic scientists work in interdisciplinary groups. For example, a physical oceanographer might work with a biological oceanographer to understand how physical processes, such as tropical cyclones or rip currents, affect organisms in the Atlantic Ocean. Chemists and biologists, on the other hand, might work together to see how the chemical makeup of a certain body of water affects the plants and animals that reside there. Aquatic scientists can work to tackle global problems such as global oceanic change and local problems, such as trying to understand why a drinking water supply in a certain area is polluted.
There are two main fields of study that fall within aquatic science: oceanography and limnology.
Oceanography
Oceanography refers to the study of the physical, chemical, and biological characteristics of oceanic environments. Oceanographers study the history, current condition, and future of the planet's oceans. They also study marine life and ecosystems, ocean circulation, plate tectonics, the geology of the seafloor, and the chemical and physical properties of the ocean.
Oceanography is interdisciplinary. For example, there are biological oceanographers and marine biologists. These scientists specialize in marine organisms. They study how these organisms develop, their relationship with one another, and how they interact and adapt to their environment. Biological oceanographers
Document 1:::
The University of Michigan Biological Station (UMBS) is a research and teaching facility operated by the University of Michigan. It is located on the south shore of Douglas Lake in Cheboygan County, Michigan. The station consists of 10,000 acres (40 km2) of land near Pellston, Michigan in the northern Lower Peninsula of Michigan and 3,200 acres (13 km2) on Sugar Island in the St. Mary's River near Sault Ste. Marie, in the Upper Peninsula. It is one of only 28 Biosphere Reserves in the United States.
Overview
Founded in 1909, it has grown to include approximately 150 buildings, including classrooms, student cabins, dormitories, a dining hall, and research facilities. Undergraduate and graduate courses are available in the spring and summer terms. It has a full-time staff of 15.
Since the 2000s, UMBS has increasingly focused on the measurement of climate change. Its field researchers are gauging the impact of global warming and increased levels of atmospheric carbon dioxide on the ecosystem of the upper Great Lakes region, and are using field data to improve the computer models used to forecast further change. Several archaeological digs have been conducted at the station as well.
UMBS field researchers sometimes call the station "bug camp" amongst themselves. This is believed to be due to the number of mosquitoes and other insects present. It is also known as "The Bio-Station".
The UMBS is also home to Michigan's most endangered species and one of the most endangered species in the world: the Hungerford's Crawling Water Beetle. The species lives in only five locations in the world, two of which are in Emmet County. One of these, a two and a half mile stretch downstream from the Douglas Road crossing of the East Branch of the Maple River, supports the only stable population of the Hungerford's Crawling Water Beetle, with roughly 1000 specimens. This area, though technically not part of the UMBS, is largely within and along the boundary of the University of Michigan
Document 2:::
Ecological death is the inability of an organism to function in an ecological context, leading to death. This term can be used in many fields of biology to describe any species. In the context of aquatic toxicology, a toxic chemical, or toxicant, directly affects an aquatic organism but does not immediately kill it; instead it impairs an organism's normal ecological functions which then lead to death or lack of offspring. The toxicant makes the organism unable to function ecologically in some way, even though it does not suffer obviously from the toxicant. Ecological death may be caused by sublethal toxicological effects that can be behavioral, physiological, biochemical, or histological.
Types of sublethal effects causing ecological death
Sublethal effects consist of any effects of an organism caused by a toxicant that do not include death. These effects are generally not observed well in a shorter acute toxicity test. A longer, chronic toxicity test will allow enough time for these effects to appear in an organism and for them to lead to ecological death.
Behavioral effects
Toxicants can affect an organism's behavior, which, for aquatic organisms, may impact their ability to swim, feed or avoid predators. The impacted behavior can lead to an organism's death because it may starve or get eaten by predators. Toxicants may affect behavior by impacting the sensory systems which organisms depend on to collect information about their environment or by impacting an organism's motivation to properly respond to sensory cues. If an organism is unable to use sensory cues effectively, it may be unable to respond to early warning signs of predation risk. Toxicants can also affect later stages of predation by impacting an organism's ability to respond to predators or follow through with escape strategies.
Physiological effects
Toxicants can affect an organism's physiology which may impact its growth, reproduction, and/or development. If an organism does not gr
Document 3:::
The Gulf of St. Lawrence lowland forests are a temperate broadleaf and mixed forest ecoregion of Eastern Canada, as defined by the World Wildlife Fund (WWF) categorization system.
Setting
Located on the Gulf of Saint Lawrence, the world's largest estuary, this ecoregion covers all of Prince Edward Island, the Les Îles-de-la-Madeleine of Quebec, most of east-central New Brunswick, the Annapolis Valley, Minas Basin and the Northumberland Strait coast of Nova Scotia. This area has a coastal climate of warm summers and cold and snowy winters with an average annual temperature of around 5 °C going up to 15 °C in summer, the coast is warmer than the islands or the sheltered inland valleys.
Flora
The colder climate allows more hardwood trees to grow in the Gulf of St Lawrence than in most of this part of northeast North America. Trees of the region include eastern hemlock (Tsuga canadensis), balsam fir (Abies balsamea), American elm (Ulmus americana), black ash (Fraxinus nigra), eastern white pine (Pinus strobus), red maple (Acer rubrum), northern red oak (Quercus rubra), black spruce (Picea mariana), red spruce (Picea rubens) and white spruce (Picea glauca).
Fauna
The forests are home to a variety of wildlife including American black bear (Ursus americanus), moose (Alces alces), white-tailed deer (Odocoileus virginianus), red fox (Vulpes vulpes), snowshoe hare (Lepus americanus), North American porcupine (Erithyzon dorsatum), fisher (Martes pennanti), North American beaver (Castor canadensis), bobcat (Lynx rufus), American marten (Martes americana), raccoon (Procyon lotor) and muskrat (Ondatra zibethica). The area is habitat for maritime ringlet butterflies (Coenonympha nipisiquit) and other invertebrates. Birds include many seabirds, a large colony of great blue heron (Ardea herodias), the largest remaining population of the endangered piping plover and one of the largest colonies of double-crested cormorant (Phalacrocorax auritus) in the world.
Threats and preservation
Document 4:::
Large woody debris (LWD) are the logs, sticks, branches, and other wood that falls into streams and rivers. This debris can influence the flow and the shape of the stream channel. Large woody debris, grains, and the shape of the bed of the stream are the three main providers of flow resistance, and are thus, a major influence on the shape of the stream channel. Some stream channels have less LWD than they would naturally because of removal by watershed managers for flood control and aesthetic reasons.
The study of woody debris is important for its forestry management implications. Plantation thinning can reduce the potential for recruitment of LWD into proximal streams. The presence of large woody debris is important in the formation of pools which serve as salmon habitat in the Pacific Northwest. Entrainment of the large woody debris in a stream can also cause erosion and scouring around and under the LWD. The amount of scouring and erosion is determined by the ratio of the diameter of the piece to the depth of the stream, and the embedding and orientation of the piece.
Influence on stream flow around bends
Large woody debris slow the flow through a bend in the stream, while accelerating flow in the constricted area downstream of the obstruction.
See also
Beaver dam
Coarse woody debris
Driftwood
Log jam
Stream restoration
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A ship leaks a large amount of oil near a coastal area. Which statement describes how the oil most likely will affect the coastal habitat?
A. Fish reproduction rates will increase.
B. Water birds will be unable to use their wings.
C. Water plants will be exposed to more sunlight.
D. Coastal plants will have access to more nutrients.
Answer:
|
|
sciq-4821
|
multiple_choice
|
How many pairs of chromosomes are there in a human cell?
|
[
"23",
"17",
"13",
"2"
] |
A
|
Relevant Documents:
Document 0:::
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices".
This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions.
Topic outline
The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area:
The course is based on and tests six skills, called scientific practices which include:
In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.
Exam
Students are allowed to use a four-function, scientific, or graphing calculator.
The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score.
Score distribution
Commonly used textbooks
Biology, AP Edition by Sylvia Mader (2012, hardcover)
Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis)
Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson)
See also
Glossary of biology
A.P Bio (TV Show)
Document 1:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools to which the student planned to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests, and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological and molecular tests. A set of 60 questions was taken by all test takers for Biology, and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average score for Molecular was 630, while Ecological was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
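A minimal sketch of the raw-score rule just described; the 200-800 scaling table is not given in the text, so only the raw score is computed, and the example counts are hypothetical:

```python
def raw_score(correct: int, incorrect: int, blank: int) -> float:
    """+1 per correct answer, -1/4 per incorrect answer, 0 for blanks."""
    assert correct + incorrect + blank == 80, "Biology E/M had 80 questions"
    return correct - 0.25 * incorrect

print(raw_score(correct=60, incorrect=12, blank=8))  # 57.0
```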
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earning a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 4:::
Advanced Placement (AP) Statistics (also known as AP Stats) is a college-level high school statistics course offered in the United States through the College Board's Advanced Placement program. This course is equivalent to a one semester, non-calculus-based introductory college statistics course and is normally offered to sophomores, juniors and seniors in high school.
One of the College Board's more recent additions, the AP Statistics exam was first administered in May 1997 to supplement the AP program's math offerings, which had previously consisted of only AP Calculus AB and BC. In the United States, enrollment in AP Statistics classes has increased at a higher rate than in any other AP class.
Students may receive college credit or upper-level college course placement upon passing the three-hour exam ordinarily administered in May. The exam consists of a multiple-choice section and a free-response section that are both 90 minutes long. Each section is weighted equally in determining the students' composite scores.
History
The Advanced Placement program has offered students the opportunity to pursue college-level courses while in high school. Along with the Educational Testing Service, the College Board administered the first AP Statistics exam in May 1997. The course was first taught to students in the 1996-1997 academic year. Prior to that, the only mathematics courses offered in the AP program included AP Calculus AB and BC. Students who didn't have a strong background in college-level math, however, found the AP Calculus program inaccessible and sometimes declined to take a math course in their senior year. Since the number of students required to take statistics in college is almost as large as the number of students required to take calculus, the College Board decided to add an introductory statistics course to the AP program. Since the prerequisites for such a program don't require mathematical concepts beyond those typically taught in a second-year algebra
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How many pairs of chromosomes are there in a human cell?
A. 23
B. 17
C. 13
D. 2
Answer:
|
|
sciq-3805
|
multiple_choice
|
What is the body of a roundworm covered with?
|
[
"slime",
"tough cuticle",
"thick scales",
"thin epidermis"
] |
B
|
Relevant Documents:
Document 0:::
In biology, setae (: seta ; from the Latin word for "bristle") are any of a number of different bristle- or hair-like structures on living organisms.
Animal setae
Protostomes
Annelid setae are stiff bristles present on the body. They help, for example, earthworms to attach to the surface and prevent backsliding during peristaltic motion. These hairs make it difficult to pull a worm straight from the ground. Setae in oligochaetes (a group including earthworms) are largely composed of chitin. They are classified according to the limb to which they are attached; for instance, notosetae are attached to notopodia; neurosetae to neuropodia.
Diptera setae are bristles present throughout the body and function as mechanoreceptors.
Crustaceans have mechano- and chemosensory setae. Setae are especially present on the mouthparts of crustaceans and can also be found on grooming limbs. In some cases, setae are modified into scale-like structures. Setae on the legs of krill and other small crustaceans help them to gather phytoplankton, capturing it so that it can be eaten.
Setae on the integument of insects are unicellular, meaning that each is formed from a single epidermal cell of a type called a trichogen, literally meaning "bristle generator". They are at first hollow and in most forms remain hollow after they have hardened. They grow through and project through a secondary or accessory cell of a type called a tormogen, which generates the special flexible membrane that connects the base of the seta to the surrounding integument. Depending partly on their form and function, setae may be called hairs, macrotrichia, chaetae, or scales. The setal membrane is not cuticularized and movement is possible. Some insects, such as Eriogaster lanestris larvae, use setae as a defense mechanism, as they can cause dermatitis when they come into contact with skin.
Deuterostomes
Vertebrates
The pads on a gecko's feet are small hair-like processes that play a role in the a
Document 1:::
A worm cast is a structure created by worms, typically on soils such as beach sands, that gives the appearance of multiple worms. Casts can also be used to trace the location of one or more worms.
Document 2:::
Vermiform (ˈvərməˌfôrm) describes something shaped like a worm. The expression is often employed in biology and anatomy to describe usually soft body parts or animals that are more or less tubular or cylindrical. The word root is Latin, vermes (worms) and formes (shaped). A well known example is the vermiform appendix, a small, blind section of the gut in humans and a number of other mammals.
A number of soft-bodied animal phyla are typically described as vermiform. The better-known ones are undoubtedly the annelids (earthworm and relatives) and the roundworms (a very common, mainly parasitic group), but a number of less-well-known phyla answer to the same description. Examples range from the minute parasitic mesozoans to the larger-bodied free-living phyla like ribbon worms, peanut worms, and priapulids.
Document 3:::
A cuticle (), or cuticula, is any of a variety of tough but flexible, non-mineral outer coverings of an organism, or parts of an organism, that provide protection. Various types of "cuticle" are non-homologous, differing in their origin, structure, function, and chemical composition.
Human anatomy
In human anatomy, "cuticle" can refer to several structures, but it is used in general parlance, and even by medical professionals, to refer to the thickened layer of skin surrounding fingernails and toenails (the eponychium), and to refer to the superficial layer of overlapping cells covering the hair shaft (cuticula pili), consisting of dead cells, that locks the hair into its follicle. It can also be used as a synonym for the epidermis, the outer layer of skin.
Cuticle of invertebrates
In zoology, the invertebrate cuticle or cuticula is a multi-layered structure outside the epidermis of many invertebrates, notably arthropods and roundworms, in which it forms an exoskeleton (see arthropod exoskeleton).
The main structural components of the nematode cuticle are proteins, highly cross-linked collagens and specialised insoluble proteins known as "cuticlins", together with glycoproteins and lipids.
The main structural component of arthropod cuticle is chitin, a polysaccharide composed of N-acetylglucosamine units, together with proteins and lipids. The proteins and chitin are cross-linked. The rigidity is a function of the types of proteins and the quantity of chitin. It is believed that the epidermal cells produce protein and also monitors the timing and amount of protein to be incorporated into the cuticle.
Often, in the cuticle of arthropods, structural coloration is observed, produced by nanostructures.
Botany
In botany, plant cuticles are protective, hydrophobic, waxy coverings produced by the epidermal cells of leaves, young shoots and all other aerial plant organs. Cuticles minimize water loss and effectively reduce pathogen entry due to their waxy secretions.
Document 4:::
Bacillary band is a specialized row of longitudinal cells of some nematodes (Trichuris and Capillaria), consisting of glandular and nonglandular cells, formed by the hypodermis. The glandular cells open up to the exterior through cuticular pores. The function of bacillary bands is unknown; their ultrastructure suggests that the gland cells may have a role in osmotic or ion regulation, and the nongland cells may function in cuticle formation and food storage.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the body of a roundworm covered with?
A. slime
B. tough cuticle
C. thick scales
D. thin epidermis
Answer:
|
|
ai2_arc-1103
|
multiple_choice
|
Swamp plants die, fall to the ground, and are buried by other dying plants. Approximately how long would it take for plants to possibly become a fossil fuel?
|
[
"1,000,000 years",
"100,000 years",
"10,000 years",
"1,000 years"
] |
A
|
Relevant Documents:
Document 0:::
Phyllocladane is a tricyclic diterpane which is generally found in gymnosperm resins. It has a formula of C20H34 and a molecular weight of 274.4840. As a biomarker, it can be used to learn about the gymnosperm input into a hydrocarbon deposit, and about the age of the deposit in general. It indicates a terrigenous origin of the source rock. Diterpanes such as phyllocladane are found in source rocks as early as the middle and late Devonian periods, which indicates any rock containing them must be no more than approximately 360 Ma old. Phyllocladane is commonly found in lignite, and like other resinites derived from gymnosperms, is naturally enriched in 13C. This enrichment is a result of the enzymatic pathways used to synthesize the compound.
The compound can be identified by GC-MS. A peak of m/z 123 is indicative of tricyclic diterpenoids in general, and phyllocladane in particular is further characterized by strong peaks at m/z 231 and m/z 189. Presence of phyllocladane and its relative abundance to other tricyclic diterpanes can be used to differentiate between various oil fields.
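A sketch of how the diagnostic ions above might be screened for programmatically; the spectrum dictionary, threshold, and function name are hypothetical illustrations, not a real GC-MS workflow:

```python
# Diagnostic ions from the text: m/z 123 (tricyclic diterpenoids generally),
# plus strong m/z 231 and m/z 189 peaks for phyllocladane.
DIAGNOSTIC_MZ = (123, 231, 189)

def looks_like_phyllocladane(spectrum: dict, threshold: float = 0.2) -> bool:
    """spectrum maps m/z -> relative intensity (0..1)."""
    return all(spectrum.get(mz, 0.0) >= threshold for mz in DIAGNOSTIC_MZ)

toy_spectrum = {123: 1.0, 189: 0.55, 231: 0.60, 274: 0.15}  # 274 ~ molecular ion
print(looks_like_phyllocladane(toy_spectrum))  # True
```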
Document 1:::
Plant ecology is a subdiscipline of ecology that studies the distribution and abundance of plants, the effects of environmental factors upon the abundance of plants, and the interactions among plants and between plants and other organisms. Examples of these are the distribution of temperate deciduous forests in North America, the effects of drought or flooding upon plant survival, and competition among desert plants for water, or effects of herds of grazing animals upon the composition of grasslands.
A global overview of the Earth's major vegetation types is provided by O.W. Archibold. He recognizes 11 major vegetation types: tropical forests, tropical savannas, arid regions (deserts), Mediterranean ecosystems, temperate forest ecosystems, temperate grasslands, coniferous forests, tundra (both polar and high mountain), terrestrial wetlands, freshwater ecosystems and coastal/marine systems. This breadth of topics shows the complexity of plant ecology, since it includes plants from floating single-celled algae up to large canopy forming trees.
One feature that defines plants is photosynthesis. Photosynthesis is the set of chemical reactions that creates glucose and oxygen, which are vital for plant life. One of the most important aspects of plant ecology is the role plants have played in creating the oxygenated atmosphere of earth, an event that occurred some 2 billion years ago. It can be dated by the deposition of banded iron formations, distinctive sedimentary rocks with large amounts of iron oxide. At the same time, plants began removing carbon dioxide from the atmosphere, thereby initiating the process of controlling Earth's climate. A long term trend of the Earth has been toward increasing oxygen and decreasing carbon dioxide, and many other events in the Earth's history, like the first movement of life onto land, are likely tied to this sequence of events.
One of the early classic books on plant ecology was written by J.E. Weaver and F.E. Clements. It
Document 2:::
Biotic material or biological derived material is any material that originates from living organisms. Most such materials contain carbon and are capable of decay.
The earliest life on Earth arose at least 3.5 billion years ago. Earlier physical evidence of life includes graphite, a biogenic substance, in 3.7 billion-year-old metasedimentary rocks discovered in southwestern Greenland, as well as "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia. Earth's biodiversity has expanded continually except when interrupted by mass extinctions. Although scholars estimate that over 99 percent of all species of life (over five billion) that ever lived on Earth are extinct, there are still an estimated 10–14 million extant species, of which about 1.2 million have been documented and over 86% have not yet been described.
Examples of biotic materials are wood, straw, humus, manure, bark, crude oil, cotton, spider silk, chitin, fibrin, and bone.
The use of biotic materials and processed biotic materials (bio-based materials) as alternatives to synthetics is popular with those who are environmentally conscious because such materials are usually biodegradable, renewable, and the processing is commonly understood and has minimal environmental impact. However, not all biotic materials are used in an environmentally friendly way, such as those that require high levels of processing, are harvested unsustainably, or are used to produce carbon emissions.
When the source of the recently living material has little importance to the product produced, such as in the production of biofuels, biotic material is simply called biomass. Many fuel sources may have biological sources, and may be divided roughly into fossil fuels, and biofuel.
In soil science, biotic material is often referred to as organic matter. Biotic materials in soil include glomalin, Dopplerite and humic acid. Some biotic material may not be considered to be organic matter.
Document 3:::
The Department of Plant Sciences at the University of Oxford, England, was an Oxford department that researched plant and fungal biology. It was part of the university's Mathematical, Physical and Life Sciences Division. From 1 August 2022 its functionality merged with the Department of Zoology to become the Department of Biology at the University of Oxford.
Herbaria
The department housed the Oxford University Herbaria that consists of two herbaria:
Fielding-Druce Herbarium.
Daubeny Herbarium.
In total the collections contain 800,000 specimens and benefits from close links with the university's Oxford Botanic Garden. The herbaria are now housed under the title of Department of Biology.
History
Forestry was an important part of the university under the name of the Imperial Forestry Institute from 1924, later the Commonwealth Forestry Institute from 1939. The Oxford Forestry Institute was incorporated into the Department of Plant Sciences in 2002, and research relating to forestry was undertaken under that name until 2022, when the department merged with the Department of Zoology to form the Department of Biology. Some students were Imperial Forest Service students, who came from many parts of the British Empire to qualify as foresters before they returned home. It ran a postgraduate MSc forestry course, Forestry and its Relation to Land Use, for many years until 2002.
In January 2021, the Oxford City Council approved the £200m construction of the Life and Mind Building, which will be the university's largest building project and combine the Departments of Experimental Psychology and Biology. It will replace the Tinbergen Building on South Parks Road, which was closed and demolished when asbestos was discovered in 2017. The building will feature multiple laboratories, teaching and testing spaces providing research facilities for 800 students and 1200 researchers. Work is expected to start in June 2021, with the building opening in September 2024.
See also
Document 4:::
Retene, methyl isopropyl phenanthrene or 1-methyl-7-isopropyl phenanthrene, C18H18, is a polycyclic aromatic hydrocarbon present in the coal tar fraction, boiling above 360 °C. It occurs naturally in the tars obtained by the distillation of resinous woods. It crystallizes in large plates, which melt at 98.5 °C and boil at 390 °C. It is readily soluble in warm ether and in hot glacial acetic acid. Sodium and boiling amyl alcohol reduce it to a tetrahydroretene, but if it is heated with phosphorus and hydriodic acid to 260 °C, a dodecahydride is formed. Chromic acid oxidizes it to retene quinone, phthalic acid and acetic acid. It forms a picrate that melts at 123-124 °C.
Retene is derived by degradation of specific diterpenoids biologically produced by conifer trees. The presence of traces of retene in the air is an indicator of forest fires; it is a major product of pyrolysis of conifer trees. It is also present in effluents from wood pulp and paper mills.
Retene, together with cadalene, simonellite and ip-iHMN, is a biomarker of vascular plants, which makes it useful for paleobotanic analysis of rock sediments. The ratio of retene/cadalene in sediments can reveal the ratio of the genus Pinaceae in the biosphere.
Health effects
A recent study has shown that retene, a component of Amazonian organic PM10, is cytotoxic to human lung cells.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Swamp plants die, fall to the ground, and are buried by other dying plants. Approximately how long would it take for plants to possibly become a fossil fuel?
A. 1,000,000 years
B. 100,000 years
C. 10,000 years
D. 1,000 years
Answer:
|
|
sciq-9741
|
multiple_choice
|
What are objects that are launched into the air called?
|
[
"stones",
"projectiles",
"booms",
"shoots"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In physics, The Monkey and the Hunter is a hypothetical scenario often used to illustrate the effect of gravity on projectile motion. It can be presented as exercise problem or as a demonstration. No live monkeys are used in the demonstrations.
The essentials of the problem are stated in many introductory guides to physics. In essence, the problem is as follows: A hunter with a blowgun goes out in the woods to hunt for monkeys and sees one hanging in a tree. The monkey releases its grip the instant the hunter fires his blowgun. Where should the hunter aim in order to hit the monkey?
Discussion
To answer this question, recall that according to Galileo's law, all objects fall with the same constant acceleration of gravity (about 9.8 metres per second per second near the Earth's surface), regardless of the object's weight. Furthermore, horizontal motions and vertical motions are independent: gravity acts only upon an object's vertical velocity, not upon its velocity in the horizontal direction. The hunter's dart, therefore, falls with the same acceleration as the monkey.
Assume for the moment that gravity were not at work. In that case, the dart would proceed in a straight-line trajectory at a constant speed (Newton's first law). Gravity causes the dart to fall away from this straight-line path, making a trajectory that is in fact a parabola. Now, consider what happens if the hunter aims directly at the monkey, and the monkey releases its grip the instant the hunter fires. Because the force of gravity accelerates the dart and the monkey equally, they fall the same distance in the same time: the monkey falls from the tree branch, and the dart falls the same distance from the straight-line path it would have taken in the absence of gravity. Therefore, the dart will always hit the monkey, no matter the initial speed of the dart, no matter the acceleration of gravity.
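A short numerical check of this argument (a sketch; the blowgun-to-monkey geometry and muzzle speed are assumed values):

```python
# Aim straight at the monkey, release dart and monkey at t = 0, and compare
# heights when the dart reaches the monkey's horizontal position.
g = 9.8                   # m/s^2
x_m, y_m = 40.0, 30.0     # monkey's initial position relative to the blowgun, m
v0 = 60.0                 # dart muzzle speed, m/s

d = (x_m**2 + y_m**2) ** 0.5          # line-of-sight distance
vx, vy = v0 * x_m / d, v0 * y_m / d   # aim directly along the line of sight

t = x_m / vx                          # time for the dart to reach x_m
dart_y = vy * t - 0.5 * g * t**2
monkey_y = y_m - 0.5 * g * t**2

print(f"dart: {dart_y:.3f} m, monkey: {monkey_y:.3f} m")  # identical heights
```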
Another way of looking at the problem is by a transformation of the reference frame. Earl
Document 2:::
Backyard Ballistics is a how-to book by William Gurstelle that was published in 2001. It is full of experiments that are relatively inexpensive and easy to execute. It also includes the history and mechanical principles of some of the inventions and projects. From catapults to rockets, this book describes accessible ways to create these at home or in the classroom. In addition to recreational use by individuals, teacher's guides have been developed and science fair projects designed around this book. It has been cited in several educational and scientific journals.
Document 3:::
A projectile is an object that is propelled by the application of an external force and then moves freely under the influence of gravity and air resistance. Although any objects in motion through space are projectiles, they are commonly found in warfare and sports (for example, a thrown baseball, kicked football, fired bullet, shot arrow, stone released from catapult).
In ballistics, mathematical equations of motion are used to analyze projectile trajectories through launch, flight, and impact.
Motive force
Blowguns and pneumatic rifles use compressed gases, while most other guns and cannons utilize expanding gases liberated by sudden chemical reactions by propellants like smokeless powder. Light-gas guns use a combination of these mechanisms.
Railguns utilize electromagnetic fields to provide a constant acceleration along the entire length of the device, greatly increasing the muzzle velocity.
Some projectiles provide propulsion during flight by means of a rocket engine or jet engine. In military terminology, a rocket is unguided, while a missile is guided. Note the two meanings of "rocket" (weapon and engine): an ICBM is a guided missile with a rocket engine.
An explosion, whether or not by a weapon, causes the debris to act as multiple high velocity projectiles. An explosive weapon or device may also be designed to produce many high velocity projectiles by the break-up of its casing; these are correctly termed fragments.
In sports
In projectile motion the most important force applied to the projectile is the propelling force; in sports, the propelling force comes from the muscles that act upon the ball to set it in motion, and the stronger the force applied, the farther the projectile (the ball) will travel. See pitching, bowling.
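As a rough illustration of how launch speed translates into distance, the sketch below evaluates the standard ideal-range formula R = v² sin(2θ)/g for a ground-level launch with no air resistance; the throwing speeds are assumed values, not data from any particular sport.

```python
import math

g = 9.8  # m/s^2, acceleration due to gravity

def ideal_range(speed, angle_deg):
    """Range of a projectile launched from ground level, ignoring
    air resistance: R = v**2 * sin(2*theta) / g."""
    return speed**2 * math.sin(math.radians(2 * angle_deg)) / g

# Doubling the launch speed quadruples the ideal range (R scales as v^2).
for v in (10.0, 20.0):  # m/s, assumed throwing speeds
    print(f"v = {v:4.1f} m/s -> range = {ideal_range(v, 45):.1f} m")
```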
As a weapon
Delivery projectiles
Many projectiles, e.g. shells, may carry an explosive charge or another chemical or biological substance. Aside from explosive payload, a projectile can be designed
Document 4:::
In mechanics and physics, shock is a sudden acceleration caused, for example, by impact, drop, kick, earthquake, or explosion. Shock is a transient physical excitation.
Shock describes matter subject to extreme rates of force with respect to time. Shock is a vector that has units of an acceleration (rate of change of velocity). The unit g represents multiples of the standard acceleration of gravity and is conventionally used.
A shock pulse can be characterised by its peak acceleration, the duration, and the shape of the shock pulse (half sine, triangular, trapezoidal, etc.). The shock response spectrum is a method for further evaluating a mechanical shock.
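To make the pulse description concrete, here is a small sketch that integrates an ideal half-sine pulse a(t) = A sin(πt/D) to get its velocity change Δv = 2AD/π; the 50 g / 11 ms figures are assumed example values (a profile commonly used in shock testing).

```python
import math

def half_sine_delta_v(peak_g, duration_ms):
    """Velocity change of an ideal half-sine shock pulse
    a(t) = A * sin(pi * t / D); the integral of a(t) over the
    pulse duration D is 2 * A * D / pi."""
    A = peak_g * 9.80665       # peak acceleration in m/s^2
    D = duration_ms / 1000.0   # duration in seconds
    return 2.0 * A * D / math.pi  # m/s

# Assumed example: a 50 g, 11 ms half-sine pulse.
print(f"delta-v = {half_sine_delta_v(50, 11):.2f} m/s")
```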
Shock measurement
Shock measurement is of interest in several fields such as
Measuring the propagation of heel shock through a runner's body
Measuring the magnitude of shock needed to cause damage to an item: fragility
Measuring shock attenuation through athletic flooring
Measuring the effectiveness of a shock absorber
Measuring the shock-absorbing ability of package cushioning
Measuring the ability of an athletic helmet to protect people
Measuring the effectiveness of shock mounts
Determining the ability of structures to resist seismic shock: earthquakes, etc.
Determining whether personal protective fabric attenuates or amplifies shocks
Verifying that a naval ship and its equipment can survive explosive shocks
Shocks are usually measured by accelerometers but other transducers and high speed imaging are also used. A wide variety of laboratory instrumentation is available; stand-alone shock data loggers are also used.
Field shocks are highly variable and often have very uneven shapes. Even laboratory-controlled shocks often have uneven shapes and include short-duration spikes; noise can be reduced by appropriate digital or analog filtering.
Governing test methods and specifications provide detail about the conduct of shock tests. Proper placement of measuring instruments is critical. Fragile items and packaged g
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are objects that are launched into the air called?
A. stones
B. projectiles
C. booms
D. shoots
Answer:
|
|
sciq-9589
|
multiple_choice
|
Liquid HCl can be used to do what to the pH of a swimming pool?
|
[
"heighten it",
"increase it",
"lower it",
"raise it"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 3:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam and there were no computer-based versions of it. ETS administered this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required this exam score as a part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99th percentile) and 320 (the 1st percentile) respectively. The mean score for all test takers from July 2009 to July 2012 was 526, with a standard deviation of 95.
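As a plausibility check on the quoted figures, the sketch below converts the reported extreme scores to percentiles under the assumption that scores are approximately normally distributed with the stated mean and standard deviation; the normality assumption is ours, not ETS's.

```python
from math import erf, sqrt

mean, sd = 526, 95  # reported mean and standard deviation

def normal_percentile(score):
    """Percentile of a score, assuming a normal(mean, sd) distribution."""
    z = (score - mean) / sd
    return 100 * 0.5 * (1 + erf(z / sqrt(2)))

for score in (760, 320):
    print(f"score {score}: {normal_percentile(score):.1f} percentile")
# Prints roughly 99.3 and 1.5, consistent with the reported extremes
# (760 at the 99th percentile, 320 near the 1st).
```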
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 4:::
Single Best Answer (SBA or One Best Answer) is a written examination form of multiple choice questions used extensively in medical education.
Structure
A single question is posed with typically five alternate answers, from which the candidate must choose the best answer. This method avoids the problems of past examinations of a similar form described as Single Correct Answer. The older form can produce confusion where more than one of the possible answers has some validity. The newer form makes it explicit that more than one answer may have elements that are correct, but that one answer will be superior.
Prior to the widespread introduction of SBAs into medical education, the typical form of examination was the true-false multiple choice question. During the 2000s, however, educators found that SBAs were superior.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Liquid HCl can be used to do what to the pH of a swimming pool?
A. heighten it
B. increase it
C. lower it
D. raise it
Answer:
|
|
sciq-3500
|
multiple_choice
|
What is an organic compound made up of small molecules called amino acids called?
|
[
"a protein",
"a compound",
"a fat",
"a carbohydrate"
] |
A
|
Relevant Documents:
Document 0:::
This is a list of articles that describe particular biomolecules or types of biomolecules.
A
For substances with an A- or α- prefix such as
α-amylase, please see the parent page (in this case Amylase).
A23187 (Calcimycin, Calcium Ionophore)
Abamectine
Abietic acid
Acetic acid
Acetylcholine
Actin
Actinomycin D
Adenine
Adenosine
Adenosine diphosphate (ADP)
Adenosine monophosphate (AMP)
Adenosine triphosphate (ATP)
Adenylate cyclase
Adiponectin
Adonitol
Adrenaline, epinephrine
Adrenocorticotropic hormone (ACTH)
Aequorin
Aflatoxin
Agar
Alamethicin
Alanine
Albumins
Aldosterone
Aleurone
Alpha-amanitin
Alpha-MSH (Melaninocyte stimulating hormone)
Allantoin
Allethrin
α-Amanitin, see Alpha-amanitin
Amino acid
Amylase (also see α-amylase)
Anabolic steroid
Anandamide (ANA)
Androgen
Anethole
Angiotensinogen
Anisomycin
Antidiuretic hormone (ADH)
Anti-Müllerian hormone (AMH)
Arabinose
Arginine
Argonaute
Ascomycin
Ascorbic acid (vitamin C)
Asparagine
Aspartic acid
Asymmetric dimethylarginine
ATP synthase
Atrial-natriuretic peptide (ANP)
Auxin
Avidin
Azadirachtin A – C35H44O16
B
Bacteriocin
Beauvericin
beta-Hydroxy beta-methylbutyric acid
beta-Hydroxybutyric acid
Bicuculline
Bilirubin
Biopolymer
Biotin (Vitamin H)
Brefeldin A
Brassinolide
Brucine
Butyric acid
C
Document 1:::
The metabolome refers to the complete set of small-molecule chemicals found within a biological sample. The biological sample can be a cell, a cellular organelle, an organ, a tissue, a tissue extract, a biofluid or an entire organism. The small molecule chemicals found in a given metabolome may include both endogenous metabolites that are naturally produced by an organism (such as amino acids, organic acids, nucleic acids, fatty acids, amines, sugars, vitamins, co-factors, pigments, antibiotics, etc.) as well as exogenous chemicals (such as drugs, environmental contaminants, food additives, toxins and other xenobiotics) that are not naturally produced by an organism.
In other words, there is both an endogenous metabolome and an exogenous metabolome. The endogenous metabolome can be further subdivided to include a "primary" and a "secondary" metabolome (particularly when referring to plant or microbial metabolomes). A primary metabolite is directly involved in the normal growth, development, and reproduction. A secondary metabolite is not directly involved in those processes, but usually has important ecological function. Secondary metabolites may include pigments, antibiotics or waste products derived from partially metabolized xenobiotics. The study of the metabolome is called metabolomics.
Origins
The word metabolome appears to be a blending of the words "metabolite" and "chromosome". It was constructed to imply that metabolites are indirectly encoded by genes or act on genes and gene products. The term "metabolome" was first used in 1998 and was likely coined to match with existing biological terms referring to the complete set of genes (the genome), the complete set of proteins (the proteome) and the complete set of transcripts (the transcriptome). The first book on metabolomics was published in 2003. The first journal dedicated to metabolomics (titled simply "Metabolomics") was launched in 2005 and is currently edited by Prof. Roy Goodacre. Some of the m
Document 2:::
A biomolecule or biological molecule is a loosely used term for molecules present in organisms that are essential to one or more typically biological processes, such as cell division, morphogenesis, or development. Biomolecules include the primary metabolites which are large macromolecules (or polyelectrolytes) such as proteins, carbohydrates, lipids, and nucleic acids, as well as small molecules such as vitamins and hormones. A more general name for this class of material is biological materials. Biomolecules are an important element of living organisms; they are often endogenous, produced within the organism, but organisms usually also need exogenous biomolecules, for example certain nutrients, to survive.
Biology and its subfields of biochemistry and molecular biology study biomolecules and their reactions. Most biomolecules are organic compounds, and just four elements—oxygen, carbon, hydrogen, and nitrogen—make up 96% of the human body's mass. But many other elements, such as the various biometals, are also present in small amounts.
The uniformity of both specific types of molecules (the biomolecules) and of certain metabolic pathways are invariant features among the wide diversity of life forms; thus these biomolecules and metabolic pathways are referred to as "biochemical universals" or "theory of material unity of the living beings", a unifying concept in biology, along with cell theory and evolution theory.
Types of biomolecules
A diverse range of biomolecules exist, including:
Small molecules:
Lipids, fatty acids, glycolipids, sterols, monosaccharides
Vitamins
Hormones, neurotransmitters
Metabolites
Monomers, oligomers and polymers:
Nucleosides and nucleotides
Nucleosides are molecules formed by attaching a nucleobase to a ribose or deoxyribose ring. Examples of these include cytidine (C), uridine (U), adenosine (A), guanosine (G), and thymidine (T).
Nucleosides can be phosphorylated by specific kinases in the cell, producing nucl
Document 3:::
This is a list of topics in molecular biology. See also index of biochemistry articles.
Document 4:::
Bioorganic chemistry is a scientific discipline that combines organic chemistry and biochemistry. It is that branch of life science that deals with the study of biological processes using chemical methods. Protein and enzyme function are examples of these processes.
Sometimes biochemistry is used interchangeably for bioorganic chemistry; the distinction being that bioorganic chemistry is organic chemistry that is focused on the biological aspects. While biochemistry aims at understanding biological processes using chemistry, bioorganic chemistry attempts to expand organic-chemical researches (that is, structures, synthesis, and kinetics) toward biology. When investigating metalloenzymes and cofactors, bioorganic chemistry overlaps bioinorganic chemistry.
Sub disciplines
Biophysical organic chemistry is a term used when attempting to describe intimate details of molecular recognition by bioorganic chemistry.
Natural product chemistry is the process of identifying compounds found in nature to determine their properties. Such discoveries have often led to medicinal uses and to the development of herbicides and insecticides.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is an organic compound made up of small molecules called amino acids called?
A. a protein
B. a compound
C. a fat
D. a carbohydrate
Answer:
|
|
sciq-4284
|
multiple_choice
|
Just under their skin, marine mammals have a very thick layer of insulating fat called what?
|
[
"blubber",
"lipisomes",
"tissue",
"cellulose"
] |
A
|
Relevant Documents:
Document 0:::
In zoology, the epidermis is an epithelium (sheet of cells) that covers the body of a eumetazoan (animal more complex than a sponge). Eumetazoa have a cavity lined with a similar epithelium, the gastrodermis, which forms a boundary with the epidermis at the mouth.
Sponges have no epithelium, and therefore no epidermis or gastrodermis. The epidermis of a more complex invertebrate is just one layer deep, and may be protected by a non-cellular cuticle. The epidermis of a higher vertebrate has many layers, and the outer layers are reinforced with keratin and then die.
Document 1:::
A central or intermediate group of three or four large glands is imbedded in the adipose tissue near the base of the axilla.
Its afferent lymphatic vessels are the efferent vessels of all the preceding groups of axillary glands; its efferents pass to the subclavicular group.
Additional images
Document 2:::
The tunica media (Neo-Latin "middle coat"), or media for short, is the middle tunica (layer) of an artery or vein. It lies between the tunica intima on the inside and the tunica externa on the outside.
Artery
Tunica media is made up of smooth muscle cells, elastic tissue and collagen. It lies between the tunica intima on the inside and the tunica externa on the outside.
The middle coat (tunica media) is distinguished from the inner (tunica intima) by its color and by the transverse arrangement of its fibers.
In the smaller arteries it consists principally of smooth muscle fibers in fine bundles, arranged in lamellae and disposed circularly around the vessel. These lamellae vary in number according to the size of the vessel; the smallest arteries having only a single layer, and those slightly larger three or four layers - up to a maximum of six layers. It is to this coat that the thickness of the wall of the artery is mainly due.
In the larger arteries, as the iliac, femoral, and carotid, elastic fibers and collagen unite to form lamellae which alternate with the layers of smooth muscular fibers; these lamellae are united to one another by elastic fibers which pass between the smooth muscular bundles, and are connected with the fenestrated membrane of the inner coat.
In the largest arteries, as the aorta and brachiocephalic, the amount of elastic tissue is considerable; in these vessels a few bundles of white connective tissue also have been found in the middle coat. The muscle fiber cells are arranged in 5 to 7 layers of circular and longitudinal smooth muscle with about 50μ in length and contain well-marked, rod-shaped nuclei, which are often slightly curved. Separating the tunica media from the outer tunica externa in larger arteries is the external elastic membrane (also called the external elastic lamina). This structure is not usually seen in smaller arteries, nor is it seen in veins.
Vein
The middle coat is composed of a thick layer of connective tissue
Document 3:::
In biology, the extracellular matrix (ECM), is a network consisting of extracellular macromolecules and minerals, such as collagen, enzymes, glycoproteins and hydroxyapatite that provide structural and biochemical support to surrounding cells. Because multicellularity evolved independently in different multicellular lineages, the composition of ECM varies between multicellular structures; however, cell adhesion, cell-to-cell communication and differentiation are common functions of the ECM.
The animal extracellular matrix includes the interstitial matrix and the basement membrane. Interstitial matrix is present between various animal cells (i.e., in the intercellular spaces). Gels of polysaccharides and fibrous proteins fill the interstitial space and act as a compression buffer against the stress placed on the ECM. Basement membranes are sheet-like depositions of ECM on which various epithelial cells rest. Each type of connective tissue in animals has a type of ECM: collagen fibers and bone mineral comprise the ECM of bone tissue; reticular fibers and ground substance comprise the ECM of loose connective tissue; and blood plasma is the ECM of blood.
The plant ECM includes cell wall components, like cellulose, in addition to more complex signaling molecules. Some single-celled organisms adopt multicellular biofilms in which the cells are embedded in an ECM composed primarily of extracellular polymeric substances (EPS).
Structure
Components of the ECM are produced intracellularly by resident cells and secreted into the ECM via exocytosis. Once secreted, they then aggregate with the existing matrix. The ECM is composed of an interlocking mesh of fibrous proteins and glycosaminoglycans (GAGs).
Proteoglycans
Glycosaminoglycans (GAGs) are carbohydrate polymers and mostly attached to extracellular matrix proteins to form proteoglycans (hyaluronic acid is a notable exception; see below). Proteoglycans have a net negative charge that attracts positively charged sod
Document 4:::
Pinacocytes are flat cells found on the outside of sponges, as well as lining the internal canals of a sponge. Pinacocytes are not specific to sponges, however: they have been found to carry relatively few sponge-specific genes, which suggests that pinacocytes evolved before the metazoans, that is, before Porifera evolved.
Function
Pinacocytes are part of the epithelium in sponges. They play a role in movement (contracting and stretching), cell adhesion, signaling, phagocytosis, and polarity. Pinacocytes are filled with mesohyl, a gel-like substance that helps maintain the shape and structure of the sponge.
Types
Basipinacocytes
These are the cells in contact with the sponge's substrate (the surface to which it is attached).
Exopinacocytes
These are found on the exterior of the sponge. Exopinacocytes produce spicules, needle-like processes that serve as structural support for the organism.
Endopinacocytes
These line the sponge's interior canals.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Just under their skin, marine mammals have a very thick layer of insulating fat called what?
A. blubber
B. lipisomes
C. tissue
D. cellulose
Answer:
|
|
sciq-728
|
multiple_choice
|
What light-producing process occurs when a substance absorbs shorter-wavelength ultraviolet light and then gives off the energy as visible light?
|
[
"fluorescence",
"luminescence",
"resistance",
"candescence"
] |
A
|
Relevant Documents:
Document 0:::
Photobiology is the scientific study of the beneficial and harmful interactions of light (technically, non-ionizing radiation) in living organisms. The field includes the study of photophysics, photochemistry, photosynthesis, photomorphogenesis, visual processing, circadian rhythms, photomovement, bioluminescence, and ultraviolet radiation effects.
The division between ionizing radiation and non-ionizing radiation is typically considered to be a photon energy greater than 10 eV, which approximately corresponds to both the first ionization energy of oxygen, and the ionization energy of hydrogen at about 14 eV.
When photons come into contact with molecules, these molecules can absorb the energy in photons and become excited. Then they can react with molecules around them and stimulate "photochemical" and "photophysical" changes of molecular structures.
Photophysics
This area of photobiology focuses on the physical interactions of light and matter. When molecules absorb photons that match their energy requirements, they promote a valence electron from the ground state to an excited state and become much more reactive. This is an extremely fast process, but one that is important for many subsequent processes.
Photochemistry
This area of photobiology studies the reactivity of a molecule when it absorbs energy from light. It also studies what happens to this energy: it can be given off as heat or as fluorescence, returning the molecule to the ground state (a short numerical sketch of this energy bookkeeping follows the three laws below).
There are 3 basic laws of photochemistry:
1) First Law of Photochemistry: This law explains that in order for photochemistry to happen, light has to be absorbed.
2) Second Law of Photochemistry: This law explains that only one molecule will be activated by each photon that is absorbed.
3) Bunsen-Roscoe Law of Reciprocity: This law explains that the energy in the final products of a photochemical reaction will be directly proportional to the total energy that was initially absorbed by the system.
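The energy bookkeeping referred to above can be illustrated with a short sketch: a photon absorbed at a shorter (higher-energy) wavelength is re-emitted as fluorescence at a longer wavelength, with the balance dissipated non-radiatively. The 350 nm / 450 nm pair is an assumed example, not data for any specific fluorophore.

```python
h = 6.626e-34  # J*s, Planck constant
c = 2.998e8    # m/s, speed of light

def photon_ev(wavelength_nm):
    """Energy of a single photon, in electronvolts."""
    return h * c / (wavelength_nm * 1e-9) / 1.602e-19

absorbed_nm, emitted_nm = 350.0, 450.0  # assumed UV-in / visible-out pair
e_in, e_out = photon_ev(absorbed_nm), photon_ev(emitted_nm)
print(f"absorbed {absorbed_nm} nm: {e_in:.2f} eV")
print(f"emitted  {emitted_nm} nm: {e_out:.2f} eV")
print(f"dissipated as heat: {e_in - e_out:.2f} eV")
# The emitted photon is less energetic (longer wavelength); the
# difference is lost non-radiatively, e.g. as heat (the Stokes shift).
```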
Plant Photo
Document 1:::
Photodissociation, photolysis, photodecomposition, or photofragmentation is a chemical reaction in which molecules of a chemical compound are broken down by photons. It is defined as the interaction of one or more photons with one target molecule.
Photodissociation is not limited to visible light. Any photon with sufficient energy can affect the chemical bonds of a chemical compound. Since a photon's energy is inversely proportional to its wavelength, electromagnetic radiations with the energy of visible light or higher, such as ultraviolet light, X-rays, and gamma rays can induce such reactions.
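Because photon energy is inversely proportional to wavelength, one can estimate which wavelengths carry enough energy per mole to break a given bond. The sketch below uses an assumed typical C-H bond energy of about 413 kJ/mol as an illustrative threshold.

```python
# Photon energy per mole at wavelength lambda: E = N_A * h * c / lambda.
N_A = 6.022e23   # 1/mol, Avogadro constant
h = 6.626e-34    # J*s, Planck constant
c = 2.998e8      # m/s, speed of light

def kj_per_mol(wavelength_nm):
    """Energy of one mole of photons at the given wavelength, in kJ/mol."""
    return N_A * h * c / (wavelength_nm * 1e-9) / 1000.0

bond_kj = 413.0  # kJ/mol, a typical C-H bond energy (assumed example)
for nm in (250, 280, 400, 700):
    e = kj_per_mol(nm)
    verdict = "can break the bond" if e >= bond_kj else "too weak"
    print(f"{nm} nm -> {e:6.1f} kJ/mol ({verdict})")
# UV photons shorter than roughly 290 nm carry enough energy per mole
# to dissociate a 413 kJ/mol bond; visible light does not.
```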
Photolysis in photosynthesis
Photolysis is part of the light-dependent reaction or light phase or photochemical phase or Hill reaction of photosynthesis. The general reaction of photosynthetic photolysis can be given in terms of photons as:

H2A + 2 photons (light energy) → 2 e− + 2 H+ + A
The chemical nature of "A" depends on the type of organism. Purple sulfur bacteria oxidize hydrogen sulfide () to sulfur (S). In oxygenic photosynthesis, water () serves as a substrate for photolysis resulting in the generation of diatomic oxygen (). This is the process which returns oxygen to Earth's atmosphere. Photolysis of water occurs in the thylakoids of cyanobacteria and the chloroplasts of green algae and plants.
Energy transfer models
The conventional semi-classical model describes the photosynthetic energy transfer process as one in which excitation energy hops from light-capturing pigment molecules to reaction center molecules step-by-step down the molecular energy ladder.
The effectiveness of photons of different wavelengths depends on the absorption spectra of the photosynthetic pigments in the organism. Chlorophylls absorb light in the violet-blue and red parts of the spectrum, while accessory pigments capture other wavelengths as well. The phycobilins of red algae absorb blue-green light which penetrates deeper into water than red light, enabling them to photosynthesize in deep waters. Each absorbed photon causes
Document 2:::
The photosynthetic efficiency is the fraction of light energy converted into chemical energy during photosynthesis in green plants and algae. Photosynthesis can be described by the simplified chemical reaction
6 H2O + 6 CO2 + energy → C6H12O6 + 6 O2
where C6H12O6 is glucose (which is subsequently transformed into other sugars, starches, cellulose, lignin, and so forth). The value of the photosynthetic efficiency is dependent on how light energy is defined – it depends on whether we count only the light that is absorbed, and on what kind of light is used (see Photosynthetically active radiation). It takes eight (or perhaps ten or more) photons to use one molecule of CO2. The Gibbs free energy for converting a mole of CO2 to glucose is 114 kcal, whereas eight moles of photons of wavelength 600 nm contains 381 kcal, giving a nominal efficiency of 30%. However, photosynthesis can occur with light up to wavelength 720 nm so long as there is also light at wavelengths below 680 nm to keep Photosystem II operating (see Chlorophyll). Using longer wavelengths means less light energy is needed for the same number of photons and therefore for the same amount of photosynthesis. For actual sunlight, where only 45% of the light is in the photosynthetically active wavelength range, the theoretical maximum efficiency of solar energy conversion is approximately 11%. In actuality, however, plants do not absorb all incoming sunlight (due to reflection, respiration requirements of photosynthesis and the need for optimal solar radiation levels) and do not convert all harvested energy into biomass, which results in a maximum overall photosynthetic efficiency of 3 to 6% of total solar radiation. If photosynthesis is inefficient, excess light energy must be dissipated to avoid damaging the photosynthetic apparatus. Energy can be dissipated as heat (non-photochemical quenching), or emitted as chlorophyll fluorescence.
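The 30% nominal-efficiency figure quoted above follows directly from the stated numbers (eight 600 nm photons in, 114 kcal of Gibbs free energy stored per mole of CO2); the short sketch below reproduces that arithmetic.

```python
# Reproducing the nominal-efficiency arithmetic quoted above.
N_A = 6.022e23   # 1/mol, Avogadro constant
h = 6.626e-34    # J*s, Planck constant
c = 2.998e8      # m/s, speed of light
J_PER_KCAL = 4184.0

photons_per_co2 = 8      # photons per CO2 molecule, as stated
wavelength_nm = 600.0
kcal_per_mol_photons = N_A * h * c / (wavelength_nm * 1e-9) / J_PER_KCAL
light_in = photons_per_co2 * kcal_per_mol_photons  # ~381 kcal
stored = 114.0           # kcal per mole of CO2 converted to glucose

print(f"light in: {light_in:.0f} kcal, stored: {stored:.0f} kcal")
print(f"nominal efficiency: {100 * stored / light_in:.0f}%")
# Prints ~381 kcal in and a nominal efficiency of ~30%, matching the text.
```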
Typical efficiencies
Plants
Quoted values sunlight-to-biomass efficien
Document 3:::
A photocyte is a cell that specializes in catalyzing enzymes to produce light (bioluminescence). Photocytes typically occur in select layers of epithelial tissue, functioning singly or in a group, or as part of a larger apparatus (a photophore). They contain special structures termed as photocyte granules. These specialized cells are found in a range of multicellular animals including ctenophora, coelenterates (cnidaria), annelids, arthropoda (including insects) and fishes. Although some fungi are bioluminescent, they do not have such specialized cells.
Mechanism of light production
Light production may first be triggered by nerve impulses which stimulate the photocyte to release the enzyme luciferase into a "reaction chamber" of luciferin substrate. In some species the release occurs continually without the precursor impulse via osmotic diffusion. Molecular oxygen is then actively gated through surrounding tracheal cells which otherwise limit the natural diffusion of oxygen from blood vessels; the resulting reaction with the luciferase and luciferin produces light energy and a by-product (usually carbon dioxide).
Researchers once postulated that ATP was the source of reaction energy for photocytes, but since ATP yields only a fraction of the energy of the luciferase reaction, any resulting light would be too weak for detection by a human eye. The wavelengths produced by most photocytes fall close to 490 nm, although light as energetic as 250 nm is reportedly possible.
The variations of color seen in different photocytes are usually the result of color filters, provided by other parts of the photophore, that alter the wavelength of the light before it exits the endoderm. The range of colors varies between bioluminescent species.
The exact combinations of luciferase and luciferin types found among photocytes are specific to the species to which they belong. This would seem to be the result of consistent evolutionary divergence.
Document 4:::
Cooperative luminescence is the radiative process in which two excited ions simultaneously make downward transition to emit one photon with the sum of their excitation energies. The inverse process is cooperative absorption, in which a photon can be absorbed by a coupled pair of two ions, making them excited simultaneously.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What light-producing process occurs when a substance absorbs shorter-wavelength ultraviolet light and then gives off the energy as visible light?
A. fluorescence
B. luminescence
C. resistance
D. candescence
Answer:
|
|
sciq-9495
|
multiple_choice
|
What liquid is referred to as the "universal solvent"?
|
[
"gasoline",
"bromine",
"blood",
"water"
] |
D
|
Relevant Documents:
Document 0:::
In chemistry, solubility is the ability of a substance, the solute, to form a solution with another substance, the solvent. Insolubility is the opposite property, the inability of the solute to form such a solution.
The extent of the solubility of a substance in a specific solvent is generally measured as the concentration of the solute in a saturated solution, one in which no more solute can be dissolved. At this point, the two substances are said to be at the solubility equilibrium. For some solutes and solvents, there may be no such limit, in which case the two substances are said to be "miscible in all proportions" (or just "miscible").
The solute can be a solid, a liquid, or a gas, while the solvent is usually solid or liquid. Both may be pure substances, or may themselves be solutions. Gases are always miscible in all proportions, except in very extreme situations, and a solid or liquid can be "dissolved" in a gas only by passing into the gaseous state first.
The solubility mainly depends on the composition of solute and solvent (including their pH and the presence of other dissolved substances) as well as on temperature and pressure. The dependency can often be explained in terms of interactions between the particles (atoms, molecules, or ions) of the two substances, and of thermodynamic concepts such as enthalpy and entropy.
Under certain conditions, the concentration of the solute can exceed its usual solubility limit. The result is a supersaturated solution, which is metastable and will rapidly exclude the excess solute if a suitable nucleation site appears.
The concept of solubility does not apply when there is an irreversible chemical reaction between the two substances, such as the reaction of calcium hydroxide with hydrochloric acid; even though one might say, informally, that one "dissolved" the other. The solubility is also not the same as the rate of solution, which is how fast a solid solute dissolves in a liquid solvent. This property de
Document 1:::
This is an index of lists of molecules (i.e. by year, number of atoms, etc.). Millions of molecules have existed in the universe since before the formation of Earth. Three of them, carbon dioxide, water, and oxygen, were necessary for the growth of life. Although humanity had always been surrounded by these substances, it has not always known what they were composed of.
By century
The following is an index of list of molecules organized by time of discovery of their molecular formula or their specific molecule in case of isomers:
List of compounds
By number of carbon atoms in the molecule
List of compounds with carbon number 1
List of compounds with carbon number 2
List of compounds with carbon number 3
List of compounds with carbon number 4
List of compounds with carbon number 5
List of compounds with carbon number 6
List of compounds with carbon number 7
List of compounds with carbon number 8
List of compounds with carbon number 9
List of compounds with carbon number 10
List of compounds with carbon number 11
List of compounds with carbon number 12
List of compounds with carbon number 13
List of compounds with carbon number 14
List of compounds with carbon number 15
List of compounds with carbon number 16
List of compounds with carbon number 17
List of compounds with carbon number 18
List of compounds with carbon number 19
List of compounds with carbon number 20
List of compounds with carbon number 21
List of compounds with carbon number 22
List of compounds with carbon number 23
List of compounds with carbon number 24
List of compounds with carbon numbers 25-29
List of compounds with carbon numbers 30-39
List of compounds with carbon numbers 40-49
List of compounds with carbon numbers 50+
Other lists
List of interstellar and circumstellar molecules
List of gases
List of molecules with unusual names
See also
Molecule
Empirical formula
Chemical formula
Chemical structure
Chemical compound
Chemical bond
Coordination complex
Document 2:::
A binary liquid is a type of chemical combination that creates a special reaction or feature as a result of mixing two liquid chemicals that are normally inert or have no function by themselves. A number of chemical products are produced as a result of mixing two chemicals as a binary liquid, such as plastic foams and some explosives.
See also
Binary chemical weapon
Thermophoresis
Percus-Yevick equation
Document 3:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam and there were no computer-based versions of it. ETS administered this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required this exam score as a part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99th percentile) and 320 (the 1st percentile) respectively. The mean score for all test takers from July 2009 to July 2012 was 526, with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 4:::
A simple lipid is a fatty acid ester of different alcohols and carries no other substance. These lipids belong to a heterogeneous class of predominantly nonpolar compounds, mostly insoluble in water, but soluble in nonpolar organic solvents such as chloroform and benzene.
Simple lipids: esters of fatty acids with various alcohols.
a. Fats: esters of fatty acids with glycerol. Oils are fats in the liquid state. Fats are also called triglycerides because all the three hydroxyl groups of glycerol are esterified.
b. Waxes: Solid esters of long-chain fatty acids such as palmitic acid with aliphatic or alicyclic higher molecular weight monohydric alcohols. Waxes are water-insoluble due to the weakly polar nature of the ester group.
See also
Lipid
Lipids
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What liquid is referred to as the "universal solvent"?
A. gasoline
B. bromine
C. blood
D. water
Answer:
|
|
ai2_arc-643
|
multiple_choice
|
A child rides a wagon down a hill. Eventually, the wagon comes to a stop. Which is most responsible for causing the wagon to stop?
|
[
"gravity acting on the wagon",
"friction acting on the wagon",
"the mass of the wagon",
"the mass of the child"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Belt friction is a term describing the friction forces between a belt and a surface, such as a belt wrapped around a bollard. When a force applies a tension to one end of a belt or rope wrapped around a curved surface, the frictional force between the two surfaces increases with the amount of wrap about the curved surface, and only part of that force (or resultant belt tension) is transmitted to the other end of the belt or rope. Belt friction can be modeled by the Belt friction equation.
In practice, the theoretical tension acting on the belt or rope calculated by the belt friction equation can be compared to the maximum tension the belt can support. This helps a designer of such a system determine how many times the belt or rope must be wrapped around a curved surface to prevent it from slipping. Mountain climbers and sailing crews demonstrate a working knowledge of belt friction when accomplishing tasks with ropes, pulleys, bollards and capstans.
Equation
The equation used to model belt friction is, assuming the belt has no mass and its material is a fixed composition:

T_load = T_hold e^(μ_s φ)

where T_load is the tension of the pulling side, T_hold is the tension of the resisting side, μ_s is the static friction coefficient, which has no units, and φ is the angle, in radians, formed by the first and last spots the belt touches the pulley, with the vertex at the center of the pulley.
The tension on the pulling side of the belt and pulley can increase exponentially as the magnitude of the belt angle increases (e.g. when the belt is wrapped around the pulley segment numerous times).
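A small sketch of the equation in use: rearranged as T_hold = T_load e^(−μφ), it gives the force needed to restrain a load as a function of the number of wraps. The 1000 N load and μ = 0.3 are assumed example values.

```python
import math

def holding_force(load_tension, mu, wraps):
    """Force needed to hold a load via the belt friction (capstan)
    equation: T_hold = T_load * exp(-mu * phi), with phi in radians."""
    phi = 2 * math.pi * wraps
    return load_tension * math.exp(-mu * phi)

# Assumed example: restraining a 1000 N load on a bollard with mu = 0.3.
for wraps in (1, 2, 3):
    f = holding_force(1000.0, 0.3, wraps)
    print(f"{wraps} wrap(s): hold with {f:.1f} N")
# Each full wrap reduces the required holding force by a factor of
# exp(0.3 * 2 * pi) ~ 6.6, which is why a few turns around a capstan
# let one person restrain a large load.
```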
Generalization for a rope lying on an arbitrary orthotropic surface
If a rope lies in equilibrium under tangential forces on a rough orthotropic surface, then all three of the following conditions are satisfied:
1. No separation: the normal reaction N is positive for all points of the rope curve, N = T k_n > 0, where T is the rope tension and k_n is the normal curvature of the rope curve.
2. The dragging coefficient of friction and the angle satisfy
Document 2:::
In mechanics, friction torque is the torque caused by the frictional force that occurs when two objects in contact move. Like all torques, it is a rotational force that may be measured in newton meters or pounds-feet.
Engineering
Friction torque can be disruptive in engineering. There are a variety of measures engineers may choose to take to eliminate these disruptions. Ball bearings are an example of an attempt to minimize the friction torque.
Friction torque can also be an asset in engineering. Bolts and nuts, or screws are often designed to be fastened with a given amount of torque, where the friction is adequate during use or operation for the bolt, nut, or screw to remain safely fastened. This is true with such applications as lug nuts retaining wheels to vehicles, or equipment subjected to vibration with sufficiently well-attached bolts, nuts, or screws to prevent the vibration from shaking them loose.
Examples
When a cyclist applies the brake to the forward wheel, the bicycle tips forward due to the frictional torque between the wheel and the ground.
When a golf ball hits the ground it begins to spin in part because of the friction torque applied to the golf ball from the friction between the golf ball and the ground.
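For a rough sense of scale, the friction torque of a pad pressed against a rotating surface can be modeled as acting at a single effective radius, giving τ = μNr; the coefficient, clamping force, and radius below are assumed illustrative values, not measured data.

```python
def friction_torque(mu, normal_force, radius):
    """Friction torque of a pad pressed against a rotating surface,
    treating the contact as acting at one effective radius:
    tau = mu * N * r, in newton metres."""
    return mu * normal_force * radius

# Assumed example: a brake pad (mu = 0.5) pressed with 100 N against
# a wheel rim 0.33 m from the axle.
print(f"tau = {friction_torque(0.5, 100.0, 0.33):.1f} N*m")
```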
See also
Torque
Force
Engineering
Mechanics
Moment (physics)
Document 3:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 4:::
As described by the third of Newton's laws of motion of classical mechanics, all forces occur in pairs such that if one object exerts a force on another object, then the second object exerts an equal and opposite reaction force on the first. The third law is also more generally stated as: "To every action there is always opposed an equal reaction: or the mutual actions of two bodies upon each other are always equal, and directed to contrary parts." The attribution of which of the two forces is the action and which is the reaction is arbitrary. Either of the two can be considered the action, while the other is its associated reaction.
Examples
Interaction with ground
When something is exerting force on the ground, the ground will push back with equal force in the opposite direction. In certain fields of applied physics, such as biomechanics, this force by the ground is called 'ground reaction force'; the force by the object on the ground is viewed as the 'action'.
When someone wants to jump, he or she exerts additional downward force on the ground ('action'). Simultaneously, the ground exerts upward force on the person ('reaction'). If this upward force is greater than the person's weight, this will result in upward acceleration. When these forces are perpendicular to the ground, they are also called a normal force.
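The jumping example reduces to Newton's second law with two vertical forces. A minimal sketch, with an assumed body mass and ground reaction force:

```python
g = 9.8  # m/s^2, acceleration due to gravity

def jump_acceleration(mass, ground_reaction):
    """Upward acceleration while pushing off: Newton's second law with
    the ground reaction force acting up and the weight m*g acting down."""
    return (ground_reaction - mass * g) / mass

# Assumed example: a 70 kg person whose legs push off so that the
# ground reaction force reaches 1400 N (about twice body weight).
m, F = 70.0, 1400.0
print(f"weight = {m * g:.0f} N, net upward a = {jump_acceleration(m, F):.1f} m/s^2")
# The reaction (1400 N) exceeds the ~686 N weight, so the person
# accelerates upward, as the text describes.
```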
Likewise, the spinning wheels of a vehicle attempt to slide backward across the ground. If the ground is not too slippery, this results in a pair of friction forces: the 'action' by the wheel on the ground in backward direction, and the 'reaction' by the ground on the wheel in forward direction. This forward force propels the vehicle.
Gravitational forces
The Earth, among other planets, orbits the Sun because the Sun exerts a gravitational pull that acts as a centripetal force, holding the Earth to it, which would otherwise go shooting off into space. If the Sun's pull is considered an action, then Earth simultaneously exerts a reaction as a gravi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A child rides a wagon down a hill. Eventually, the wagon comes to a stop. Which is most responsible for causing the wagon to stop?
A. gravity acting on the wagon
B. friction acting on the wagon
C. the mass of the wagon
D. the mass of the child
Answer:
|
|
sciq-5124
|
multiple_choice
|
What are lipids' function in relation to nerves?
|
[
"reproduction",
"protection",
"conservation",
"transportation"
] |
B
|
Relevant Documents:
Document 0:::
The neuroprostanes are prostaglandin-like compounds formed in vivo from the free radical-catalyzed peroxidation
of essential fatty acids (primarily docosahexaenoic acid) without the direct action of cyclooxygenase (COX) enzymes. The result is the formation of isoprostane-like compounds F4-, D4-, E4-, A4-, and J4-neuroprostanes which have been shown to be produced in vivo. These oxygenated essential fatty acids possess potent biological activity as anti-inflammatory mediators inhibiting the response of human macrophages that augment the perception of pain.
See also
Isoprostanes
Prostaglandin
Document 1:::
The blood–brain barrier (BBB) is a highly selective semipermeable border of endothelial cells that regulates the transfer of solutes and chemicals between the circulatory system and the central nervous system, thus protecting the brain from harmful or unwanted substances in the blood. The blood–brain barrier is formed by endothelial cells of the capillary wall, astrocyte end-feet ensheathing the capillary, and pericytes embedded in the capillary basement membrane. This system allows the passage of some small molecules by passive diffusion, as well as the selective and active transport of various nutrients, ions, organic anions, and macromolecules such as glucose and amino acids that are crucial to neural function.
The blood–brain barrier restricts the passage of pathogens, the diffusion of solutes in the blood, and large or hydrophilic molecules into the cerebrospinal fluid, while allowing the diffusion of hydrophobic molecules (O2, CO2, hormones) and small non-polar molecules. Cells of the barrier actively transport metabolic products such as glucose across the barrier using specific transport proteins. The barrier also restricts the passage of peripheral immune factors, like signaling molecules, antibodies, and immune cells, into the CNS, thus insulating the brain from damage due to peripheral immune events.
Specialized brain structures participating in sensory and secretory integration within brain neural circuits—the circumventricular organs and choroid plexus—have in contrast highly permeable capillaries.
Structure
The BBB results from the selectivity of the tight junctions between the endothelial cells of brain capillaries, restricting the passage of solutes. At the interface between blood and the brain, endothelial cells are adjoined continuously by these tight junctions, which are composed of smaller subunits of transmembrane proteins, such as occludin, claudins (such as Claudin-5), junctional adhesion molecule (such as JAM-A). Each of these tight junct
Document 2:::
The lipidome refers to the totality of lipids in cells. Lipids are one of the four major molecular components of biological organisms, along with proteins, sugars and nucleic acids. Lipidome is a term coined in the context of omics in modern biology, within the field of lipidomics. It can be studied using mass spectrometry and bioinformatics as well as traditional lab-based methods. The lipidome of a cell can be subdivided into the membrane-lipidome and mediator-lipidome.
The first cell lipidome to be published was that of a mouse macrophage in 2010. The lipidome of the yeast Saccharomyces cerevisiae has been characterised with an estimated 95% coverage; studies of the human lipidome are ongoing. For example, the human plasma lipidome consist of almost 600 distinct molecular species. Research suggests that the lipidome of an individual may be able to indicate cancer risks associated with dietary fats, particularly breast cancer.
See also
Genome
Proteome
Glycome
Document 3:::
Catherina Gwynne Becker (née Krüger) is an Alexander von Humboldt Professor at TU Dresden, and was formerly Professor of Neural Development and Regeneration at the University of Edinburgh.
Early life and education
Catherina Becker was born in Marburg, Germany in 1964. She was educated in Bremen before going on to study at the University of Bremen, where she obtained an MSc in Biology and her PhD (Dr. rer. nat.) in 1993, investigating visual system development and regeneration in frogs and salamanders under the supervision of Gerhard Roth. She then trained as a postdoctoral researcher at the Swiss Federal Institute of Technology in Zürich (Department of Developmental and Cell Biology), funded by an EMBO long-term fellowship, at the University of California, Irvine in the USA, and at the Centre for Molecular Neurobiology Hamburg (ZMNH), Germany, where she took a position as group leader in 2000 and finished her 'Habilitation' in neurobiology in 2012.
Career
Becker joined the University of Edinburgh in 2005 as a senior lecturer and was appointed to a personal chair in neural development and regeneration in 2013. She was also Director of Postgraduate Training at the Centre for Neuroregeneration until 2015, then centre director until 2017. In 2021 she received an Alexander von Humboldt Professorship and joined the Technical University of Dresden.
Research
Becker's research focuses on a better understanding of the factors governing the generation of neurons and axonal pathfinding in the CNS during development and regeneration using the zebrafish model to identify fundamental mechanisms in vertebrates with clear translational implications for CNS injury and neurodegenerative diseases.
The Becker group established the zebrafish as a model for spinal cord regeneration.
Their research found that functional regeneration is near perfect, but anatomical repair does not fully recreate the previous network; instead, new neurons are generated and extensive rewiring occurs.
They have identified neurotra
Document 4:::
The Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology is a monthly peer-reviewed scientific journal covering the intersection of ethology, neuroscience, and physiology. It was established in 1984, when it was split off from the Journal of Comparative Physiology. It was originally subtitled the Journal of Comparative Physiology A: Sensory, Neural, and Behavioral Physiology, obtaining its current name in 2001. The editor-in-chief is Friedrich G. Barth (University of Vienna). The journal became electronic-only in 2017.
Abstracting and indexing
The journal is indexed and abstracted in several bibliographic databases. According to the Journal Citation Reports, it has a 2017 impact factor of 1.970.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the function of lipids in relation to nerves?
A. reproduction
B. protection
C. conservation
D. transportation
Answer:
|
|
sciq-10332
|
multiple_choice
|
What type of bacteria change nitrogen gas from the atmosphere to nitrates in soil?
|
[
"spiral bacteria",
"hydrophylic bacteria",
"multicellular bacteria",
"nitrogen fixing bacteria"
] |
D
|
Relevant Documents:
Document 0:::
The nitrate reductase test is a test to differentiate between bacteria based on their ability or inability to reduce nitrate (NO3−) to nitrite (NO2−) using anaerobic respiration.
Procedure
Various assays for detecting nitrate reduction have been described. One method is performed as follows:
Inoculate nitrate broth with an isolate and incubate for 48 hours.
Add two nitrate tablets to the sample. If the bacterium produces nitrate reductase, the broth will turn a deep red within 5 minutes at this step.
If no color change is observed, then the result is inconclusive. Add a small amount of zinc to the broth. If the solution remains colorless, then both nitrate reductase and nitrite reductase are present. If the solution turns red, nitrate reductase is not present.
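The color-change logic of this assay can be summarized as a small decision tree. The following is a minimal Python sketch encoding the interpretation steps described above; the function name and boolean inputs are illustrative, not part of any standard laboratory software.

# Interpretation of the nitrate reductase test described above.
# Inputs: whether the broth turned red after the reagent step, and
# whether it turned red after zinc was added.
def interpret_nitrate_test(red_after_reagents: bool, red_after_zinc: bool) -> str:
    if red_after_reagents:
        # Nitrite detected: the organism reduced nitrate to nitrite.
        return "Positive: nitrate reductase present (NO3- reduced to NO2-)"
    if red_after_zinc:
        # Zinc reduced the remaining nitrate, so the organism never did.
        return "Negative: nitrate reductase absent (nitrate still present)"
    # No nitrate left and no nitrite detected: reduction went past nitrite.
    return "Positive: both nitrate and nitrite reductase present"

print(interpret_nitrate_test(red_after_reagents=False, red_after_zinc=True))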
Document 1:::
Dissimilatory nitrate reduction to ammonium (DNRA), also known as nitrate/nitrite ammonification, is the result of anaerobic respiration by chemoorganoheterotrophic microbes using nitrate (NO3−) as an electron acceptor for respiration. In anaerobic conditions microbes which undertake DNRA oxidise organic matter and use nitrate (rather than oxygen) as an electron acceptor, reducing it to nitrite, then ammonium (NO3−→NO2−→NH4+).
Dissimilatory nitrate reduction to ammonium is more common in prokaryotes but may also occur in eukaryotic microorganisms. DNRA is a component of the terrestrial and oceanic nitrogen cycle. Unlike denitrification, it acts to conserve bioavailable nitrogen in the system, producing soluble ammonium rather than unreactive dinitrogen gas.
Background and process
Cellular process
Dissimilatory nitrate reduction to ammonium is a two step process, reducing NO3− to NO2− then NO2− to NH4+, though the reaction may begin with NO2− directly. Each step is mediated by a different enzyme, the first step of dissimilatory nitrate reduction to ammonium is usually mediated by a periplasmic nitrate reductase. The second step (respiratory NO2− reduction to NH4+) is mediated by cytochrome c nitrite reductase, occurring at the periplasmic membrane surface. Despite DNRA not producing N2O as an intermediate during nitrate reduction (as denitrification does) N2O may still be released as a byproduct, thus DNRA may also act as a sink of fixed, bioavailable nitrogen. DNRA's production of N2O may be enhanced at higher pH levels.
Denitrification
Dissimilatory nitrate reduction to ammonium is similar to the process of denitrification, though NO2− is reduced farther to NH4+ rather than to N2, transferring eight electrons. Both denitrifiers and nitrate ammonifiers are competing for NO3− in the environment. Despite the redox potential of dissimilatory nitrate reduction to ammonium being lower than denitrification and producing less Gibbs free energy, energy yield of denitr
Document 2:::
Nitrosomonas is a genus of Gram-negative bacteria, belonging to the Betaproteobacteria. It is one of the five genera of ammonia-oxidizing bacteria and, as an obligate chemolithoautotroph, uses ammonia (NH3) as an energy source and carbon dioxide (CO2) as a carbon source in the presence of oxygen. Nitrosomonas are important in the global biogeochemical nitrogen cycle, since they increase the bioavailability of nitrogen to plants, and in denitrification, which is important for the release of nitrous oxide, a powerful greenhouse gas. This microbe is photophobic and usually generates a biofilm matrix, or forms clumps with other microbes, to avoid light. Nitrosomonas can be divided into six lineages: the first includes the species Nitrosomonas europaea, Nitrosomonas eutropha, Nitrosomonas halophila, and Nitrosomonas mobilis. The second lineage includes the species Nitrosomonas communis, N. sp. I and N. sp. II, while the third lineage includes only Nitrosomonas nitrosa. The fourth lineage includes the species Nitrosomonas ureae and Nitrosomonas oligotropha, and the fifth and sixth lineages include the species Nitrosomonas marina, N. sp. III, Nitrosomonas estuarii and Nitrosomonas cryotolerans.
Morphology
All species included in this genus have ellipsoidal or rod-shaped cells in which are present extensive intracytoplasmic membranes displaying as flattened vesicles.
Most species are motile, with a flagellum located in the polar region of the bacillus. Three basic morphological types of Nitrosomonas have been described: short rods, rods, and rods with pointed ends. Nitrosomonas species differ in cell size and shape:
N. europaea cells are short rods with pointed ends, measuring 0.8–1.1 × 1.0–1.7 µm; motility has not been observed.
Document 3:::
Nitrification is the biological oxidation of ammonia to nitrate via the intermediary nitrite. Nitrification is an important step in the nitrogen cycle in soil. The process of complete nitrification may occur through separate organisms or entirely within one organism, as in comammox bacteria. The transformation of ammonia to nitrite is usually the rate limiting step of nitrification. Nitrification is an aerobic process performed by small groups of autotrophic bacteria and archaea.
Microbiology
Ammonia oxidation
The process of nitrification begins with the first stage of ammonia oxidation, where ammonia (NH3) or ammonium (NH4+) is converted into nitrite (NO2-). This first stage is sometimes known as nitritation. It is performed by two groups of organisms, ammonia-oxidizing bacteria (AOB) and ammonia-oxidizing archaea (AOA).
Ammonia-Oxidizing Bacteria
Ammonia-Oxidizing Bacteria (AOB) are typically Gram-negative bacteria belonging to the Betaproteobacteria and Gammaproteobacteria, including the commonly studied genera Nitrosomonas and Nitrosococcus. They are known for their ability to utilize ammonia as an energy source and are prevalent in a wide range of environments, such as soils, aquatic systems, and wastewater treatment plants.
AOB possess enzymes called ammonia monooxygenases (AMOs), which are responsible for catalyzing the conversion of ammonia to hydroxylamine (NH2OH), a crucial intermediate in the process of nitrification. This enzymatic activity is sensitive to environmental factors, such as pH, temperature, and oxygen availability.
AOB play a vital role in soil nitrification, making them key players in nutrient cycling. They contribute to the transformation of ammonia derived from organic matter decomposition or fertilizers into nitrite, which subsequently serves as a substrate for nitrite-oxidizing bacteria (NOB).
Ammonia-Oxidizing Archaea
Prior to the discovery of archaea capable of ammonia oxidation, ammonia-oxidizing bacteria (AOB) were consi
Document 4:::
Nitrifying bacteria are chemolithotrophic organisms that include species of genera such as Nitrosomonas, Nitrosococcus, Nitrobacter, Nitrospina, Nitrospira and Nitrococcus. These bacteria get their energy from the oxidation of inorganic nitrogen compounds. Types include ammonia-oxidizing bacteria (AOB) and nitrite-oxidizing bacteria (NOB). Many species of nitrifying bacteria have complex internal membrane systems that are the location for key enzymes in nitrification: ammonia monooxygenase (which oxidizes ammonia to hydroxylamine), hydroxylamine oxidoreductase (which oxidizes hydroxylamine to nitric oxide - which is further oxidized to nitrite by a currently unidentified enzyme), and nitrite oxidoreductase (which oxidizes nitrite to nitrate).
Ecology
Nitrifying bacteria are present in distinct taxonomical groups and are found in highest numbers where considerable amounts of ammonia are present (such as areas with extensive protein decomposition, and sewage treatment plants). Nitrifying bacteria thrive in lakes, streams, and rivers with high inputs and outputs of sewage, wastewater and freshwater because of the high ammonia content.
Oxidation of ammonia to nitrate
Nitrification in nature is a two-step oxidation process of ammonium () or ammonia () to nitrite () and then to nitrate () catalyzed by two ubiquitous bacterial groups growing together. The first reaction is oxidation of ammonium to nitrite by ammonia oxidizing bacteria (AOB) represented by members of Betaproteobacteria and Gammaproteobacteria. Further organisms able to oxidize ammonia are Archaea (AOA).
The second reaction is oxidation of nitrite () to nitrate by nitrite-oxidizing bacteria (NOB), represented by the members of Nitrospinota, Nitrospirota, Pseudomonadota, and Chloroflexota.
This two-step process was described as early as 1890 by the Ukrainian microbiologist Sergei Winogradsky.
Ammonia can also be oxidized completely to nitrate by a single comammox bacterium.
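Since the two steps consume 1.5 and 0.5 mol O2 per mol N respectively (the stoichiometry below is the standard textbook form of the two half-reactions, added here only for illustration), the theoretical oxygen demand of complete nitrification follows directly. A minimal Python sketch:

# Theoretical oxygen demand of two-step nitrification:
#   NH4+ + 1.5 O2 -> NO2- + 2 H+ + H2O   (AOB/AOA, nitritation)
#   NO2- + 0.5 O2 -> NO3-                (NOB, nitratation)
M_O2 = 32.0   # g/mol
M_N = 14.0    # g/mol

o2_step1 = 1.5 * M_O2 / M_N   # g O2 per g N oxidized to nitrite
o2_step2 = 0.5 * M_O2 / M_N   # g O2 per g N oxidized to nitrate
print(f"Nitritation:  {o2_step1:.2f} g O2 / g N")                # ~3.43
print(f"Nitratation:  {o2_step2:.2f} g O2 / g N")                # ~1.14
print(f"Full process: {o2_step1 + o2_step2:.2f} g O2 / g N")     # ~4.57

The ~4.57 g O2 per g N figure is why nitrification dominates the oxygen demand of ammonia-rich wastewater.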
Ammonia-to-nitrite mechanism
Amm
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of bacteria change nitrogen gas from the atmosphere to nitrates in soil?
A. spiral bacteria
B. hydrophilic bacteria
C. multicellular bacteria
D. nitrogen fixing bacteria
Answer:
|
|
sciq-1731
|
multiple_choice
|
What is the major selective advantage of myelination?
|
[
"space efficiency",
"potentiation",
"heat regulation",
"storage capacity"
] |
A
|
Relevant Documents:
Document 0:::
Much remains to be discovered about the evolution of the brain and the principles that govern it; even what is currently known is not always well understood. The evolution of the brain appears to exhibit diverging adaptations within taxonomic classes such as Mammalia, and even more diverse adaptations across other taxonomic classes.
Brain size scales allometrically with body size. This means that as body size changes, so do other physiological, anatomical, and biochemical constructs connecting the brain to the body. Small-bodied mammals have relatively large brains compared to their bodies, whereas large mammals (such as whales) have smaller brain-to-body ratios. If brain weight is plotted against body weight for primates, the regression line of the sample points can indicate the brain power of a primate species. Lemurs, for example, fall below this line, which means that for a primate of equivalent size we would expect a larger brain. Humans lie well above the line, indicating that humans are more encephalized than lemurs. In fact, humans are more encephalized than all other primates. This means that human brains have exhibited a larger evolutionary increase in complexity relative to size. Some of these evolutionary changes have been found to be linked to multiple genetic factors, such as proteins and other organelles.
Early history of brain development
One approach to understanding overall brain evolution is to use a paleoarchaeological timeline to trace the necessity for ever increasing complexity in structures that allow for chemical and electrical signaling. Because brains and other soft tissues do not fossilize as readily as mineralized tissues, scientists often look to other structures as evidence in the fossil record to get an understanding of brain evolution. This, however, leads to a dilemma as the emergence of organisms with more complex nervous systems with protective bone or other protective tissues that can then
Document 1:::
Cat intelligence is the capacity of the domesticated cat to solve problems and adapt to its environment. Research has shown that feline intelligence includes the ability to acquire new behavior that applies knowledge to new situations, communicating needs and desires within a social group and responding to training cues.
The brain
Brain size
The brain of the domesticated cat is about long and weighs . If a typical cat is taken to be long with a weight of , then the brain would be at 0.91% of its total body mass, compared to 2.33% of total body mass in the average human. Within the encephalization quotient proposed by Jerison in 1973, values above one are classified as big-brained, while values lower than one are small-brained. The domestic cat is attributed a value between 1 and 1.71 (for comparison: human values range between 7.44 and 7.8).
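Jerison's encephalization quotient compares measured brain mass with the mass expected for a typical mammal of the same body mass, commonly written EQ = E / (0.12 P^(2/3)) with both masses in grams. A minimal Python sketch; the 0.12 coefficient and 2/3 exponent are Jerison's mammalian fit, and the cat and human figures below are round illustrative numbers, not measurements from this text:

# Encephalization quotient (Jerison 1973, mammalian fit):
#   expected brain mass = 0.12 * P**(2/3), with P = body mass in grams
def encephalization_quotient(brain_g: float, body_g: float) -> float:
    expected = 0.12 * body_g ** (2.0 / 3.0)
    return brain_g / expected

# Illustrative round numbers: a 30 g brain in a 3.3 kg cat,
# and a 1350 g brain in a 65 kg human.
print(f"cat EQ   ~ {encephalization_quotient(30, 3300):.2f}")     # ~1.1
print(f"human EQ ~ {encephalization_quotient(1350, 65000):.2f}")  # ~7.0

Both results land in the ranges quoted above (1–1.71 for cats, 7.44–7.8 for humans).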
The largest brains in the family Felidae are those of the tigers in Java and Bali. It is debated whether there exists a causal relationship between brain size and intelligence in vertebrates. Most experiments involving the relevance of brain size to intelligence hinge on the assumption that complex behavior requires a complex (and therefore intelligent) brain; however, this connection has not been consistently demonstrated.
The surface area of a cat's cerebral cortex is approximately ; furthermore, a theoretical cat weighing has a cerebellum weighing , 0.17% of the total weight.
Brain structures
According to researchers at Tufts University School of Veterinary Medicine, the physical structure of the brains of humans and cats is very similar. The human brain and the cat brain both have cerebral cortices with similar lobes.
The number of cortical neurons contained in the brain of the cat is reported to be 203 million. Area 17 of the visual cortex was found to contain about 51,400 neurons per mm3. Area 17 is the primary visual cortex.
Feline brains are gyrencephalic, i.e. they have a surface folding as human brains do.
Analyse
Document 2:::
The following outline is provided as an overview of and topical guide to brain mapping:
Brain mapping – set of neuroscience techniques predicated on the mapping of (biological) quantities or properties onto spatial representations of the (human or non-human) brain resulting in maps. Brain mapping is further defined as the study of the anatomy and function of the brain and spinal cord through the use of imaging (including intra-operative, microscopic, endoscopic and multi-modality imaging), immunohistochemistry, molecular and optogenetics, stem cell and cellular biology, engineering (material, electrical and biomedical), neurophysiology and nanotechnology.
Broad scope
History of neuroscience
History of neurology
Brain mapping
Human brain
Neuroscience
Nervous system
The neuron doctrine
Neuron doctrine – a carefully constructed set of elementary observations regarding neurons. For more granularity, more current, and more advanced topics, see the cellular level section
Asserts that neurons fall under the broader cell theory, which postulates:
All living organisms are composed of one or more cells.
The cell is the basic unit of structure, function, and organization in all organisms.
All cells come from preexisting, living cells.
The Neuron doctrine postulates several elementary aspects of neurons:
The brain is made up of individual cells (neurons) that contain specialized features such as dendrites, a cell body, and an axon.
Neurons are cells differentiable from other tissues in the body.
Neurons differ in size, shape, and structure according to their location or functional specialization.
Every neuron has a nucleus, which is the trophic center of the cell (The part which must have access to nutrition). If the cell is divided, only the portion containing the nucleus will survive.
Nerve fibers are the result of cell processes and the outgrowths of nerve cells (several axons are bound together to form one nerve fibril; see also: Neurofilament).
Document 3:::
A grid cell is a type of neuron within the entorhinal cortex that fires at regular intervals as an animal navigates an open area, allowing it to understand its position in space by storing and integrating information about location, distance, and direction. Grid cells have been found in many animals, including rats, mice, bats, monkeys, and humans.
Grid cells were discovered in 2005 by Edvard Moser, May-Britt Moser, and their students Torkel Hafting, Marianne Fyhn, and Sturla Molden at the Centre for the Biology of Memory (CBM) in Norway. They were awarded the 2014 Nobel Prize in Physiology or Medicine together with John O'Keefe for their discoveries of cells that constitute a positioning system in the brain. The arrangement of spatial firing fields, all at equal distances from their neighbors, led to a hypothesis that these cells encode a neural representation of Euclidean space. The discovery also suggested a mechanism for dynamic computation of self-position based on continuously updated information about position and direction.
To detect grid cell activity in a typical rat experiment, an electrode which can record single-neuron activity is implanted in the dorsomedial entorhinal cortex and collects recordings as the rat moves around freely in an open arena. The resulting data can be visualized by marking the rat's position on a map of the arena every time that neuron fires an action potential. These marks accumulate over time to form a set of small clusters, which in turn form the vertices of a grid of equilateral triangles. The regular triangle pattern distinguishes grid cells from other types of cells that show spatial firing. By contrast, if a place cell from the rat hippocampus is examined in the same way, then the marks will frequently only form one cluster (one "place field") in a given environment, and even when multiple clusters are seen, there is no perceptible regularity in their arrangement.
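The visualization described above (marking the animal's position at every spike) amounts to interpolating the tracked trajectory at spike times and scattering the result. A minimal matplotlib sketch, assuming the trajectory and spike times have already been loaded as NumPy arrays; the random data here is only a stand-in, and a real grid-cell recording would show the triangular lattice:

import numpy as np
import matplotlib.pyplot as plt

# Assumed inputs: positions sampled at times t (seconds) while the
# rat explores a 1 m x 1 m arena, plus one neuron's spike times.
t = np.linspace(0, 600, 60000)                 # 10 min at 100 Hz
x = np.random.uniform(0, 1, t.size)            # placeholder trajectory
y = np.random.uniform(0, 1, t.size)
spike_times = np.sort(np.random.uniform(0, 600, 2000))  # placeholder spikes

# Position of the animal at each spike, by linear interpolation.
sx = np.interp(spike_times, t, x)
sy = np.interp(spike_times, t, y)

plt.plot(x, y, color="0.8", lw=0.3)            # full trajectory, light gray
plt.scatter(sx, sy, s=4, color="red")          # one dot per action potential
plt.gca().set_aspect("equal")
plt.title("Spike locations (a grid cell forms a triangular lattice)")
plt.show()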
Background of discovery
In 1971 John O'Keefe and Jonathon
Document 4:::
The following are two lists of animals ordered by the size of their nervous system. The first list shows number of neurons in their entire nervous system, indicating their overall neural complexity. The second list shows the number of neurons in the structure that has been found to be representative of animal intelligence. The human brain contains 86 billion neurons, with 16 billion neurons in the cerebral cortex.
Scientists are engaged in counting and quantification in order to answer a central question of neuroscience and intelligence research: how the evolution of a set of components and parameters (~10^11 neurons, ~10^14 synapses) of a complex system could lead to the appearance of intelligence in the biological species "sapiens".
Overview
Neurons are the cells that transmit information in an animal's nervous system so that it can sense stimuli from its environment and behave accordingly. Not all animals have neurons; Trichoplax and sponges lack nerve cells altogether.
Neurons may be packed to form structures such as the brain of vertebrates or the neural ganglions of insects.
The number of neurons and their relative abundance in different parts of the brain is a determinant of neural function and, consequently, of behavior.
Whole nervous system
All numbers for neurons (except Caenorhabditis and Ciona), and all numbers for synapses (except Ciona) are estimations.
List of animal species by forebrain (cerebrum or pallium) neuron number
The question of what physical characteristic of an animal makes an animal intelligent has varied over the centuries. One early speculation was brain size (or weight, which provides the same ordering). A second proposal was brain-to-body-mass ratio, and a third was the encephalization quotient, sometimes referred to as EQ. The current best predictor is the number of neurons in the forebrain, based on Herculano-Houzel's improved neuron counts. It accounts most accurately for variations
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the major selective advantage of myelination?
A. space efficiency
B. potentiation
C. heat regulation
D. storage capacity
Answer:
|
|
sciq-8264
|
multiple_choice
|
The way a mineral cleaves or fractures depends on what part of the mineral?
|
[
"the core",
"the fault structure",
"the crust",
"the crystal structure"
] |
D
|
Relevant Documents:
Document 0:::
Plasticity theory for rocks is concerned with the response of rocks to loads beyond the elastic limit. Historically, conventional wisdom has it that rock is brittle and fails by fracture while plasticity is identified with ductile materials. In field scale rock masses, structural discontinuities exist in the rock indicating that failure has taken place. Since the rock has not fallen apart, contrary to expectation of brittle behavior, clearly elasticity theory is not the last word.
Theoretically, the concept of rock plasticity is based on soil plasticity which is different from metal plasticity. In metal plasticity, for example in steel, the size of a dislocation is sub-grain size while for soil it is the relative movement of microscopic grains. The theory of soil plasticity was developed in the 1960s at Rice University to provide for inelastic effects not observed in metals. Typical behaviors observed in rocks include strain softening, perfect plasticity, and work hardening.
Application of continuum theory is possible in jointed rocks because of the continuity of tractions across joints even through displacements may be discontinuous. The difference between an aggregate with joints and a continuous solid is in the type of constitutive law and the values of constitutive parameters.
Experimental evidence
Experiments are usually carried out with the intention of characterizing the mechanical behavior of rock in terms of rock strength. The strength is the limit to elastic behavior and delineates the regions where plasticity theory is applicable. Laboratory tests for characterizing rock plasticity fall into four overlapping categories: confining pressure tests, pore pressure or effective stress tests, temperature-dependent tests, and strain rate-dependent tests. Plastic behavior has been observed in rocks using all these techniques since the early 1900s.
The Boudinage experiments show that localized plasticity is observed in certain rock specimens that ha
Document 1:::
Fracture is the appearance of a crack or complete separation of an object or material into two or more pieces under the action of stress. The fracture of a solid usually occurs due to the development of certain displacement discontinuity surfaces within the solid. If a displacement develops perpendicular to the surface, it is called a normal tensile crack or simply a crack; if a displacement develops tangentially, it is called a shear crack, slip band or dislocation.
Brittle fractures occur without any apparent deformation before fracture. Ductile fractures occur after visible deformation. Fracture strength, or breaking strength, is the stress when a specimen fails or fractures. The detailed understanding of how a fracture occurs and develops in materials is the object of fracture mechanics.
Strength
Fracture strength, also known as breaking strength, is the stress at which a specimen fails via fracture. This is usually determined for a given specimen by a tensile test, which charts the stress–strain curve (see image). The final recorded point is the fracture strength.
Ductile materials have a fracture strength lower than the ultimate tensile strength (UTS), whereas in brittle materials the fracture strength is equivalent to the UTS. If a ductile material reaches its ultimate tensile strength in a load-controlled situation, it will continue to deform, with no additional load application, until it ruptures. However, if the loading is displacement-controlled, the deformation of the material may relieve the load, preventing rupture.
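Given a recorded stress-strain curve, the two quantities discussed above fall out directly: the UTS is the maximum stress, and the fracture strength is the last recorded stress before separation. A minimal NumPy sketch over an assumed, illustrative test record (the numbers are made up to resemble a ductile metal):

import numpy as np

# Assumed tensile-test record: engineering stress (MPa) vs. strain.
strain = np.array([0.000, 0.002, 0.010, 0.050, 0.100, 0.150, 0.180])
stress = np.array([0.0,   400.0, 420.0, 500.0, 520.0, 480.0, 450.0])

uts = stress.max()               # ultimate tensile strength
fracture_strength = stress[-1]   # final recorded point = fracture

print(f"UTS = {uts:.0f} MPa, fracture strength = {fracture_strength:.0f} MPa")
# For this ductile-like record the fracture strength is below the UTS,
# as the text notes; for a brittle material the two coincide.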
The statistics of fracture in random materials show very intriguing behavior, which architects and engineers noted quite early. Indeed, fracture and breakdown studies might be among the oldest physical science studies, and they remain intriguing and very much alive. Leonardo da Vinci, more than 500 years ago, observed that the tensile strengths of nominally identical specimens of iron wire decrease with increasing length of the wires.
Document 2:::
Comminution is the reduction of solid materials from one average particle size to a smaller average particle size, by crushing, grinding, cutting, vibrating, or other processes. In geology, it occurs naturally during faulting in the upper part of the Earth's crust. In industry, it is an important unit operation in mineral processing, ceramics, electronics, and other fields, accomplished with many types of mill. In dentistry, it is the result of mastication of food. In general medicine, it is one of the most traumatic forms of bone fracture.
Within industrial uses, the purpose of comminution is to reduce the size and to increase the surface area of solids. It is also used to free useful materials from matrix materials in which they are embedded, and to concentrate minerals.
Energy requirements
The comminution of solid materials consumes energy, which is used to break up the solid into smaller pieces. The comminution energy can be estimated by one of several empirical laws (a worked sketch of Bond's law follows this list):
Rittinger's law, which assumes that the energy consumed is proportional to the newly generated surface area;
Kick's law, which related the energy to the sizes of the feed particles and the product particles;
Bond's law, which assumes that the total work useful in breakage is inversely proportional to the square root of the diameter of the product particles, [implying] theoretically that the work input varies as the length of the new cracks made in breakage.
Holmes's law, which modifies Bond's law by substituting the square root with an exponent that depends on the material.
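As an illustration, Bond's law is commonly written in industrial practice as W = 10 Wi (1/sqrt(P80) - 1/sqrt(F80)), with W the specific energy in kWh/t, Wi the Bond work index, and F80/P80 the 80%-passing sizes of feed and product in micrometres. A minimal Python sketch; the work-index value below is a typical published order of magnitude for ore, used only for illustration:

import math

def bond_energy_kwh_per_t(work_index: float, f80_um: float, p80_um: float) -> float:
    """Specific comminution energy by Bond's law (kWh per tonne).

    work_index: Bond work index Wi in kWh/t
    f80_um, p80_um: 80%-passing sizes of feed and product in micrometres
    """
    return 10.0 * work_index * (1.0 / math.sqrt(p80_um) - 1.0 / math.sqrt(f80_um))

# Example: grinding from F80 = 10 mm to P80 = 100 um with Wi = 15 kWh/t
print(f"{bond_energy_kwh_per_t(15.0, 10_000, 100):.1f} kWh/t")  # ~13.5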
Forces
There are three forces which typically are used to effect the comminution of particles: impact, shear, and compression.
Methods
There are several methods of comminution. Comminution of solid materials requires different types of crushers and mills depending on the feed properties such as hardness at various size ranges and application requirements such as throughput and maintenance. The most common machines for the comminution of coarse
Document 3:::
In mineralogy, tenacity is a mineral's behavior when deformed or broken.
Common terms
Brittleness
The mineral breaks or powders easily. Most ionic-bonded minerals are brittle.
Malleability
The mineral may be pounded out into thin sheets. Metallic-bonded minerals are usually malleable.
Ductility
The mineral may be drawn into a wire. Ductile materials have to be malleable as well as tough.
Sectility
May be cut smoothly with a knife. Relatively few minerals are sectile. Sectility is a form of tenacity and can be used to distinguish minerals of similar appearance. Gold, for example, is sectile but pyrite ("fool's gold") is not.
Elasticity
If bent by an external force, an elastic mineral will spring back to its original shape and size when the stress, that is, the external force, is released.
Plasticity
If bent by an external force, a plastic mineral will not spring back to its original shape and size when the stress, that is, the external force, is released. It stays bent.
Document 4:::
In geology, rock (or stone) is any naturally occurring solid mass or aggregate of minerals or mineraloid matter. It is categorized by the minerals included, its chemical composition, and the way in which it is formed. Rocks form the Earth's outer solid layer, the crust, and most of its interior, except for the liquid outer core and pockets of magma in the asthenosphere. The study of rocks involves multiple subdisciplines of geology, including petrology and mineralogy. It may be limited to rocks found on Earth, or it may include planetary geology that studies the rocks of other celestial objects.
Rocks are usually grouped into three main groups: igneous rocks, sedimentary rocks and metamorphic rocks. Igneous rocks are formed when magma cools in the Earth's crust, or lava cools on the ground surface or the seabed. Sedimentary rocks are formed by diagenesis and lithification of sediments, which in turn are formed by the weathering, transport, and deposition of existing rocks. Metamorphic rocks are formed when existing rocks are subjected to such high pressures and temperatures that they are transformed without significant melting.
Humanity has made use of rocks since the earliest humans. This early period, called the Stone Age, saw the development of many stone tools. Stone was then used as a major component in the construction of buildings and early infrastructure. Mining developed to extract rocks from the Earth and obtain the minerals within them, including metals. Modern technology has allowed the development of new man-made rocks and rock-like substances, such as concrete.
Study
Geology is the study of Earth and its components, including the study of rock formations. Petrology is the study of the character and origin of rocks. Mineralogy is the study of the mineral components that create rocks. The study of rocks and their components has contributed to the geological understanding of Earth's history, the archaeological understanding of human history, and the
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The way a mineral cleaves or fractures depends on what part of the mineral?
A. the core
B. the fault structure
C. the crust
D. the crystal structure
Answer:
|
|
sciq-4011
|
multiple_choice
|
What causes halide minerals to form?
|
[
"salt water accumulation",
"salt water evaporation",
"salt water ionization",
"fresh water ionization"
] |
B
|
Relevant Documents:
Document 0:::
A solid solution, a term popularly used for metals, is a homogeneous mixture of two different kinds of atoms in solid state and having a single crystal structure. Many examples can be found in metallurgy, geology, and solid-state chemistry. The word "solution" is used to describe the intimate mixing of components at the atomic level and distinguishes these homogeneous materials from physical mixtures of components. Two terms are mainly associated with solid solutions – solvents and solutes, depending on the relative abundance of the atomic species.
In general, if two compounds are isostructural, then a solid solution will exist between the end members (also known as parents). For example, sodium chloride and potassium chloride have the same cubic crystal structure, so it is possible to make a compound with any ratio of sodium to potassium, (Na1-xKx)Cl, by dissolving that ratio of NaCl and KCl in water and then evaporating the solution. A member of this family is sold under the brand name Lo Salt, which is (Na0.33K0.66)Cl; hence it contains 66% less sodium than normal table salt (NaCl). The pure minerals are called halite and sylvite; a physical mixture of the two is referred to as sylvinite.
Because minerals are natural materials they are prone to large variations in composition. In many cases specimens are members of a solid-solution family, and geologists find it more helpful to discuss the composition of the family than that of an individual specimen. Olivine is described by the formula (Mg, Fe)2SiO4, which is equivalent to (Mg1−xFex)2SiO4. The ratio of magnesium to iron varies between the two endmembers of the solid solution series: forsterite (Mg-endmember: Mg2SiO4) and fayalite (Fe-endmember: Fe2SiO4), but the ratio in olivine is not normally defined. With increasingly complex compositions the geological notation becomes significantly easier to manage than the chemical notation.
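The family notation makes compositional calculations straightforward: any intermediate olivine molar mass is a linear mix of the endmember contributions. A minimal Python sketch using standard atomic masses (forsterite and fayalite fall out at x = 0 and x = 1):

# Molar mass of olivine (Mg(1-x)Fe(x))2 SiO4 as a function of iron fraction x.
MG, FE, SI, O = 24.305, 55.845, 28.086, 15.999  # g/mol

def olivine_molar_mass(x: float) -> float:
    return 2.0 * ((1.0 - x) * MG + x * FE) + SI + 4.0 * O

print(f"forsterite Mg2SiO4: {olivine_molar_mass(0.0):.2f} g/mol")  # ~140.69
print(f"fayalite   Fe2SiO4: {olivine_molar_mass(1.0):.2f} g/mol")  # ~203.77
print(f"Fo50Fa50 (x = 0.5): {olivine_molar_mass(0.5):.2f} g/mol")  # ~172.23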
Nomenclature
The IUPAC definition of a solid solution is a "solid in which components ar
Document 1:::
See also
List of minerals
Document 2:::
In common usage, salt is a mineral composed primarily of sodium chloride (NaCl). When used in food, especially at table in ground form in dispensers, it is more formally called table salt. In the form of a natural crystalline mineral, salt is also known as rock salt or halite. Salt is essential for life in general, and saltiness is one of the basic human tastes. Salt is one of the oldest and most ubiquitous food seasonings, and is known to uniformly improve the taste perception of food, including otherwise unpalatable food. Salting, brining, and pickling are also ancient and important methods of food preservation.
Some of the earliest evidence of salt processing dates to around 6000 BC, when people living in the area of present-day Romania boiled spring water to extract salts; a salt works in China dates to approximately the same period. Salt was also prized by the ancient Hebrews, Greeks, Romans, Byzantines, Hittites, Egyptians, and Indians. Salt became an important article of trade and was transported by boat across the Mediterranean Sea, along specially built salt roads, and across the Sahara on camel caravans. The scarcity and universal need for salt have led nations to go to war over it and use it to raise tax revenues. Salt is used in religious ceremonies and has other cultural and traditional significance.
Salt is processed from salt mines, and by the evaporation of seawater (sea salt) and mineral-rich spring water in shallow pools. The greatest single use for salt (sodium chloride) is as a feedstock for the production of chemicals. It is used to produce caustic soda and chlorine; it is also used in the manufacturing processes of polyvinyl chloride, plastics, paper pulp and many other products. Of the annual global production of around three hundred million tonnes of salt, only a small percentage is used for human consumption. Other uses include water conditioning processes, de-icing highways, and agricultural use. Edible salt is sold in forms such as sea s
Document 3:::
Natural occurrence
Iron dissolved in groundwater is in the reduced iron II form. If this groundwater comes in c
Document 4:::
Clathrate hydrates, or gas hydrates, clathrates, or hydrates, are crystalline water-based solids physically resembling ice, in which small non-polar molecules (typically gases) or polar molecules with large hydrophobic moieties are trapped inside "cages" of hydrogen-bonded, frozen water molecules. In other words, clathrate hydrates are clathrate compounds in which the host molecule is water and the guest molecule is typically a gas or liquid. Without the support of the trapped molecules, the lattice structure of hydrate clathrates would collapse into the conventional ice crystal structure or liquid water. Most low-molecular-weight gases, as well as some higher hydrocarbons and freons, will form hydrates at suitable temperatures and pressures. Clathrate hydrates are not officially chemical compounds, as the enclathrated guest molecules are never bonded to the lattice. The formation and decomposition of clathrate hydrates are first-order phase transitions, not chemical reactions. Their detailed formation and decomposition mechanisms on a molecular level are still not well understood.
Clathrate hydrates were first documented in 1810 by Sir Humphry Davy who found that water was a primary component of what was earlier thought to be solidified chlorine.
Clathrates have been found to occur naturally in large quantities. Around 6.4 trillion tonnes of methane is trapped in deposits of methane clathrate on the deep ocean floor. Such deposits can be found on the Norwegian continental shelf in the northern headwall flank of the Storegga Slide. Clathrates can also exist as permafrost, as at the Mallik gas hydrate site in the Mackenzie Delta of the northwestern Canadian Arctic. These natural gas hydrates are seen as a potentially vast energy resource and several countries have dedicated national programs to develop this energy resource. Clathrate hydrate has also been of great interest as a technology enabler for many applications like seawater desalina
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What causes halide minerals to form?
A. salt water accumulation
B. salt water evaporation
C. salt water ionization
D. fresh water ionization
Answer:
|
|
sciq-25
|
multiple_choice
|
This sharing of electrons produces what is known as a covalent bond. Covalent bonds are ~20 to 50 times stronger than what?
|
[
"Newton's third law",
"van der waals interactions",
"Mendelian systems",
"gravitational pull"
] |
B
|
Relevant Documents:
Document 0:::
An intramolecular force (or primary forces) is any force that binds together the atoms making up a molecule or compound, not to be confused with intermolecular forces, which are the forces present between molecules. The subtle difference in the name comes from the Latin roots of English with inter meaning between or among and intra meaning inside. Chemical bonds are considered to be intramolecular forces which are often stronger than intermolecular forces present between non-bonding atoms or molecules.
Types
The classical model identifies three main types of chemical bonds (ionic, covalent, and metallic), distinguished by the degree of charge separation between participating atoms. The characteristics of the bond formed can be predicted from the properties of the constituent atoms, namely electronegativity. The bond types differ in the magnitude of their bond enthalpies, a measure of bond strength, and thus affect the physical and chemical properties of compounds in different ways. The percentage of ionic character is directly proportional to the difference in electronegativity of the bonded atoms.
Ionic bond
An ionic bond can be approximated as the complete transfer of one or more valence electrons of atoms participating in bond formation, resulting in a positive ion and a negative ion bound together by electrostatic forces. Electrons in an ionic bond tend to be found mostly around one of the two constituent atoms due to the large electronegativity difference between the two atoms, generally more than 1.9 (a greater difference in electronegativity results in a stronger bond); this is often described as one atom giving electrons to the other. This type of bond is generally formed between a metal and a nonmetal, such as sodium and chlorine in NaCl. Sodium would give an electron to chlorine, forming a positively charged sodium ion and a negatively charged chloride ion.
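The electronegativity heuristic described above is easy to encode. A minimal Python sketch using the 1.9 ionic cutoff from this text plus a commonly quoted 0.4 cutoff for the nonpolar/polar boundary; both thresholds are rules of thumb, not sharp physical boundaries:

# Rough bond-type classification from Pauling electronegativities.
PAULING = {"H": 2.20, "C": 2.55, "N": 3.04, "O": 3.44, "Na": 0.93, "Cl": 3.16}

def classify_bond(a: str, b: str) -> str:
    delta = abs(PAULING[a] - PAULING[b])
    if delta > 1.9:
        return f"{a}-{b}: ionic (dEN = {delta:.2f})"
    if delta > 0.4:
        return f"{a}-{b}: polar covalent (dEN = {delta:.2f})"
    return f"{a}-{b}: nonpolar covalent (dEN = {delta:.2f})"

print(classify_bond("Na", "Cl"))  # ionic, dEN = 2.23
print(classify_bond("H", "O"))    # polar covalent, dEN = 1.24
print(classify_bond("C", "H"))    # nonpolar covalent, dEN = 0.35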
Covalent bond
In a true covalent bond, the electrons are shared evenly between the two atoms of the bond; there is little or no charge separa
Document 1:::
A bonding electron is an electron involved in chemical bonding. This can refer to:
Chemical bond, a lasting attraction between atoms, ions or molecules
Covalent bond or molecular bond, a sharing of electron pairs between atoms
Bonding molecular orbital, an attraction between the atomic orbitals of atoms in a molecule
Chemical bonding
Document 2:::
In chemistry, an electron pair or Lewis pair consists of two electrons that occupy the same molecular orbital but have opposite spins. Gilbert N. Lewis introduced the concepts of both the electron pair and the covalent bond in a landmark paper he published in 1916.
Because electrons are fermions, the Pauli exclusion principle forbids these particles from having the same quantum numbers. Therefore, for two electrons to occupy the same orbital, and thereby have the same orbital quantum number, they must have different spin quantum number. This also limits the number of electrons in the same orbital to two.
The pairing of spins is often energetically favorable, and electron pairs therefore play a large role in chemistry. They can form a chemical bond between two atoms, or they can occur as a lone pair of valence electrons. They also fill the core levels of an atom.
Because the spins are paired, the magnetic moment of the electrons cancel one another, and the pair's contribution to magnetic properties is generally diamagnetic.
Although a strong tendency to pair off electrons can be observed in chemistry, it is also possible that electrons occur as unpaired electrons.
In the case of metallic bonding the magnetic moments also compensate to a large extent, but the bonding is more communal so that individual pairs of electrons cannot be distinguished and it is better to consider the electrons as a collective 'sea'.
A very special case of electron pair formation occurs in superconductivity: the formation of Cooper pairs. In unconventional superconductors, whose crystal structure contains copper anions, the electron pair bond is due to antiferromagnetic spin fluctuations.
See also
Electron pair production
Frustrated Lewis pair
Jemmis mno rules
Lewis acids and bases
Nucleophile
Polyhedral skeletal electron pair theory
Document 3:::
A chemical bonding model is a theoretical model used to explain atomic bonding structure, molecular geometry, properties, and reactivity of physical matter. This can refer to:
VSEPR theory, a model of molecular geometry.
Valence bond theory, which describes molecular electronic structure with localized bonds and lone pairs.
Molecular orbital theory, which describes molecular electronic structure with delocalized molecular orbitals.
Crystal field theory, an electrostatic model for transition metal complexes.
Ligand field theory, the application of molecular orbital theory to transition metal complexes.
Chemical bonding
Document 4:::
Molecular binding is an attractive interaction between two molecules that results in a stable association in which the molecules are in close proximity to each other. It is formed when atoms or molecules bind together by sharing of electrons. It often, but not always, involves some chemical bonding.
In some cases, the associations can be quite strong. For example, the protein streptavidin and the vitamin biotin have a dissociation constant (reflecting the ratio between bound and free biotin) on the order of 10^-14, and so the reactions are effectively irreversible. The result of molecular binding is sometimes the formation of a molecular complex in which the attractive forces holding the components together are generally non-covalent, and thus are normally energetically weaker than covalent bonds.
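The strength of such an association can be expressed as a standard binding free energy via dG = RT ln Kd. A minimal Python sketch showing why a Kd on the order of 10^-14 is described as effectively irreversible; room temperature is assumed, and the "typical weak complex" Kd is an illustrative value:

import math

R = 8.314    # J/(mol*K)
T = 298.15   # K, room temperature assumed

def binding_free_energy_kj(kd: float) -> float:
    """Standard free energy of binding (kJ/mol) from a dissociation constant."""
    return R * T * math.log(kd) / 1000.0

print(f"streptavidin-biotin (Kd ~ 1e-14): {binding_free_energy_kj(1e-14):.1f} kJ/mol")  # ~ -79.9
print(f"typical weak complex (Kd ~ 1e-4): {binding_free_energy_kj(1e-4):.1f} kJ/mol")   # ~ -22.8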
Molecular binding occurs in biological complexes (e.g., between pairs or sets of proteins, or between a protein and a small molecule ligand it binds) and also in abiologic chemical systems, e.g. as in cases of coordination polymers and coordination networks such as metal-organic frameworks.
Types
Molecular binding can be classified into the following types:
Non-covalent – no chemical bonds are formed between the two interacting molecules hence the association is fully reversible
Reversible covalent – a chemical bond is formed, however the free energy difference separating the noncovalently-bonded reactants from bonded product is near equilibrium and the activation barrier is relatively low such that the reverse reaction which cleaves the chemical bond easily occurs
Irreversible covalent – a chemical bond is formed in which the product is thermodynamically much more stable than the reactants such that the reverse reaction does not take place.
Bound molecules are sometimes called a "molecular complex"—the term generally refers to non-covalent associations. Non-covalent interactions can effectively become irreversible; for example, tight binding inhibitors of enzymes
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
This sharing of electrons produces what is known as a covalent bond. Covalent bonds are ~20 to 50 times stronger than what?
A. Newton's third law
B. van der waals interactions
C. Mendelian systems
D. gravitational pull
Answer:
|
|
sciq-5526
|
multiple_choice
|
What can be used to determine the mass of a quantity of material?
|
[
"molar masses",
"atomic masses",
"gravitational mass",
"inertial mass"
] |
A
|
Relevant Documents:
Document 0:::
A proof mass or test mass is a known quantity of mass used in a measuring instrument as a reference for the measurement of an unknown quantity.
A mass used to calibrate a weighing scale is sometimes called a calibration mass or calibration weight.
A proof mass that deforms a spring in an accelerometer is sometimes called the seismic mass. In a convective accelerometer, a fluid proof mass may be employed.
See also
Calibration, checking or adjustment by comparison with a standard
Control variable, the experimental element that is constant and unchanged throughout the course of a scientific investigation
Test particle, an idealized model of an object in which all physical properties are assumed to be negligible, except for the property being studied
Document 1:::
The atomic mass (m_a or m) is the mass of an atom. Although the SI unit of mass is the kilogram (symbol: kg), atomic mass is often expressed in the non-SI unit dalton (symbol: Da) – equivalently, unified atomic mass unit (u). 1 Da is defined as 1/12 of the mass of a free carbon-12 atom at rest in its ground state. The protons and neutrons of the nucleus account for nearly all of the total mass of atoms, with the electrons and nuclear binding energy making minor contributions. Thus, the numeric value of the atomic mass when expressed in daltons has nearly the same value as the mass number. Conversion between mass in kilograms and mass in daltons can be done using the atomic mass constant m_u.
The formula used for conversion is:
1 Da = m_u = M_u / N_A = M(12C) / (12 N_A),
where M_u is the molar mass constant, N_A is the Avogadro constant, and M(12C) is the experimentally determined molar mass of carbon-12.
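Numerically the conversion is a single multiplication by the atomic mass constant. A minimal Python sketch; the constant is the CODATA 2018 value, and the function names are illustrative:

# Conversion between daltons and kilograms via the atomic mass constant.
M_U = 1.66053906660e-27   # kg per dalton (CODATA 2018 value)

def daltons_to_kg(m_da: float) -> float:
    return m_da * M_U

def kg_to_daltons(m_kg: float) -> float:
    return m_kg / M_U

print(daltons_to_kg(12.0))     # mass of one 12C atom, ~1.9927e-26 kg
print(kg_to_daltons(1.0e-26))  # ~6.02 Da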
The relative isotopic mass (see section below) can be obtained by dividing the atomic mass m_a of an isotope by the atomic mass constant m_u, yielding a dimensionless value. Thus, the atomic mass of a carbon-12 atom is 12 Da by definition, but the relative isotopic mass of a carbon-12 atom is simply 12. The sum of relative isotopic masses of all atoms in a molecule is the relative molecular mass.
The atomic mass of an isotope and the relative isotopic mass refers to a certain specific isotope of an element. Because substances are usually not isotopically pure, it is convenient to use the elemental atomic mass which is the average (mean) atomic mass of an element, weighted by the abundance of the isotopes. The dimensionless (standard) atomic weight is the weighted mean relative isotopic mass of a (typical naturally occurring) mixture of isotopes.
The atomic mass of atoms, ions, or atomic nuclei is slightly less than the sum of the masses of their constituent protons, neutrons, and electrons, due to binding energy mass loss (per E = mc^2).
Relative isotopic mass
Relative isotopic mass (a property of a single atom) is not to be confused w
Document 2:::
To help compare different orders of magnitude, the following lists describe various mass levels between 10^-59 kg and 10^52 kg. The least massive thing listed here is a graviton, and the most massive thing is the observable universe. Typically, an object having greater mass will also have greater weight (see mass versus weight), especially if the objects are subject to the same gravitational field strength.
Units of mass
This list is based on the kilogram (kg), the base unit of mass in the International System of Units (SI). The kilogram is the only standard unit to include an SI prefix (kilo-) as part of its name. The gram (10^-3 kg) is an SI derived unit of mass. However, the names of all SI mass units are based on gram, rather than on kilogram; thus 10^3 kg is a megagram (10^6 g), not a *kilokilogram.
The tonne (t) is an SI-compatible unit of mass equal to a megagram (Mg), or 10^3 kg. The unit is in common use for masses above about 10^3 kg and is often used with SI prefixes. For example, a gigagram (Gg) or 10^9 g is 10^3 tonnes, commonly called a kilotonne.
Other units
Other units of mass are also in use. Historical units include the stone, the pound, the carat, and the grain.
For subatomic particles, physicists use the mass equivalent to the energy represented by an electronvolt (eV). At the atomic level, chemists use the mass of one-twelfth of a carbon-12 atom (the dalton). Astronomers use the mass of the sun.
The least massive things: below 10^-24 kg
Unlike other physical quantities, mass–energy does not have an a priori expected minimal quantity, or an observed basic quantum as in the case of electric charge. Planck's law allows for the existence of photons with arbitrarily low energies. Consequently, there can only ever be an experimental upper bound on the mass of a supposedly massless particle; in the case of the photon, only a very small confirmed upper bound exists.
10^-24 to 10^-18 kg
10^-18 to 10^-12 kg
10^-12 to 10^-6 kg
10^-6 to 1 kg
Document 3:::
The mass recorded by a mass spectrometer can refer to different physical quantities depending on the characteristics of the instrument and the manner in which the mass spectrum is displayed.
Units
The dalton (symbol: Da) is the standard unit that is used for indicating mass on an atomic or molecular scale (atomic mass). The unified atomic mass unit (symbol: u) is equivalent to the dalton. One dalton is approximately the mass of a single proton or neutron. The unified atomic mass unit has a value of approximately 1.660539 × 10^-27 kg. The amu without the "unified" prefix is an obsolete unit based on oxygen, which was replaced in 1961.
Molecular mass
The molecular mass (abbreviated Mr) of a substance, formerly also called molecular weight and abbreviated as MW, is the mass of one molecule of that substance, relative to the unified atomic mass unit u (equal to 1/12 the mass of one atom of 12C). Because it is expressed relative to this unit, the molecular mass of a substance is commonly referred to as the relative molecular mass, abbreviated Mr.
Average mass
The average mass of a molecule is obtained by summing the average atomic masses of the constituent elements. For example, the average mass of natural water with formula H2O is 1.00794 + 1.00794 + 15.9994 = 18.01528 Da.
Mass number
The mass number, also called the nucleon number, is the number of protons and neutrons in an atomic nucleus. The mass number is unique for each isotope of an element and is written either after the element name or as a superscript to the left of an element's symbol. For example, carbon-12 (12C) has 6 protons and 6 neutrons.
Nominal mass
The nominal mass for an element is the mass number of its most abundant naturally occurring stable isotope, and for an ion or molecule, the nominal mass is the sum of the nominal masses of the constituent atoms. Isotope abundances are tabulated by IUPAC: for example carbon has two stable isotopes 12C at 98.9% natural abundance and 13C at 1.1% natural abundance, thus the nominal mass of carbon i
Document 4:::
Monoisotopic mass (Mmi) is one of several types of molecular masses used in mass spectrometry. The theoretical monoisotopic mass of a molecule is computed by taking the sum of the accurate masses (including mass defect) of the most abundant naturally occurring stable isotope of each atom in the molecule. For small molecules made up of low atomic number elements, the monoisotopic mass is observable as an isotopically pure peak in a mass spectrum. This differs from the nominal molecular mass, which is the sum of the mass numbers of the primary isotope of each atom in the molecule and is an integer. It also differs from the molar mass, which is a type of average mass. For some elements, such as carbon, oxygen, hydrogen, nitrogen, and sulfur, the monoisotopic mass equals the mass of the lightest isotope, because the most abundant isotope is also the lightest. However, this does not hold true for all atoms. Iron's most common isotope has a mass number of 56, while the stable isotopes of iron vary in mass number from 54 to 58. Monoisotopic mass is typically expressed in daltons (Da), also called unified atomic mass units (u).
Nominal mass vs monoisotopic mass
Nominal mass
Nominal mass is a term used in high-level mass spectrometric discussions; it can be calculated using the mass number of the most abundant isotope of each atom, without regard for the mass defect. For example, the nominal masses of a molecule of nitrogen (N2) and of ethylene (C2H4) come out the same:
N2: (2 × 14) = 28 Da
C2H4: (2 × 12) + (4 × 1) = 28 Da
What this means is that when using a low-resolution mass spectrometer, such as a quadrupole mass analyser or a quadrupole ion trap, these two molecules cannot be distinguished after ionization; their m/z peaks overlap. If a high-resolution instrument such as an Orbitrap or an ion cyclotron resonance analyser is used, the two molecules can be distinguished.
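The distinction can be made concrete by computing both masses for the two molecules. A minimal Python sketch; the isotope masses are standard values for the most abundant isotopes, and nominal masses use integer mass numbers:

# Nominal vs. monoisotopic mass of N2 and C2H4.
NOMINAL = {"C": 12, "H": 1, "N": 14}                        # mass numbers
MONO = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052}   # Da

def masses(formula: dict) -> tuple:
    nominal = sum(NOMINAL[el] * n for el, n in formula.items())
    mono = sum(MONO[el] * n for el, n in formula.items())
    return nominal, mono

for name, formula in [("N2", {"N": 2}), ("C2H4", {"C": 2, "H": 4})]:
    nom, mono = masses(formula)
    print(f"{name}: nominal = {nom} Da, monoisotopic = {mono:.4f} Da")
# N2:   nominal = 28 Da, monoisotopic = 28.0061 Da
# C2H4: nominal = 28 Da, monoisotopic = 28.0313 Da
# Identical at low resolution; ~0.025 Da apart at high resolution.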
Monoisotopic mass
When calculating
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What can be used to determine the mass of a quantity of material?
A. molar masses
B. atomic masses
C. gravitational mass
D. inertial mass
Answer:
|
|
ai2_arc-400
|
multiple_choice
|
Which of the following data would be most useful for describing the climate of a specific area?
|
[
"average weekly wind speeds for 1 month",
"daily relative humidity levels for 18 months",
"total annual precipitation amounts for 2 years",
"average high and low monthly temperatures for 20 years"
] |
D
|
Relevant Documents:
Document 0:::
The following outline is provided as an overview of and topical guide to the field of Meteorology.
Meteorology – the interdisciplinary scientific study of the Earth's atmosphere, with the primary focus being to understand, explain, and forecast weather events. Meteorology is applied to and employed by a wide variety of diverse fields, including the military, energy production, transport, agriculture, and construction.
Essence of meteorology
Meteorology
Climate – the average and variations of weather in a region over long periods of time.
Meteorology – the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting (in contrast with climatology).
Weather – the set of all the phenomena in a given atmosphere at a given time.
Branches of meteorology
Microscale meteorology – the study of atmospheric phenomena about 1 km or less, smaller than mesoscale, including small and generally fleeting cloud "puffs" and other small cloud features
Mesoscale meteorology – the study of weather systems about 5 kilometers to several hundred kilometers, smaller than synoptic scale systems but larger than microscale and storm-scale cumulus systems, such as sea breezes, squall lines, and mesoscale convective complexes
Synoptic scale meteorology – the study of weather systems at a horizontal length scale of the order of 1000 kilometres (about 620 miles) or more
Methods in meteorology
Surface weather analysis – a special type of weather map that provides a view of weather elements over a geographical area at a specified time based on information from ground-based weather stations
Weather forecasting
Weather forecasting – the application of science and technology to predict the state of the atmosphere for a future time and a given location
Data collection
Pilot Reports
Weather maps
Weather map
Surface weather analysis
Forecasts and reporting of
Atmospheric pressure
Dew point
High-pressure area
Ice
Black ice
Frost
Low-pressure area
Precipitation
Document 1:::
This is a list of meteorology topics. The terms relate to meteorology, the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting. (see also: List of meteorological phenomena)
A
advection
aeroacoustics
aerobiology
aerography (meteorology)
aerology
air parcel (in meteorology)
air quality index (AQI)
airshed (in meteorology)
American Geophysical Union (AGU)
American Meteorological Society (AMS)
anabatic wind
anemometer
annular hurricane
anticyclone (in meteorology)
apparent wind
Atlantic Oceanographic and Meteorological Laboratory (AOML)
Atlantic hurricane season
atmometer
atmosphere
Atmospheric Model Intercomparison Project (AMIP)
Atmospheric Radiation Measurement (ARM)
(atmospheric boundary layer [ABL]) planetary boundary layer (PBL)
atmospheric chemistry
atmospheric circulation
atmospheric convection
atmospheric dispersion modeling
atmospheric electricity
atmospheric icing
atmospheric physics
atmospheric pressure
atmospheric sciences
atmospheric stratification
atmospheric thermodynamics
atmospheric window (see under Threats)
B
ball lightning
balloon (aircraft)
baroclinity
barotropity
barometer ("to measure atmospheric pressure")
berg wind
biometeorology
blizzard
bomb (meteorology)
buoyancy
Bureau of Meteorology (in Australia)
C
Canada Weather Extremes
Canadian Hurricane Centre (CHC)
Cape Verde-type hurricane
capping inversion (in meteorology) (see "severe thunderstorms" in paragraph 5)
carbon cycle
carbon fixation
carbon flux
carbon monoxide (see under Atmospheric presence)
ceiling balloon ("to determine the height of the base of clouds above ground level")
ceilometer ("to determine the height of a cloud base")
celestial coordinate system
celestial equator
celestial horizon (rational horizon)
celestial navigation (astronavigation)
celestial pole
Celsius
Center for Analysis and Prediction of Storms (CAPS) (in Oklahoma in the US)
Center for the Study o
Document 2:::
Average yearly temperature is calculated by averaging the minimum and maximum daily temperatures in the country, averaged for the years 1961–1990, based on gridded climatologies from the Climatic Research Unit elaborated in 2011.
Data source: Mitchell, T.D., Carter, T.R., Jones, P.D., Hulme, M., New, M., 2003: A Comprehensive Set of High-Resolution Grids of Monthly Climate for Europe and the Globe: the Observed Record (1901-2000) and 16 Scenarios (2001-2100). J. Climate: submitted.
See also
List of countries by average annual precipitation
Notes
Document 3:::
Surface weather observations are the fundamental data used for safety as well as climatological reasons to forecast weather and issue warnings worldwide. They can be taken manually, by a weather observer, by computer through the use of automated weather stations, or in a hybrid scheme using weather observers to augment the otherwise automated weather station. The ICAO defines the International Standard Atmosphere (ISA), which is the model of the standard variation of pressure, temperature, density, and viscosity with altitude in the Earth's atmosphere, and is used to reduce a station pressure to sea level pressure. Airport observations can be transmitted worldwide through the use of the METAR observing code. Personal weather stations taking automated observations can transmit their data to the United States mesonet through the Citizen Weather Observer Program (CWOP), the UK Met Office through their Weather Observations Website (WOW), or internationally through the Weather Underground Internet site. A thirty-year average of a location's weather observations is traditionally used to determine the station's climate. In the US a network of Cooperative Observers make a daily record of summary weather and sometimes water level information.
History
Reverend John Campanius Holm is credited with taking the first systematic weather observations in Colonial America. He was a chaplain in the Swedes Fort colony near the mouth of the Delaware River. Holm recorded daily observations without instruments during 1644 and 1645. Numerous other accounts of weather events on the East Coast were documented during the 17th century. President George Washington kept a detailed weather diary during the late 1700s at Mount Vernon, Virginia. The number of routine weather observers increased significantly during the 1800s. In 1807, Dr. B. S. Barton of the University of Pennsylvania requested members throughout the Union of the Linnaean Society of Philadelphia to maintain instrumented wea
Document 4:::
In atmospheric science, an atmospheric model is a mathematical model constructed around the full set of primitive, dynamical equations which govern atmospheric motions. It can supplement these equations with parameterizations for turbulent diffusion, radiation, moist processes (clouds and precipitation), heat exchange, soil, vegetation, surface water, the kinematic effects of terrain, and convection. Most atmospheric models are numerical, i.e. they discretize equations of motion. They can predict microscale phenomena such as tornadoes and boundary layer eddies, sub-microscale turbulent flow over buildings, as well as synoptic and global flows. The horizontal domain of a model is either global, covering the entire Earth, or regional (limited-area), covering only part of the Earth. The different types of models run are thermotropic, barotropic, hydrostatic, and nonhydrostatic. Some of the model types make assumptions about the atmosphere which lengthens the time steps used and increases computational speed.
Forecasts are computed using mathematical equations for the physics and dynamics of the atmosphere. These equations are nonlinear and are impossible to solve exactly. Therefore, numerical methods obtain approximate solutions. Different models use different solution methods. Global models often use spectral methods for the horizontal dimensions and finite-difference methods for the vertical dimension, while regional models usually use finite-difference methods in all three dimensions. For specific locations, model output statistics use climate information, output from numerical weather prediction, and current surface weather observations to develop statistical relationships which account for model bias and resolution issues.
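As a toy illustration of the finite-difference idea mentioned above (a sketch only, far simpler than any operational model's scheme; all names and parameter choices are illustrative), the following Python snippet advances 1-D linear advection, du/dt + c·du/dx = 0, with a first-order upwind discretization on a periodic grid:

```python
import math

# Toy 1-D advection with a first-order upwind finite-difference scheme.
nx, dx = 100, 1.0          # number of grid points and grid spacing
c, dt = 1.0, 0.5           # advection speed and time step (CFL = c*dt/dx = 0.5)
u = [math.exp(-0.01 * (i - 20) ** 2) for i in range(nx)]  # initial Gaussian blob

for _ in range(60):        # march 60 steps forward in time
    # u[i-1] wraps around at i = 0, giving a periodic boundary condition
    u = [u[i] - c * dt / dx * (u[i] - u[i - 1]) for i in range(nx)]

# The blob's peak, initially at i = 20, has advected ~30 cells to the right
# (with some smearing: numerical diffusion is a known artifact of upwinding).
print(max(range(nx), key=lambda i: u[i]))
```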
Types
The main assumption made by the thermotropic model is that while the magnitude of the thermal wind may change, its direction does not change with respect to height, and thus the baroclinicity in the atmosphere can be simulated usi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which of the following data would be most useful for describing the climate of a specific area?
A. average weekly wind speeds for 1 month
B. daily relative humidity levels for 18 months
C. total annual precipitation amounts for 2 years
D. average high and low monthly temperatures for 20 years
Answer:
|
|
sciq-10131
|
multiple_choice
|
What is the time it takes for radioactive substance to decay?
|
[
"full-life",
"deterioration rate",
"decay rate",
"half-life"
] |
D
|
Relavent Documents:
Document 0:::
Decay correction is a method of estimating the amount of radioactive decay at some set time before it was actually measured.
Example of use
Researchers often want to measure, say, medical compounds in the bodies of animals. Because these compounds are hard to measure directly, they can be chemically joined to a radionuclide; by measuring the radioactivity, one can get a good idea of how the original medical compound is being processed.
Samples may be collected and counted at short time intervals (e.g. 1 and 4 hours) but then tested for radioactivity all at once. Decay correction is a way of working out what the radioactivity would have been at the time each sample was taken, rather than at the time it was tested.
For example, the isotope copper-64, commonly used in medical research, has a half-life of 12.7 hours. If you inject a large group of animals at "time zero", but measure the radioactivity in their organs at two later times, the later groups must be "decay corrected" to adjust for the decay that has occurred between the two time points.
Mathematics
The formula for decay correcting is:

A0 = At · e^(λt)

where A0 is the original activity count at time zero, At is the activity at time t, λ is the decay constant, and t is the elapsed time.
The decay constant is λ = ln(2) / t1/2, where t1/2 is the half-life of the radioactive material of interest.
Example
Decay correction might be used this way: a group of 20 animals is injected with a compound of interest on a Monday at 10:00 a.m. The compound is chemically joined to the isotope copper-64, which has a known half-life of 12.7 hours, or 762 minutes. After one hour, the 5 animals in the "one hour" group are killed, dissected, and organs of interest are placed in sealed containers to await measurement. This is repeated for another 5 animals, at 2 hours, and again at 4 hours. At this point (say, 4:00 p.m., Monday) all the organs collected so far are measured for radioactivity (a proxy of the distribution of the compound of interest). The next day
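A small Python helper (an illustrative sketch, not taken from the article) applies the correction described above to the copper-64 example; the sample times and counts below are assumed values:

```python
import math

def decay_correct(measured_activity, elapsed_minutes, half_life_minutes):
    """Back-correct a measured activity to its value at collection time:
    A0 = At * exp(lambda * t), with lambda = ln(2) / half-life."""
    lam = math.log(2) / half_life_minutes
    return measured_activity * math.exp(lam * elapsed_minutes)

# Copper-64 half-life: 12.7 h = 762 min. A sample collected at 1:00 p.m.
# and counted at 4:00 p.m. (180 min later) reading 1000 counts would have
# read about 1178 counts at collection time.
print(round(decay_correct(1000, 180, 762)))
```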
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperatureincreases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Advanced Placement (AP) Physics B was a physics course administered by the College Board as part of its Advanced Placement program. It was equivalent to a year-long introductory university course covering Newtonian mechanics, electromagnetism, fluid mechanics, thermal physics, waves, optics, and modern physics. The course was algebra-based and heavily computational; in 2015, it was replaced by the more concept-focused AP Physics 1 and AP Physics 2.
Exam
The exam consisted of a 70-question multiple-choice section, followed by a section of 6–7 free-response questions. Each section was 90 minutes long and was worth 50% of the final score. The multiple-choice section banned calculators, while the free-response section allowed calculators and a list of common formulas. Overall, the exam was configured to approximately cover a set percentage of each of the five target categories:
Purpose
According to the College Board web site, the Physics B course provided "a foundation in physics for students in the life sciences, a pre medical career path, and some applied sciences, as well as other fields not directly related to science."
Discontinuation
Starting in the 2014–2015 school year, AP Physics B was no longer offered, and AP Physics 1 and AP Physics 2 took its place. Like AP Physics B, both are algebra-based, and both are designed to be taught as year-long courses.
Grade distribution
The grade distributions for the Physics B scores from 2010 until its discontinuation in 2014 are as follows:
Document 3:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam, with no computer-based version. ETS administered this exam three times per year: once in April, once in October, and once in November. Some graduate programs in the United States recommended taking this exam, while others required the score as part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 4:::
In nuclear science, the decay chain refers to a series of radioactive decays of different radioactive decay products as a sequential series of transformations. It is also known as a "radioactive cascade". The typical radioisotope does not decay directly to a stable state, but rather it decays to another radioisotope. Thus there is usually a series of decays until the atom has become a stable isotope, meaning that the nucleus of the atom has reached a stable state.
Decay stages are referred to by their relationship to previous or subsequent stages. A parent isotope is one that undergoes decay to form a daughter isotope. One example of this is uranium (atomic number 92) decaying into thorium (atomic number 90). The daughter isotope may be stable or it may decay to form a daughter isotope of its own. The daughter of a daughter isotope is sometimes called a granddaughter isotope. Note that the parent isotope becomes the daughter isotope, unlike in the case of a biological parent and daughter.
The time it takes for a single parent atom to decay to an atom of its daughter isotope can vary widely, not only between different parent-daughter pairs, but also randomly between identical pairings of parent and daughter isotopes. The decay of each single atom occurs spontaneously, and the decay of an initial population of identical atoms over time t follows a decaying exponential distribution, e^(−λt), where λ is called a decay constant. One of the properties of an isotope is its half-life, the time by which half of an initial number of identical parent radioisotopes can be expected statistically to have decayed to their daughters, which is inversely related to λ. Half-lives have been determined in laboratories for many radioisotopes (or radionuclides). These can range from nearly instantaneous (less than 10^−21 seconds) to more than 10^19 years.
The intermediate stages each emit the same amount of radioactivity as the original radioisotope (i.e., there is a one-to-one relationsh
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the time it takes for radioactive substance to decay?
A. full-life
B. deterioration rate
C. decay rate
D. half-life
Answer:
|
|
sciq-6679
|
multiple_choice
|
What is a region of repetitive noncoding nucleotide sequences at each end of a chromosome?
|
[
"diploid",
"cellular",
"telomere",
"cytoskeleton"
] |
C
|
Relavent Documents:
Document 0:::
Copy number variation (CNV) is a phenomenon in which sections of the genome are repeated and the number of repeats in the genome varies between individuals. Copy number variation is a type of structural variation: specifically, it is a type of duplication or deletion event that affects a considerable number of base pairs. Approximately two-thirds of the entire human genome may be composed of repeats and 4.8–9.5% of the human genome can be classified as copy number variations. In mammals, copy number variations play an important role in generating necessary variation in the population as well as disease phenotype.
Copy number variations can be generally categorized into two main groups: short repeats and long repeats. However, there are no clear boundaries between the two groups and the classification depends on the nature of the loci of interest. Short repeats include mainly dinucleotide repeats (two repeating nucleotides e.g. A-C-A-C-A-C...) and trinucleotide repeats. Long repeats include repeats of entire genes. This classification based on size of the repeat is the most obvious type of classification as size is an important factor in examining the types of mechanisms that most likely gave rise to the repeats, hence the likely effects of these repeats on phenotype.
Types and chromosomal rearrangements
One of the most well known examples of a short copy number variation is the trinucleotide repeat of the CAG base pairs in the huntingtin gene responsible for the neurological disorder Huntington's disease. For this particular case, once the CAG trinucleotide repeats more than 36 times in a trinucleotide repeat expansion, Huntington's disease will likely develop in the individual and it will likely be inherited by his or her offspring. The number of repeats of the CAG trinucleotide is inversely correlated with the age of onset of Huntington's disease. These types of short repeats are often thought to be due to errors in polymerase activity during replication includi
Document 1:::
In cell biology, chromosome territories are regions of the nucleus preferentially occupied by particular chromosomes.
Interphase chromosomes are long DNA strands that are extensively folded, and are often described as appearing like a bowl of spaghetti. The chromosome territory concept holds that despite this apparent disorder, chromosomes largely occupy defined regions of the nucleus. Most eukaryotes are thought to have chromosome territories, although the budding yeast S. cerevisiae is an exception to this.
Characteristics
Chromosome territories are spheroid with diameters on the order of one to few micrometers.
Nuclear compartments devoid of DNA called interchromatin compartments have been reported to tunnel into chromosome territories to facilitate molecular diffusion into the otherwise tightly packed chromosome-occupied regions.
History and experimental support
The concept of chromosome territories was proposed by Carl Rabl in 1885 based on studies of Salamandra maculata.
Chromosome territories have gained recognition using fluorescence labeling techniques (fluorescence in situ hybridization).
Studies of genomic proximity using techniques like chromosome conformation capture have supported the chromosome territory concept by showing that DNA-DNA contacts predominantly happen within particular chromosomes.
See also
Document 2:::
Eukaryotic chromosome fine structure refers to the structure of sequences for eukaryotic chromosomes. Some fine-structure sequences fall into more than one class, so the classes listed here are not mutually exclusive.
Chromosomal characteristics
Some sequences are required for a properly functioning chromosome:
Centromere: Used during cell division as the attachment point for the spindle fibers.
Telomere: Used to maintain chromosomal integrity by capping off the ends of the linear chromosomes. This region is a microsatellite, but its function is more specific than a simple tandem repeat.
Throughout the eukaryotic kingdom, the overall structure of chromosome ends is conserved and is characterized by the telomeric tract - a series of short G-rich repeats. This is succeeded by an extensive subtelomeric region consisting of various types and lengths of repeats - the telomere associated sequences (TAS). These regions are generally low in gene density, low in transcription, low in recombination, late replicating, are involved in protecting the end from degradation and end-to-end fusions and in completing replication. The subtelomeric repeats can rescue chromosome ends when telomerase fails, buffer subtelomerically located genes against transcriptional silencing and protect the genome from deleterious rearrangements due to ectopic recombination. They may also be involved in fillers for increasing chromosome size to some minimum threshold level necessary for chromosome stability; act as barriers against transcriptional silencing; provide a location for the adaptive amplification of genes; and be involved in secondary mechanism of telomere maintenance via recombination when telomerase activity is absent.
Structural sequences
Other sequences are used in replication or during interphase with the physical structure of the chromosome.
Ori, or Origin: Origins of replication.
MAR: Matrix attachment regions, where the DNA attaches to the nuclear matrix.
Prote
Document 3:::
Neocentromeres are new centromeres that form at a place on the chromosome that is usually not centromeric. They typically arise due to disruption of the normal centromere. These neocentromeres should not be confused with “knobs”, which were also described as “neocentromeres” in maize in the 1950s. Unlike most normal centromeres, neocentromeres do not contain satellite sequences that are highly repetitive but instead consist of unique sequences. Despite this, most neocentromeres are still able to carry out the functions of normal centromeres in regulating chromosome segregation and inheritance. This raises many questions on what is necessary versus what is sufficient for constituting a centromere.
As neocentromeres are still a relatively new phenomenon in cell biology and genetics, it may be useful to keep in mind that neocentromeres may be somewhat related to point centromeres, holocentromeres, and regional centromeres. Whereas point centromeres are defined by sequence, regional and holocentromeres are epigenetically defined by where a specific type of nucleosome (the one containing the centromeric histone H3) is located.
It may also be analytically helpful to take into account that the centromere is generally defined in relation to the kinetochore, specifically as the “part of the chromosome that links two sister chromatids together via the kinetochore”. However, the emergence of research in neocentromeres troubles this conventional definition and questions the function of a centromere beyond being a “landing pad” for kinetochore formation. This expands the scope of the centromere's function to include regulating the function of the kinetochore and the mitotic spindle.
History
Neocentromeres were discovered relatively recently. They were first observed by Andy Choo in a human karyotype clinic case in 1997, using fluorescent in situ hybridization (FISH) and cytogenetic analysis. The neocentromeres were observed on chromosome 10 of a patient, who was a child with
Document 4:::
Several chromosome regions have been defined by convenience in order to talk about gene loci. Most important is the distinction between chromosome region p and chromosome region q. The p region is represented in the shorter arm of the chromosome (p is for petit, French for small) while the q region is in the larger arm (chosen as next letter in alphabet after p). These are virtual regions that exist in all chromosomes.
These are listed as follows:
Chromatids
Arms
Centromere
Kinetochore
Telomere
Sub telomere
satellite chromosome or trabant.
NOR region
During cell division, the molecules that compose chromosomes (DNA and proteins) undergo a condensation process (called chromatin reticulum condensation) that forms a compact and small complex called a chromatid. The complexes containing the duplicated DNA molecules, the sister chromatids, are attached to each other by the centromere (where the kinetochore assembles).
If the chromosome is submetacentric (one arm longer and the other shorter), then the centromere divides the chromosome into two regions: the smaller one, which is the p region, and the bigger one, the q region. The sister chromatids will be distributed to each daughter cell at the end of cell division. If instead the chromosome is isobrachial (centromere at the centre and arms of equal length), the p and q distinction is meaningless.
At either end of a chromosome is a telomere, a cap of DNA that protects the rest of the chromosome from damage. The telomere has repetitive junk DNA and hence any enzymatic damage will not affect the coded regions. The areas of the p and q regions close to the telomeres are the subtelomeres, or subtelomeric regions. The areas closer to the centromere are the pericentronomic regions. Finally, the interstitial regions are the parts of the p and q regions that are close to neither the centromere nor the telomeres, but are roughly in the middle of p or q.
See also
Satellite chromosome
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is a region of repetitive noncoding nucleotide sequences at each end of a chromosome?
A. diploid
B. cellular
C. telomere
D. cytoskeleton
Answer:
|
|
sciq-1072
|
multiple_choice
|
Low levels of thyroid hormones in the blood cause the release of what by the hypothalamus and pituitary gland?
|
[
"bases",
"enzymes",
"acids",
"hormones"
] |
D
|
Relavent Documents:
Document 0:::
Prior to the availability of sensitive TSH assays, thyrotropin-releasing hormone (TRH) stimulation tests were relied upon for confirming and assessing the degree of suppression in suspected hyperthyroidism. Typically, this stimulation test involves determining basal TSH levels and levels 15 to 30 minutes after an intravenous bolus of TRH. Normally, TSH would rise into the concentration range measurable with less sensitive TSH assays. Third-generation TSH assays do not have this limitation, and thus TRH stimulation is generally not required when third-generation TSH assays are used to assess the degree of suppression.
Differential diagnosis use
TRH-stimulation testing, however, continues to be useful for the differential diagnosis of secondary (pituitary disorder) and tertiary (hypothalamic disorder) hypothyroidism. Patients with these conditions appear to have physiologically inactive TSH in their circulation that is recognized by TSH assays to a degree such that they may yield misleading, "euthyroid" TSH results.
Use and interpretation
• Helpful in diagnosis in patients with confusing TFTs: in primary hyperthyroidism, TSH is low and TRH administration induces little or no change in TSH levels
• In hypothyroidism due to end-organ failure, administration of TRH produces a prompt increase in TSH
• In hypothyroidism due to pituitary disease (secondary hypothyroidism), administration of TRH does not produce an increase in TSH
• In hypothyroidism due to hypothalamic disease (tertiary hypothyroidism), administration of TRH produces a delayed (60–120 minutes, rather than 15–30 minutes) increase in TSH
Process and interpretation
The TRH test involves administration of a small amount of TRH intravenously, following which levels of TSH will be measured at several subsequent time points using samples of blood taken from a peripheral vein. The test is used in the differential diagnosis of secondary and tertiary hypothyroidism. First, blood is drawn and a baseline TSH level is
Thyroid's secretory capacity (GT, also referred to as thyroid's incretory capacity, maximum thyroid hormone output, T4 output or, if calculated from serum levels of thyrotropin and thyroxine, as SPINA-GT) is the maximum stimulated amount of thyroxine that the thyroid can produce in a given time-unit (e.g. one second).
How to determine GT
Experimentally, GT can be determined by stimulating the thyroid with a high thyrotropin concentration (e.g. by means of rhTSH, i.e. recombinant human thyrotropin) and measuring its output in terms of T4 production, or by measuring the serum concentration of protein-bound iodine-131 after administration of radioiodine. These approaches are, however, costly and accompanied by significant exposure to radiation.
In vivo, GT can also be estimated from equilibrium levels of TSH and T4 or free T4. In this case it is calculated with

GT = βT · (DT + [TSH]) · (1 + K41·[TBG] + K42·[TBPA]) · [FT4] / (αT · [TSH])

or

GT = βT · (DT + [TSH]) · [TT4] / (αT · [TSH])
[TSH]: Serum thyrotropin concentration (in mIU/L or μIU/mL)
[FT4]: Serum free T4 concentration (in pmol/L)
[TT4]: Serum total T4 concentration (in nmol/L)
GT: Theoretical (apparent) secretory capacity (SPINA-GT)
αT: Dilution factor for T4 (reciprocal of apparent volume of distribution, 0.1 L−1)
βT: Clearance exponent for T4 (1.1e-6 sec−1), i.e., the reaction rate constant for degradation
K41: Binding constant T4-TBG (2e10 L/mol)
K42: Binding constant T4-TBPA (2e8 L/mol)
DT: EC50 for TSH (2.75 mU/L)
The method is based on mathematical models of thyroid homeostasis. Calculating the secretory capacity with one of these equations is an inverse problem. Therefore, certain conditions (e.g. stationarity) have to be fulfilled to deliver a reliable result.
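As a hedged sketch, the total-T4 form of the estimator can be coded directly from the constants listed above; the function name and the example inputs are illustrative assumptions, and the result is only meaningful under the stationarity caveat just mentioned:

```python
# Illustrative SPINA-GT estimate from the total-T4 form (a sketch, not a
# validated clinical implementation).
ALPHA_T = 0.1    # L^-1: dilution factor for T4
BETA_T = 1.1e-6  # s^-1: clearance exponent for T4
D_T = 2.75       # mU/L: EC50 for TSH

def spina_gt(tsh, tt4):
    """GT = beta_T * (D_T + [TSH]) * [TT4] / (alpha_T * [TSH]).
    tsh in mU/L, tt4 in nmol/L; result in nmol/s (x1000 gives pmol/s)."""
    return BETA_T * (D_T + tsh) * tt4 / (ALPHA_T * tsh)

# Assumed mid-range euthyroid values: TSH = 1.5 mU/L, TT4 = 100 nmol/L
print(spina_gt(1.5, 100.0) * 1000, "pmol/s")  # ~3.1 pmol/s
```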
Specific secretory capacity
The ratio of SPINA-GT to thyroid volume VT (as determined e.g. by ultrasonography), i.e. GT/VT, gives the specific secretory capacity.
Document 2:::
SimThyr is a free continuous dynamic simulation program for the pituitary-thyroid feedback control system. The open-source program is based on a nonlinear model of thyroid homeostasis. In addition to simulations in the time domain the software supports various methods of sensitivity analysis. Its simulation engine is multi-threaded and supports multiple processor cores. SimThyr provides a GUI, which allows for visualising time series, modifying constant structure parameters of the feedback loop (e.g. for simulation of certain diseases), storing parameter sets as XML files (referred to as "scenarios" in the software) and exporting results of simulations in various formats that are suitable for statistical software. SimThyr is intended for both educational purposes and in-silico research.
Mathematical model
The underlying model of thyroid homeostasis is based on fundamental biochemical, physiological and pharmacological principles, e.g. Michaelis-Menten kinetics, non-competitive inhibition and empirically justified kinetic parameters. The model has been validated in healthy controls and in cohorts of patients with hypothyroidism and thyrotoxicosis.
Scientific uses
Multiple studies have employed SimThyr for in silico research on the control of thyroid function.
The original version was developed to check hypotheses about the generation of pulsatile TSH release. Later and expanded versions of the software were used to develop the hypothesis of the TSH-T3 shunt in the hypothalamus-pituitary-thyroid axis, to assess the validity of calculated parameters of thyroid homeostasis (including SPINA-GT and SPINA-GD) and to study allostatic mechanisms leading to non-thyroidal illness syndrome.
SimThyr was also used to show that the release rate of thyrotropin is controlled by multiple factors other than T4 and that the relation between free T4 and TSH may be different in euthyroidism, hypothyroidism and thyrotoxicosis.
Public perception, reception and discussion of the sof
Document 3:::
Thyroid hormone binding ratio (THBR) is a thyroid function test that measures the "uptake" of T3 or T4 tracer by thyroid-binding globulin (TBG) in a given serum sample. This provides an indirect and reciprocal estimate of the available binding sites on TBG within the sample.
The results are then reported as a ratio to normal serum.
Indications
Attempts to correct for changes in thyroid binding globulin due to liver disease, protein losing states, pregnancy or various drugs
It is used to calculate the free thyroxine index (total T4 × T3 uptake), an estimate of free T4 (see the sketch after this list). The free thyroxine index may be calculated with increased diagnostic accuracy using direct TBG measurement when the total hormone concentration is abnormally elevated
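As a one-line computation, the index can be illustrated as follows (a sketch; the units and example values are assumptions, not from the article):

```python
def free_thyroxine_index(total_t4, thbr):
    """FTI = total T4 x THBR, with THBR the T3-uptake result expressed
    as a ratio to normal serum."""
    return total_t4 * thbr

# Assumed example: total T4 of 8.0 ug/dL and THBR of 1.1 give FTI = 8.8
print(free_thyroxine_index(8.0, 1.1))
```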
Examples
In patients with hyperthyroidism, there will be fewer available binding sites on TBG (due to the increased circulating T3 / T4). This will lead to an increased thyroid hormone binding ratio.
In patients with hypothyroidism, there will be more free binding sites on TBG (due to the decreased amount of circulating T3 / T4) and as such the THBR will be decreased.
In general, the THBR is high with high thyroid activity and low with low thyroid activity.
Other Conditions
Total TBG can be increased (thereby decreasing the THBR) congenitally, or in conditions such as pregnancy (period of increased estrogen) and with the treatment of certain infections such as Hepatitis C. In the latter, reduction of inflammation of the liver results in increased protein synthesis
Total TBG can be decreased (thereby increasing the THBR) congenitally, or in conditions such as liver failure, protein-losing conditions, or nephrotic conditions. Increased androgen levels will also decrease TBG synthesis, increasing THBR.
THBR can be directly altered by drugs such as:
Anticonvulsants such as phenytoin and carbamazepine
Anti-inflammatory drugs such as salicylates (aspirin) or phenylbutazone (an NSAID)
High levels of free fatty acids, commonly seen in acutely ill patients
Document 4:::
The following is a list of hormones found in Homo sapiens. Spelling is not uniform for many hormones. For example, current North American and international usage uses estrogen and gonadotropin, while British usage retains the Greek digraph in oestrogen and favours the earlier spelling gonadotrophin.
Hormone listing
Steroid
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Low levels of thyroid hormones in the blood cause the release of what by the hypothalamus and pituitary gland?
A. bases
B. enzymes
C. acids
D. hormones
Answer:
|
|
sciq-5665
|
multiple_choice
|
What do the hair cells in the cochlea release when they bend?
|
[
"receptors",
"hormones",
"lipids",
"neurotransmitters"
] |
D
|
Relavent Documents:
Document 0:::
The stria vascularis of the cochlear duct is a capillary loop in the upper portion of the spiral ligament (the outer wall of the cochlear duct). It produces endolymph for the scala media in the cochlea.
Structure
The stria vascularis is part of the lateral wall of the cochlear duct. It is a somewhat stratified epithelium containing primarily three cell types:
marginal cells, which are involved in K+ transport, and line the endolymphatic space of the scala media.
intermediate cells, which are pigment-containing cells scattered among capillaries.
basal cells, which separate the stria vascularis from the underlying spiral ligament. They are connected to basal cells with gap junctions.
The stria vascularis also contains pericytes, melanocytes, and endothelial cells. It also contains intraepithelial capillaries - it is the only epithelial tissue that is not avascular (completely lacking blood vessels and lymphatic vessels).
Function
The stria vascularis produces endolymph for the scala media, one of the three fluid-filled compartments of the cochlea. This maintains the ion balance of the endolymph that surrounds the inner hair cells and outer hair cells of the organ of Corti. It secretes large amounts of K+, and may also secrete H+.
Document 1:::
The endocochlear potential (EP; also called endolymphatic potential) is the positive voltage of 80–100 mV seen in the cochlear endolymphatic spaces. Within the cochlea, the EP varies in magnitude along its length. When a sound is presented, the endocochlear potential in the endolymph shifts either positively or negatively, depending on the stimulus. The change in the potential is called the summating potential.
With the movement of the basilar membrane, a shear force is created and a small potential is generated due to the difference in potential between the endolymph (scala media, +80 mV) and the perilymph (vestibular and tympanic ducts, 0 mV). EP is highest in the basal turn of the cochlea (95 mV in mice) and decreases in magnitude towards the apex (87 mV). In the saccule and utricle the endolymphatic potential is about +9 mV, and about +3 mV in the semicircular canals. EP is highly dependent on metabolism and ionic transport.
An acoustic stimulus produces a simultaneous change in conductance at the membrane of the receptor cell. Because there is a steep gradient (150 mV), changes in membrane conductance are accompanied by rapid influx and efflux of ions which in turn produce the receptor potential. This is known as the Battery Hypothesis. The receptor potential for each hair cell causes a release of neurotransmitter at its basal pole, which elicits excitation of the afferent nerve fibres.
Anatomy
Document 2:::
In the ventral cochlear nucleus (VCN), auditory nerve fibers enter the brain via the nerve root in the VCN. The ventral cochlear nucleus is divided into the anterior ventral (anteroventral) cochlear nucleus (AVCN) and the posterior ventral (posteroventral) cochlear nucleus (PVCN). In the VCN, auditory nerve fibers bifurcate, the ascending branch innervates the AVCN and the descending branch innervates the PVCN and then continue to the dorsal cochlear nucleus. The orderly innervation by auditory nerve fibers gives the AVCN a tonotopic organization along the dorsoventral axis. Fibers that carry information from the apex of the cochlea that are tuned to low frequencies contact neurons in the ventral part of the AVCN; those that carry information from the base of the cochlea that are tuned to high frequencies contact neurons in the dorsal part of the AVCN. Several populations of neurons populate the AVCN. Bushy cells receive input from auditory nerve fibers through particularly large endings called end bulbs of Held. They contact stellate cells through more conventional boutons.
Cell types
The anterior cochlear nucleus contains several cell types, which correspond fairly well with different physiological unit types. Additionally, these cell types generally have specific projection patterns.
Bushy cells
Named for the branching, tree-like nature of their dendritic fields, visible using Golgi's method, they receive large end bulbs of Held from auditory nerve fibers. Bushy cells are of three subtypes that project to different target nuclei in the superior olivary complex.
Globular
Globular bushy cells project large axons to the contralateral medial nucleus of the trapezoid body (MNTB), in the superior olivary complex where they synapse onto principal cells via a single calyx of Held, and several smaller collaterals synapse ipsilaterally in the posterior (PPO) and dorsolateral periolivary (DLPO) nuclei, lateral superior olive (LSO), and lateral nucleus of the tra
Document 3:::
Earwax, also known by the medical term cerumen, is a waxy substance secreted in the ear canal of humans and other mammals. Earwax can be many colors, including brown, orange, red, yellowish, and gray. Earwax protects the skin of the human ear canal, assists in cleaning and lubrication, and provides protection against bacteria, fungi, particulate matter, and water.
Major components of earwax include cerumen, produced by a type of modified sweat gland, and sebum, an oily substance. Both components are made by glands located in the outer ear canal. The chemical composition of earwax includes long chain fatty acids, both saturated and unsaturated, alcohols, squalene, and cholesterol. Earwax also contains dead skin cells and hair.
Excess or compacted cerumen is a buildup of earwax that blocks the ear canal; it can press against the eardrum or block the outer ear canal or hearing aids, potentially causing hearing loss.
Physiology
Cerumen is produced in the cartilaginous outer third portion of the ear canal. It is a mixture of secretions from sebaceous glands and less-viscous ones from modified apocrine sweat glands. The primary components of both wet and dry earwax are shed layers of skin, with, on average, 60% of the earwax consisting of keratin, 12–20% saturated and unsaturated long-chain fatty acids, alcohols, squalene and 6–9% cholesterol.
Wet or dry
There are two genetically-determined types of earwax: the wet type, which is dominant, and the dry type, which is recessive. This distinction is caused by a single base change in the "ATP-binding cassette C11 gene". Dry-type individuals are homozygous for adenine (AA) whereas wet-type requires at least one guanine (AG or GG). Dry earwax is gray or tan and brittle, and is about 20% lipid. It has a smaller concentration of lipid and pigment granules than wet earwax. Wet earwax is light brown or dark brown and has a viscous and sticky consistency, and is about 50% lipid. Wet-type earwax is associated
Document 4:::
The ampullary cupula, or cupula, is a structure in the vestibular system, providing the sense of spatial orientation.
The cupula is located within the ampullae of each of the three semicircular canals. Part of the crista ampullaris, the cupula has embedded within it hair cells that have several stereocilia associated with each kinocilium. The cupula itself is the gelatinous component of the crista ampullaris that extends from the crista to the roof of the ampullae. When the head rotates, the endolymph filling the semicircular ducts initially lags behind due to inertia. As a result, the cupula is deflected opposite the direction of head movement. As the endolymph pushes the cupula, the stereocilia are bent as well, stimulating the hair cells within the crista ampullaris. After a short time of continual rotation, however, the endolymph's acceleration normalizes with the rate of rotation of the semicircular ducts. As a result, the cupula returns to its resting position and the hair cells cease to be stimulated. This continues until the head stops rotating, which simultaneously halts semicircular duct rotation. Due to inertia, however, the endolymph continues on. As the endolymph continues to move, the cupula is once again deflected, resulting in the compensatory movements of the body when spun. Only in the first situation, as fluid rushes by the cupula, do the stimulated hair cells transmit the corresponding signal to the brain through the vestibulocochlear nerve (CN VIII). In the second, there is no stimulation, as the kinocilium can only be bent in one direction.
In their natural orientation within the head, the cupulae are located on the medial aspect of the semicircular canals. In this orientation, the kinocilia rest on the posterior aspect of the cupula.
Effects of alcohol
The Buoyancy Hypothesis posits that alcohol causes vertigo by affecting the neutral buoyancy of the cupula within the surrounding fluid called the endolymph. Linear accelerations (such as that
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do the hair cells in the cochlea release when they bend?
A. receptors
B. hormones
C. lipids
D. neurotransmitters
Answer:
|
|
sciq-11342
|
multiple_choice
|
Hiv is a retrovirus, which means it reverse transcribes its rna genome into what?
|
[
"amino acid chains",
"ribosomes",
"dna",
"atp"
] |
C
|
Relavent Documents:
Document 0:::
Retrovirology is a peer-reviewed open access scientific journal covering basic research on retroviruses. The journal was established in 2004 and is published by BioMed Central. The editors-in-chief are Johnson Mak (Griffith University, Australia) and Susan Ross (University of Illinois at Chicago); earlier, Kuan-Teh Jeang was editor-in-chief.
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2019 impact factor of 4.183, ranking it 10th out of 37 journals in the category "Virology".
Document 1:::
AIDS Research and Human Retroviruses is a peer-reviewed scientific journal focusing on HIV/AIDS research, as well as on human retroviruses and their related diseases. The journal was founded in 1983 as AIDS Research, and acquired its current name in 1987. It is published by Mary Ann Liebert, and edited by R. Keith Reeves and Lish Ndhlovu.
It is the official journal of the International Retrovirology Association.
Indexing and abstracting
AIDS Research and Human Retroviruses is indexed and abstracted in the following databases:
External links
AIDS Research and Human Retroviruses website
International Retrovirology Association website
Academic journals established in 1983
Immunology journals
Mary Ann Liebert academic journals
English-language journals
HIV/AIDS journals
Academic journals associated with learned and professional societies
Document 2:::
Pol (DNA polymerase) refers to a gene in retroviruses, or the protein produced by that gene.
Products of pol include:
Reverse transcriptase
Common to all retroviruses, this enzyme transcribes the viral RNA into double-stranded DNA.
Integrase
This enzyme integrates the DNA produced by reverse transcriptase into the host's genome.
Protease
A protease is any enzyme that cuts proteins into segments. HIV's gag and pol genes do not produce their proteins in their final form, but as larger combination proteins; the specific protease used by HIV cleaves these into separate functional units. Protease inhibitor drugs block this step.
See also
Gag/pol translational readthrough site
External links
Viral structural proteins
Document 3:::
Virological failure is defined as the failure to meet a specific target of antiviral drug treatment, namely the non-attainment or non-maintenance of an undetectable viral load, particularly in the treatment of HIV. Because antiretroviral therapy is evaluated by measuring the number of copies of the virus in blood samples, the concept of virological failure gives a way to modify treatment of this disease.
Virological failure in HIV is characterized by a confirmed viral load above 400 copies/mL after 24 weeks of treatment, or above 50 copies/mL after 48 weeks, or, even for individuals who have reached complete viral suppression, by a confirmed rebound of viral load above 400 copies/mL. Non-adherence to HIV antiretroviral therapy increases the risk of incomplete viral suppression and drug resistance (Bangsberg, D. R., Moss, A. R., & Deeks, S. G. (2004)).
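The thresholds above amount to a small decision rule, sketched below in Python; this is an illustration of the stated criteria only, not a clinical tool, and the function name and example inputs are assumptions:

```python
def virological_failure(viral_load, weeks_on_treatment, previously_suppressed=False):
    """viral_load in copies/mL. Encodes the thresholds stated above:
    > 400 copies/mL after 24 weeks, > 50 copies/mL after 48 weeks, or a
    confirmed rebound > 400 copies/mL after complete suppression."""
    if previously_suppressed:
        return viral_load > 400
    if weeks_on_treatment >= 48:
        return viral_load > 50
    if weeks_on_treatment >= 24:
        return viral_load > 400
    return False  # too early on treatment to call failure

print(virological_failure(600, 24))  # True
print(virological_failure(80, 48))   # True
print(virological_failure(30, 48))   # False
```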
After the institution of antiretroviral treatment, three aspects of the clinical course essentially characterize therapeutic failure or success: the evolution of viral load, the T-CD4+ lymphocyte count, and the occurrence of clinical events.
Immunological failure is characterized by a progressive decline in T-CD4+ lymphocyte counts. It should be considered, however, that there is wide biological variability (intra-individual and interindividual) in the counts of these cells, as well as laboratory variability related to the technical reproducibility of the test. There is also circadian variation of CD4 levels, and it is therefore recommended that the sample for the test be obtained in the morning. The variability related to the factors described above may result in oscillations of up to 25% in absolute CD4 T-lymphocyte counts with no clinical significance. It is therefore recommended that only reductions greater than 25% in T-CD4+ lymphocyte counts be treated as suspected immunological failure, and that the finding be confirmed.
Document 4:::
Current HIV Research is a peer-reviewed scientific journal focusing on HIV/AIDS research, established in 2003. The journal is edited by Yuntao Wu and is published by Bentham Science Publishers. It has an impact factor of 1.581
Indexing
Current HIV Research is abstracted and indexing in the following databases and publications:
External links
HIV/AIDS journals
Academic journals established in 2003
Bentham Science Publishers academic journals
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Hiv is a retrovirus, which means it reverse transcribes its rna genome into what?
A. amino acid chains
B. ribosomes
C. dna
D. atp
Answer:
|
|
sciq-10858
|
multiple_choice
|
What system of organs delivers blood to all cells of the body?
|
[
"cardiovascular",
"integumentary",
"gastrointestinal",
"respiratory"
] |
A
|
Relavent Documents:
Document 0:::
A biological system is a complex network which connects several biologically relevant entities. Biological organization spans several scales and is determined by different structures depending on what the system is. Examples of biological systems at the macro scale are populations of organisms. On the organ and tissue scale in mammals and other animals, examples include the circulatory system, the respiratory system, and the nervous system. On the micro to the nanoscopic scale, examples of biological systems are cells, organelles, macromolecular complexes and regulatory pathways. A biological system is not to be confused with a living system, such as a living organism.
Organ and tissue systems
These specific systems are widely studied in human anatomy and are also present in many other animals.
Respiratory system: the organs used for breathing, the pharynx, larynx, bronchi, lungs and diaphragm.
Digestive system: digestion and processing food with salivary glands, oesophagus, stomach, liver, gallbladder, pancreas, intestines, rectum and anus.
Cardiovascular system (heart and circulatory system): pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels.
Urinary system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine.
Integumentary system: skin, hair, fat, and nails.
Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons.
Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroid and adrenals, i.e., adrenal glands.
Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream; includes the lymph and the nodes and vessels. The lymphatic system includes functions including immune responses and development of antibodies.
Immune system: protects the organism from
Document 1:::
In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from cells of the same type that act together in a function. Tissues of different types combine to form an organ, which has a specific function. The intestinal wall, for example, is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system.
An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs.
The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body.
Animals
Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. The sam
Document 2:::
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
H2.00.05.2.00001: Striated muscle tissue
H2.00.06.0.00001: Nerve tissue
H2.00.06.1.00001: Neuron
H2.00.06.2.00001: Synapse
H2.00.06.2.00001: Neuroglia
h3.01: Bones
h3.02: Joints
h3.03: Muscles
h3.04: Alimentary system
h3.05: Respiratory system
h3.06: Urinary system
h3.07: Genital system
h3.08:
Document 3:::
The lymphatic system, or lymphoid system, is an organ system in vertebrates that is part of the immune system, and complementary to the circulatory system. It consists of a large network of lymphatic vessels, lymph nodes, lymphoid organs, lymphoid tissues and lymph. Lymph is a clear fluid carried by the lymphatic vessels back to the heart for re-circulation. (The Latin word for lymph, lympha, refers to the deity of fresh water, "Lympha").
Unlike the circulatory system that is a closed system, the lymphatic system is open. The human circulatory system processes an average of 20 litres of blood per day through capillary filtration, which removes plasma from the blood. Roughly 17 litres of the filtered blood is reabsorbed directly into the blood vessels, while the remaining three litres are left in the interstitial fluid. One of the main functions of the lymphatic system is to provide an accessory return route to the blood for the surplus three litres.
The other main function is that of immune defense. Lymph is very similar to blood plasma, in that it contains waste products and cellular debris, together with bacteria and proteins. The cells of the lymph are mostly lymphocytes. Associated lymphoid organs are composed of lymphoid tissue, and are the sites either of lymphocyte production or of lymphocyte activation. These include the lymph nodes (where the highest lymphocyte concentration is found), the spleen, the thymus, and the tonsils. Lymphocytes are initially generated in the bone marrow. The lymphoid organs also contain other types of cells such as stromal cells for support. Lymphoid tissue is also associated with mucosas such as mucosa-associated lymphoid tissue (MALT).
Fluid from circulating blood leaks into the tissues of the body by capillary action, carrying nutrients to the cells. The fluid bathes the tissues as interstitial fluid, collecting waste products, bacteria, and damaged cells, and then drains as lymph into the lymphatic capillaries and lymphatic
Document 4:::
The human body is the structure of a human being. It is composed of many different types of cells that together create tissues and subsequently organs and then organ systems. They ensure homeostasis and the viability of the human body.
It comprises a head, hair, neck, torso (which includes the thorax and abdomen), arms and hands, legs and feet.
The study of the human body includes anatomy, physiology, histology and embryology. The body varies anatomically in known ways. Physiology focuses on the systems and organs of the human body and their functions. Many systems and mechanisms interact in order to maintain homeostasis, with safe levels of substances such as sugar and oxygen in the blood.
The body is studied by health professionals, physiologists, anatomists, and artists to assist them in their work.
Composition
The human body is composed of elements including hydrogen, oxygen, carbon, calcium and phosphorus. These elements reside in trillions of cells and non-cellular components of the body.
The adult male body is about 60% water for a total water content of some . This is made up of about of extracellular fluid including about of blood plasma and about of interstitial fluid, and about of fluid inside cells. The content, acidity and composition of the water inside and outside cells is carefully maintained. The main electrolytes in body water outside cells are sodium and chloride, whereas within cells it is potassium and other phosphates.
Cells
The body contains trillions of cells, the fundamental unit of life. At maturity, there are roughly 30–37 trillion cells in the body, an estimate arrived at by totaling the cell numbers of all the organs of the body and cell types. The body is also host to about the same number of non-human cells as well as multicellular organisms which reside in the gastrointestinal tract and on the skin. Not all parts of the body are made from cells. Cells sit in an extracellular matrix that consists of proteins such as collagen,
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What system of organs delivers blood to all cells of the body?
A. cardiovascular
B. integumentary
C. gastrointestinal
D. respiratory
Answer:
|
|
sciq-8341
|
multiple_choice
|
Nucleic acids are polymers made of monomers called what?
|
[
"carotenoids",
"loops",
"nucleotides",
"peptides"
] |
C
|
Relavent Documents:
Document 0:::
Biomolecular structure is the intricate folded, three-dimensional shape that is formed by a molecule of protein, DNA, or RNA, and that is important to its function. The structure of these molecules may be considered at any of several length scales ranging from the level of individual atoms to the relationships among entire protein subunits. This useful distinction among scales is often expressed as a decomposition of molecular structure into four levels: primary, secondary, tertiary, and quaternary. The scaffold for this multiscale organization of the molecule arises at the secondary level, where the fundamental structural elements are the molecule's various hydrogen bonds. This leads to several recognizable domains of protein structure and nucleic acid structure, including such secondary-structure features as alpha helixes and beta sheets for proteins, and hairpin loops, bulges, and internal loops for nucleic acids.
The terms primary, secondary, tertiary, and quaternary structure were introduced by Kaj Ulrik Linderstrøm-Lang in his 1951 Lane Medical Lectures at Stanford University.
Primary structure
The primary structure of a biopolymer is the exact specification of its atomic composition and the chemical bonds connecting those atoms (including stereochemistry). For a typical unbranched, un-crosslinked biopolymer (such as a molecule of a typical intracellular protein, or of DNA or RNA), the primary structure is equivalent to specifying the sequence of its monomeric subunits, such as amino acids or nucleotides.
The primary structure of a protein is reported starting from the amino N-terminus to the carboxyl C-terminus, while the primary structure of DNA or RNA molecule is known as the nucleic acid sequence reported from the 5' end to the 3' end.
The nucleic acid sequence refers to the exact sequence of nucleotides that comprise the whole molecule. Often, the primary structure encodes sequence motifs that are of functional importance. Some examples of such motif
Document 1:::
A nucleic acid sequence is a succession of bases within the nucleotides forming alleles within a DNA (using GACT) or RNA (GACU) molecule. This succession is denoted by a series of a set of five different letters that indicate the order of the nucleotides. By convention, sequences are usually presented from the 5' end to the 3' end. For DNA, with its double helix, there are two possible directions for the notated sequence; of these two, the sense strand is used. Because nucleic acids are normally linear (unbranched) polymers, specifying the sequence is equivalent to defining the covalent structure of the entire molecule. For this reason, the nucleic acid sequence is also termed the primary structure.
The sequence represents biological information. Biological deoxyribonucleic acid represents the information which directs the functions of an organism.
Nucleic acids also have a secondary structure and tertiary structure. Primary structure is sometimes mistakenly referred to as "primary sequence". However, there is no parallel concept of secondary or tertiary sequence.
Nucleotides
Nucleic acids consist of a chain of linked units called nucleotides. Each nucleotide consists of three subunits: a phosphate group and a sugar (ribose in the case of RNA, deoxyribose in DNA) make up the backbone of the nucleic acid strand, and attached to the sugar is one of a set of nucleobases. The nucleobases are important in base pairing of strands to form higher-level secondary and tertiary structures such as the famed double helix.
The possible letters are A, C, G, and T, representing the four nucleotide bases of a DNA strand – adenine, cytosine, guanine, thymine – covalently linked to a phosphodiester backbone. In the typical case, the sequences are printed abutting one another without gaps, as in the sequence AAAGTCTGAC, read left to right in the 5' to 3' direction. With regards to transcription, a sequence is on the coding strand if it has the same order as the transcribed RNA.
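A minimal Python sketch of these conventions, applied to the example sequence AAAGTCTGAC from above: the template strand is the reverse complement of the coding strand, and the transcribed RNA matches the coding strand with T replaced by U.

```python
# Sequence conventions in code: sequences are written 5' to 3'; the template
# strand is the reverse complement of the coding strand, and the transcribed
# RNA has the coding strand's order with T replaced by U.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(dna: str) -> str:
    """Return the complementary strand, read 5' to 3'."""
    return "".join(COMPLEMENT[base] for base in reversed(dna))

def transcribe(coding_strand: str) -> str:
    """RNA with the same order as the coding strand, T -> U."""
    return coding_strand.replace("T", "U")

coding = "AAAGTCTGAC"               # the example sequence above, 5' to 3'
print(reverse_complement(coding))  # template strand: GTCAGACTTT
print(transcribe(coding))          # transcribed RNA: AAAGUCUGAC
```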
Document 2:::
In molecular biology, a polynucleotide () is a biopolymer composed of nucleotide monomers that are covalently bonded in a chain. DNA (deoxyribonucleic acid) and RNA (ribonucleic acid) are examples of polynucleotides with distinct biological functions. DNA consists of two chains of polynucleotides, with each chain in the form of a helix (like a spiral staircase).
Sequence
Although DNA and RNA do not generally occur in the same polynucleotide, the four species of nucleotides may occur in any order in the chain. The sequence of DNA or RNA species for a given polynucleotide is the main factor determining its function in a living organism or a scientific experiment.
Polynucleotides in organisms
Polynucleotides occur naturally in all living organisms. The genome of an organism consists of complementary pairs of enormously long polynucleotides wound around each other in the form of a double helix. Polynucleotides have a variety of other roles in organisms.
Polynucleotides in scientific experiments
Polynucleotides are used in biochemical experiments such as polymerase chain reaction (PCR) or DNA sequencing. Polynucleotides are made artificially from oligonucleotides, smaller nucleotide chains with generally fewer than 30 subunits. A polymerase enzyme is used to extend the chain by adding nucleotides according to a pattern specified by the scientist.
Prebiotic condensation of nucleobases with ribose
In order to understand how life arose, knowledge is required of the chemical pathways that permit formation of the key building blocks of life under plausible prebiotic conditions. According to the RNA world hypothesis, free-floating ribonucleotides were present in the primitive soup. These were the fundamental molecules that combined in series to form RNA. Molecules as complex as RNA must have arisen from small molecules whose reactivity was governed by physico-chemical processes. RNA is composed of purine and pyrimidine nucleotides, both of which are necessary for re
Document 3:::
Experimental approaches of determining the structure of nucleic acids, such as RNA and DNA, can be largely classified into biophysical and biochemical methods. Biophysical methods use the fundamental physical properties of molecules for structure determination, including X-ray crystallography, NMR and cryo-EM. Biochemical methods exploit the chemical properties of nucleic acids using specific reagents and conditions to assay the structure of nucleic acids. Such methods may involve chemical probing with specific reagents, or rely on native or analogue chemistry. Different experimental approaches have unique merits and are suitable for different experimental purposes.
Biophysical methods
X-ray crystallography
X-ray crystallography is not common for nucleic acids alone, since neither DNA nor RNA readily form crystals. This is due to the greater degree of intrinsic disorder and dynamism in nucleic acid structures and the negatively charged (deoxy)ribose-phosphate backbones, which repel each other in close proximity. Therefore, crystallized nucleic acids tend to be complexed with a protein of interest to provide structural order and neutralize the negative charge.
Nuclear magnetic resonance spectroscopy (NMR)
Nucleic acid NMR is the use of NMR spectroscopy to obtain information about the structure and dynamics of nucleic acid molecules, such as DNA or RNA. As of 2003, nearly half of all known RNA structures had been determined by NMR spectroscopy.
Nucleic acid NMR uses similar techniques as protein NMR, but has several differences. Nucleic acids have a smaller percentage of hydrogen atoms, which are the atoms usually observed in NMR, and because nucleic acid double helices are stiff and roughly linear, they do not fold back on themselves to give "long-range" correlations. The types of NMR usually done with nucleic acids are 1H or proton NMR, 13C NMR, 15N NMR, and 31P NMR. Two-dimensional NMR methods are almost always used, such as correlation spectroscopy (COSY
Document 4:::
Nucleic acid analogues are compounds which are analogous (structurally similar) to naturally occurring RNA and DNA, used in medicine and in molecular biology research.
Nucleic acids are chains of nucleotides, which are composed of three parts: a phosphate backbone, a pentose sugar, either ribose or deoxyribose, and one of four nucleobases.
An analogue may have any of these altered. Typically the analogue nucleobases confer, among other things, different base pairing and base stacking properties. Examples include universal bases, which can pair with all four canonical bases, and phosphate-sugar backbone analogues such as PNA, which affect the properties of the chain (PNA can even form a triple helix).
Nucleic acid analogues are also called Xeno Nucleic Acid and represent one of the main pillars of xenobiology, the design of new-to-nature forms of life based on alternative biochemistries.
Artificial nucleic acids include peptide nucleic acid (PNA), Morpholino and locked nucleic acid (LNA), as well as glycol nucleic acid (GNA), threose nucleic acid (TNA) and hexitol nucleic acids (HNA). Each of these is distinguished from naturally occurring DNA or RNA by changes to the backbone of the molecule.
In May 2014, researchers announced that they had successfully introduced two new artificial nucleotides into bacterial DNA, and by including individual artificial nucleotides in the culture media, were able to passage the bacteria 24 times; they did not create mRNA or proteins able to use the artificial nucleotides. The artificial nucleotides featured 2 fused aromatic rings.
Medicine
Several nucleoside analogues are used as antiviral or anticancer agents. The viral polymerase incorporates these compounds with non-canonical bases. These compounds are activated in the cells by being converted into nucleotides, they are administered as nucleosides since charged nucleotides cannot easily cross cell membranes.
Molecular biology
Nucleic acid analogues are used in molecular b
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Nucleic acids are polymers made of monomers called what?
A. carotenoids
B. loops
C. nucleotides
D. peptides
Answer:
|
|
sciq-5754
|
multiple_choice
|
What state of matter exists if particles do not have enough kinetic energy to overcome the force of attraction between them?
|
[
"plasma",
"gas",
"liquid",
"solid"
] |
D
|
Relavent Documents:
Document 0:::
In classical physics and general chemistry, matter is any substance that has mass and takes up space by having volume. All everyday objects that can be touched are ultimately composed of atoms, which are made up of interacting subatomic particles, and in everyday as well as scientific usage, matter generally includes atoms and anything made up of them, and any particles (or combination of particles) that act as if they have both rest mass and volume. However it does not include massless particles such as photons, or other energy phenomena or waves such as light or heat. Matter exists in various states (also known as phases). These include classical everyday phases such as solid, liquid, and gas – for example water exists as ice, liquid water, and gaseous steam – but other states are possible, including plasma, Bose–Einstein condensates, fermionic condensates, and quark–gluon plasma.
Usually atoms can be imagined as a nucleus of protons and neutrons, and a surrounding "cloud" of orbiting electrons which "take up space". However this is only somewhat correct, because subatomic particles and their properties are governed by their quantum nature, which means they do not act as everyday objects appear to act – they can act like waves as well as particles, and they do not have well-defined sizes or positions. In the Standard Model of particle physics, matter is not a fundamental concept because the elementary constituents of atoms are quantum entities which do not have an inherent "size" or "volume" in any everyday sense of the word. Due to the exclusion principle and other fundamental interactions, some "point particles" known as fermions (quarks, leptons), and many composites and atoms, are effectively forced to keep a distance from other particles under everyday conditions; this creates the property of matter which appears to us as matter taking up space.
For much of the history of the natural sciences people have contemplated the exact nature of matter. The idea tha
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Applied physics is the application of physics to solve scientific or engineering problems. It is usually considered a bridge or a connection between physics and engineering.
"Applied" is distinguished from "pure" by a subtle combination of factors, such as the motivation and attitude of researchers and the nature of the relationship to the technology or science that may be affected by the work. Applied physics is rooted in the fundamental truths and basic concepts of the physical sciences but is concerned with the utilization of scientific principles in practical devices and systems and with the application of physics in other areas of science and high technology.
Examples of research and development areas
Accelerator physics
Acoustics
Atmospheric physics
Biophysics
Brain–computer interfacing
Chemistry
Chemical physics
Differentiable programming
Artificial intelligence
Scientific computing
Engineering physics
Chemical engineering
Electrical engineering
Electronics
Sensors
Transistors
Materials science and engineering
Metamaterials
Nanotechnology
Semiconductors
Thin films
Mechanical engineering
Aerospace engineering
Astrodynamics
Electromagnetic propulsion
Fluid mechanics
Military engineering
Lidar
Radar
Sonar
Stealth technology
Nuclear engineering
Fission reactors
Fusion reactors
Optical engineering
Photonics
Cavity optomechanics
Lasers
Photonic crystals
Geophysics
Materials physics
Medical physics
Health physics
Radiation dosimetry
Medical imaging
Magnetic resonance imaging
Radiation therapy
Microscopy
Scanning probe microscopy
Atomic force microscopy
Scanning tunneling microscopy
Scanning electron microscopy
Transmission electron microscopy
Nuclear physics
Fission
Fusion
Optical physics
Nonlinear optics
Quantum optics
Plasma physics
Quantum technology
Quantum computing
Quantum cryptography
Renewable energy
Space physics
Spectroscopy
See also
Applied science
Applied mathematics
Engineering
Engineering Physics
High Technology
Document 3:::
In non-technical terms, M-theory presents an idea about the basic substance of the universe. As of 2023, science has produced no experimental evidence to support the conclusion that M-theory is a description of the real world. Although a complete mathematical formulation of M-theory is not known, the general approach is the leading contender for a universal "Theory of Everything" that unifies gravity with other forces such as electromagnetism. M-theory aims to unify quantum mechanics with general relativity's gravitational force in a mathematically consistent way. In comparison, other theories such as loop quantum gravity are considered by many physicists and researchers to be less elegant, because they posit gravity to be completely different from forces such as the electromagnetic force.
Background
In the early years of the 20th century, the atom – long believed to be the smallest building-block of matter – was proven to consist of even smaller components called protons, neutrons and electrons, which are known as subatomic particles. Other subatomic particles began being discovered in the 1960s. In the 1970s, it was discovered that protons and neutrons (and other hadrons) are themselves made up of smaller particles called quarks. The Standard Model is the set of rules that describes the interactions of these particles.
In the 1980s, a new mathematical model of theoretical physics, called string theory, emerged. It showed how all the different subatomic particles known to science could be constructed by hypothetical one-dimensional "strings", infinitesimal building-blocks that have only the dimension of length, but not height or width.
However, for string theory to be mathematically consistent, the strings must be in a universe of ten dimensions. This contradicts the experience that our real universe has four dimensions: three space dimensions (height, width, and length) and one time dimension. To "save" their theory, string theorists therefore added the exp
Document 4:::
In physics, action at a distance is the concept that an object's motion can be affected by another object without the two being in physical contact (as in mechanical contact). That is, it is the non-local interaction of objects that are separated in space. Coulomb's law and Newton's law of universal gravitation are based on action at a distance.
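A minimal numerical sketch of these two inverse-square laws, using standard constants and an assumed pair of protons at an assumed separation of one angstrom:

```python
# Both laws give central forces scaling as 1/r^2 across empty space.
# Standard constants; the proton pair and separation are assumed for
# illustration only.
G = 6.674e-11    # gravitational constant, N m^2 kg^-2
K = 8.988e9      # Coulomb constant, N m^2 C^-2
m_p = 1.673e-27  # proton mass, kg
q_p = 1.602e-19  # elementary charge, C
r = 1.0e-10      # separation, m (about one angstrom)

F_gravity = G * m_p**2 / r**2   # Newton's law of universal gravitation
F_coulomb = K * q_p**2 / r**2   # Coulomb's law

print(f"gravity: {F_gravity:.2e} N, Coulomb: {F_coulomb:.2e} N")
print(f"Coulomb/gravity ratio ~ {F_coulomb / F_gravity:.1e}")  # ~1e36
```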
Historically, action at a distance was the earliest scientific model for gravity and electricity and it continues to be useful in many practical cases. In the 19th and 20th centuries, field models arose to explain these phenomena with more precision. The discovery of electrons and of special relativity led to new action-at-a-distance models providing alternatives to field theories.
Categories of action
In the study of mechanics, action at a distance is one of three fundamental actions on matter that cause motion. The other two are direct impact (elastic or inelastic collisions) and actions in a continuous medium as in fluid mechanics or solid mechanics.
Historically, physical explanations for particular phenomena have moved between these three categories over time as new models were developed.
Action at a distance and actions in a continuous medium may be easily distinguished when the medium dynamics are visible, like waves in water or in an elastic solid. In the case of electricity or gravity, there is no medium required. In the nineteenth century, criteria like the effect of actions on intervening matter, the observation of a time delay, the apparent storage of energy, or even the possibility of a plausible mechanical model for action transmission were all accepted as evidence against action at a distance. Aether theories were alternative proposals to replace apparent action-at-a-distance in gravity and electromagnetism, in terms of continuous action inside an (invisible) medium called "aether".
Roles
The concept of action at a distance acts in multiple roles in physics and it can co-exist with other mode
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What state of matter exists if particles do not have enough kinetic energy to overcome the force of attraction between them?
A. plasma
B. gas
C. liquid
D. solid
Answer:
|
|
sciq-9950
|
multiple_choice
|
What force holds together both types of star clusters?
|
[
"weight",
"magnetism",
"gravity",
"inertia"
] |
C
|
Relavent Documents:
Document 0:::
Stellar dynamics is the branch of astrophysics which describes in a statistical way the collective motions of stars subject to their mutual gravity. The essential difference from celestial mechanics is that the number of bodies is far larger.
Typical galaxies have upwards of millions of macroscopic gravitating bodies and countless numbers of neutrinos and perhaps other dark microscopic bodies. Also, each star contributes more or less equally to the total gravitational field, whereas in celestial mechanics the pull of a massive body dominates any satellite orbits.
Connection with fluid dynamics
Stellar dynamics also has connections to the field of plasma physics. The two fields underwent significant development during a similar time period in the early 20th century, and both borrow mathematical formalism originally developed in the field of fluid mechanics.
In accretion disks and stellar surfaces, the dense plasma or gas particles collide very frequently, and collisions result in equipartition and perhaps viscosity under a magnetic field. We see various sizes for accretion disks and stellar atmospheres, both made of an enormous number of microscopic particles,
at stellar surfaces,
around Sun-like stars or km-sized stellar black holes,
around million solar mass black holes (about AU-sized) in centres of galaxies.
The system crossing time scale is long in stellar dynamics, where it is handy to note that
The long timescale means that, unlike gas particles in accretion disks, stars in galaxy disks very rarely see a collision in their stellar lifetime. However, galaxies collide occasionally in galaxy clusters, and stars have close encounters occasionally in star clusters.
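A rough numerical sketch of this comparison, using the standard two-body relaxation rule of thumb t_relax ≈ N/(8 ln N) · t_cross with assumed, order-of-magnitude values for N and the crossing times:

```python
import math

# Two-body relaxation rule of thumb: t_relax ~ N / (8 ln N) * t_cross.
# The N values and crossing times below are assumed order-of-magnitude figures.
def relaxation_time(n_bodies: float, t_cross_yr: float) -> float:
    return n_bodies / (8.0 * math.log(n_bodies)) * t_cross_yr

# Galaxy disk: N ~ 1e11 stars, crossing time ~ 1e8 yr.
print(f"galaxy disk:      t_relax ~ {relaxation_time(1e11, 1e8):.1e} yr")
# Globular cluster: N ~ 1e6 stars, crossing time ~ 1e6 yr.
print(f"globular cluster: t_relax ~ {relaxation_time(1e6, 1e6):.1e} yr")
# The galactic value vastly exceeds the age of the universe (~1.4e10 yr), so
# disk stars essentially never relax by encounters, while old star clusters do.
```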
As a rule of thumb, the typical scales concerned (see the Upper Portion of P.C.Budassi's Logarithmic Map of the Universe) are
for M13 Star Cluster,
for M31 Disk Galaxy,
for neutrinos in the Bullet Clusters, which is a merging system of N = 1000 galaxies.
Connection with Kepler problem and 3-body problem
Document 1:::
The Plummer model or Plummer sphere is a density law that was first used by H. C. Plummer to fit observations of globular clusters. It is now often used as a toy model in N-body simulations of stellar systems.
Description of the model
The Plummer 3-dimensional density profile is given by
ρ(r) = (3M / (4πa³)) (1 + r²/a²)^(−5/2),
where M is the total mass of the cluster, and a is the Plummer radius, a scale parameter that sets the size of the cluster core. The corresponding potential is
Φ(r) = −GM / √(r² + a²),
where G is Newton's gravitational constant. The velocity dispersion is
σ² = GM / (6 √(r² + a²)).
The isotropic distribution function reads
f(E) ∝ (−E)^(7/2)
if E < 0, and f(E) = 0 otherwise, where E = v²/2 + Φ(r) is the specific energy.
Properties
The mass enclosed within radius r is given by
M(<r) = M r³ / (r² + a²)^(3/2).
Many other properties of the Plummer model are described in Herwig Dejonghe's comprehensive article.
Core radius r_c, where the surface density drops to half its central value, is at r_c = a √(√2 − 1) ≈ 0.64 a.
Half-mass radius is r_h = a / √(2^(2/3) − 1) ≈ 1.305 a.
Virial radius is r_V = (16/(3π)) a ≈ 1.7 a.
The 2D surface density is:
Σ(R) = M a² / (π (a² + R²)²),
and hence the 2D projected mass profile is:
M(<R) = M R² / (R² + a²).
In astronomy, it is convenient to define the 2D half-mass radius R_1/2, which is the radius where the 2D projected mass profile is half of the total mass: M(<R_1/2) = M/2.
For the Plummer profile: R_1/2 = a.
The escape velocity at any point is
v_esc(r) = √(−2 Φ(r)) = √(2GM / √(r² + a²)).
For bound orbits, the radial turning points of an orbit characterized by specific energy E and specific angular momentum L are given by the positive roots of a cubic equation in the radius. This equation has three real roots: two positive and one negative, given that L < L_c, where L_c is the specific angular momentum for a circular orbit of the same energy. L_c can be calculated from the single real root of the discriminant of the cubic equation, which is itself another cubic equation, with parameters made dimensionless in Henon units.
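A minimal Python sketch of the Plummer formulas above, with assumed illustrative values for G, M and a; it checks numerically that the half-mass radius is about 1.305 a.

```python
import math

# Plummer model formulas; the G, M and a values are assumed for illustration.
G = 4.30091e-3  # gravitational constant in pc (km/s)^2 / Msun
M = 1.0e5       # total cluster mass, solar masses (assumed)
a = 1.0         # Plummer radius, pc (assumed)

def density(r):
    """rho(r) = 3M / (4 pi a^3) * (1 + r^2/a^2)^(-5/2)"""
    return 3.0 * M / (4.0 * math.pi * a**3) * (1.0 + (r / a) ** 2) ** -2.5

def potential(r):
    """Phi(r) = -G M / sqrt(r^2 + a^2)"""
    return -G * M / math.sqrt(r**2 + a**2)

def enclosed_mass(r):
    """M(<r) = M r^3 / (r^2 + a^2)^(3/2)"""
    return M * r**3 / (r**2 + a**2) ** 1.5

def escape_velocity(r):
    """v_esc = sqrt(-2 Phi(r))"""
    return math.sqrt(-2.0 * potential(r))

# Half-mass radius: a / sqrt(2^(2/3) - 1) ~ 1.305 a should enclose M/2.
r_half = a / math.sqrt(2.0 ** (2.0 / 3.0) - 1.0)
assert abs(enclosed_mass(r_half) / M - 0.5) < 1e-9
print(f"r_half = {r_half:.3f} pc, v_esc(0) = {escape_velocity(0.0):.1f} km/s")
```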
Applications
The Plummer model comes closest to representing the observed density profiles of star clusters, although the rapid falloff of the density at large radii (ρ ∝ r^(−5)) is not a good description of these systems.
Document 2:::
Astrophysics is a science that employs the methods and principles of physics and chemistry in the study of astronomical objects and phenomena. As one of the founders of the discipline, James Keeler, said, Astrophysics "seeks to ascertain the nature of the heavenly bodies, rather than their positions or motions in space–what they are, rather than where they are." Among the subjects studied are the Sun (solar physics), other stars, galaxies, extrasolar planets, the interstellar medium and the cosmic microwave background. Emissions from these objects are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, astrophysicists apply concepts and methods from many disciplines of physics, including classical mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics.
In practice, modern astronomical research often involves a substantial amount of work in the realms of theoretical and observational physics. Some areas of study for astrophysicists include their attempts to determine the properties of dark matter, dark energy, black holes, and other celestial bodies; and the origin and ultimate fate of the universe. Topics also studied by theoretical astrophysicists include Solar System formation and evolution; stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity, special relativity, quantum and physical cosmology, including string cosmology and astroparticle physics.
History
Astronomy is an ancient science, long separated from the study of terrestrial physics. In the Aristotelian worldview, bodies in the sky appeared to be unchanging spheres whose only motion was uniform motion in a circle, while the earthl
Document 3:::
The red clump is a clustering of red giants in the Hertzsprung–Russell diagram at around 5,000 K and absolute magnitude (MV) +0.5, slightly hotter than most red-giant-branch stars of the same luminosity. It is visible as a denser region of the red-giant branch or a bulge towards hotter temperatures. It is prominent in many galactic open clusters, and it is also noticeable in many intermediate-age globular clusters and in nearby field stars (e.g. the Hipparcos stars).
The red clump giants are cool horizontal branch stars, stars originally similar to the Sun which have undergone a helium flash and are now fusing helium in their cores.
Properties
Red clump stellar properties vary depending on their origin, most notably on the metallicity of the stars, but typically they have early K spectral types and effective temperatures around 5,000 K. The absolute visual magnitude of red clump giants near the sun has been measured at an average of +0.81 with metallicities between −0.6 and +0.4 dex.
There is a considerable spread in the properties of red clump stars even within a single population of similar stars such as an open cluster. This is partly due to the natural variation in temperatures and luminosities of horizontal branch stars when they form and as they evolve, and partly due to the presence of other stars with similar properties. Although red clump stars are generally hotter than red-giant-branch stars, the two regions overlap and the status of individual stars can only be assigned with a detailed chemical abundance study.
Evolution
Modelling of the horizontal branch has shown that stars have a strong tendency to cluster at the cool end of the zero age horizontal branch (ZAHB). This tendency is weaker in low metallicity stars, so the red clump is usually more prominent in metal-rich clusters. However, there are other effects, and there are well-populated red clumps in some metal-poor globular clusters.
Stars with a similar mass to the sun evolve towards
Document 4:::
Galactic clusters are gravitationally bound large-scale structures of multiple galaxies. The evolution of these aggregates is determined by the time and manner of formation and the process of how their structures and constituents have been changing with time. Gamow (1952) and Weizsäcker (1951) showed that the observed rotations of galaxies are important for cosmology. They postulated that the rotation of galaxies might be a clue to the physical conditions under which these systems formed. Thus, understanding the distribution of spatial orientations of the spin vectors of galaxies is critical to understanding the origin of the angular momenta of galaxies.
There are mainly three scenarios for the origin of galaxy clusters and superclusters. These models are based on different assumptions of the primordial conditions, so they predict different spin vector alignments of the galaxies. The three hypotheses are the pancake model, the hierarchy model, and the primordial vorticity theory. The three are mutually exclusive as they produce contradictory predictions. However, the predictions made by all three theories are based on the precepts of cosmology. Thus, these models can be tested using a database with appropriate methods of analysis.
Galaxies
A galaxy is a large gravitational aggregation of stars, dust, gas, and an unknown component termed dark matter. The Milky Way Galaxy is only one of the billions of galaxies in the known universe. Galaxies are classified into spirals, ellipticals, irregular, and peculiar. Sizes can range from only a few thousand stars (dwarf irregulars) to 10^13 stars in giant ellipticals. Elliptical galaxies are spherical or elliptical in appearance. Spiral galaxies range from S0, the lenticular galaxies, to Sb, which have a bar across the nucleus, to Sc galaxies which have strong spiral arms. In total count, ellipticals amount to 13%, S0 to 22%, Sa, b, c galaxies to 61%, irregulars to 3.5%, and peculiars to 0.9%.
At the center of most galaxies is a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What force holds together both types of star clusters?
A. weight
B. magnetism
C. gravity
D. inertia
Answer:
|
|
sciq-4703
|
multiple_choice
|
At the time of birth, bones of the brain case are separated by what wide areas of fibrous connective tissue, which later become sutures?
|
[
"fontanelles",
"sporozoans",
"cerebellum",
"fluctuations"
] |
A
|
Relavent Documents:
Document 0:::
Cytoarchitecture (Greek κύτος= "cell" + ἀρχιτεκτονική= "architecture"), also known as cytoarchitectonics, is the study of the cellular composition of the central nervous system's tissues under the microscope. Cytoarchitectonics is one of the ways to parse the brain, by obtaining sections of the brain using a microtome and staining them with chemical agents which reveal where different neurons are located.
The study of the parcellation of nerve fibers (primarily axons) into layers forms the subject of myeloarchitectonics (<Gk. μυελός=marrow + ἀρχιτεκτονική=architecture), an approach complementary to cytoarchitectonics.
History of the cerebral cytoarchitecture
Defining cerebral cytoarchitecture began with the advent of histology—the science of slicing and staining brain slices for examination. It is credited to the Viennese psychiatrist Theodor Meynert (1833–1892), who in 1867 noticed regional variations in the histological structure of different parts of the gray matter in the cerebral hemispheres.
Paul Flechsig was the first to present a cytoarchitectonic division of the human brain, into 40 areas. Alfred Walter Campbell then divided it into 14 areas.
Sir Grafton Elliot Smith (1871–1937), a New South Wales native working in Cairo, identified 50 areas. Korbinian Brodmann worked on the brains of diverse mammalian species and developed a division of the cerebral cortex into 52 discrete areas (of which 44 are in the human brain, and the remaining 8 in the non-human primate brain). Brodmann used numbers to categorize the different architectural areas, now referred to as Brodmann areas, and he believed that each of these regions served a unique functional purpose.
Constantin von Economo and Georg N. Koskinas, two neurologists in Vienna, produced a landmark work in brain research by defining 107 cortical areas on the basis of cytoarchitectonic criteria. They used letters to categorize the architecture, e.g., "F" for areas of the frontal lobe.
The Nissl staining technique
The Nissl stain
Document 1:::
The ovarian cortex is the outer portion of the ovary. The ovarian follicles are located within the ovarian cortex. The ovarian cortex is made up of connective tissue. Ovarian cortex tissue transplant has been performed to treat infertility.
Document 2:::
The sulcus limitans is a shallow, longitudinal groove separating the developing gray matter into basal and alar plates along the length of the neural tube. It extends the length of the spinal cord and through the mesencephalon.
Document 3:::
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
H2.00.05.2.00001: Striated muscle tissue
H2.00.06.0.00001: Nerve tissue
H2.00.06.1.00001: Neuron
H2.00.06.2.00001: Synapse
H2.00.06.2.00001: Neuroglia
h3.01: Bones
h3.02: Joints
h3.03: Muscles
h3.04: Alimentary system
h3.05: Respiratory system
h3.06: Urinary system
h3.07: Genital system
h3.08:
Document 4:::
The sphenozygomatic suture is the cranial suture between the sphenoid bone and the zygomatic bone.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
At the time of birth, bones of the brain case are separated by what wide areas of fibrous connective tissue, which later become sutures?
A. fontanelles
B. sporozoans
C. cerebellum
D. fluctuations
Answer:
|
|
sciq-3580
|
multiple_choice
|
What is the microtubule-organizing center found near the nuclei of animal cells?
|
[
"lysosome",
"spliceosome",
"centrosome",
"entrosome"
] |
C
|
Relavent Documents:
Document 0:::
Organizing center may refer to:
Microtubule organizing center
Spemann's Organizer
Certain groups of cells in mesoderm formation, see FGF and mesoderm formation
Primitive streak in Amniotes responsible for gastrulation
a small cell group underneath the stem cells in Arabidopsis and other plants
animal cap cells treated with activin
Pattern formation
Developmental biology
Document 1:::
In cell biology, microtrabeculae were a hypothesised fourth element of the cytoskeleton (the other three being microfilaments, microtubules and intermediate filaments), proposed by Keith Porter based on images obtained from high-voltage electron microscopy of whole cells in the 1970s. The images showed short, filamentous structures of unknown molecular composition associated with known cytoplasmic structures. It is now generally accepted that microtrabeculae are nothing more than an artifact of certain types of fixation treatment, although the complexity of the cell's cytoskeleton is not yet fully understood.
Document 2:::
Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure.
General characteristics
There are two types of cells: prokaryotes and eukaryotes.
Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles.
Prokaryotes
Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement).
Eukaryotes
Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str
Document 3:::
Centrocones are sub-cellular structures involved in the cell division of apicomplexan parasites. The centrocone is a nuclear sub-compartment in parasites such as Toxoplasma gondii that works in apposition with the centrosome to coordinate the budding process in mitosis. The centrocone concentrates and organizes various regulatory factors involved in the early stages of mitosis, including the ECR1 and TgCrk5 proteins. The membrane occupation and recognition nexus 1 (MORN1) protein is also contained in this structure and is linked to human diseases, though not much is yet known about the connection between the centrocone and the MORN1 protein.
Centrocones are located in the nuclear envelope and contain spindles that are used in mitosis. Chromosomes are contained within these spindles of the centrocone throughout the cell cycle.
Document 4:::
This lecture, named in memory of Keith R. Porter, is presented to an eminent cell biologist each year at the ASCB Annual Meeting. The ASCB Program Committee and the ASCB President recommend the Porter Lecturer to the Porter Endowment each year.
Lecturers
Source: ASCB
See also
List of biology awards
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the microtubule-organizing center found near the nuclei of animal cells?
A. lysosome
B. spliceosome
C. centrosome
D. entrosome
Answer:
|
|
ai2_arc-335
|
multiple_choice
|
Which action will result in a product with new chemical properties?
|
[
"shredding a newspaper",
"breaking a mirror",
"cutting wood",
"popping popcorn"
] |
D
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 2:::
Bioproducts engineering or bioprocess engineering refers to engineering of bio-products from renewable bioresources. This pertains to the design and development of processes and technologies for the sustainable manufacture of bioproducts (materials, chemicals and energy) from renewable biological resources.
Bioproducts engineers harness the molecular building blocks of renewable resources to design, develop and manufacture environmentally friendly industrial and consumer products. From biofuels, renewable energy, and bioplastics to paper products and "green" building materials such as bio-based composites, Bioproducts engineers are developing sustainable solutions to meet the world's growing materials and energy demand. Conventional bioproducts and emerging bioproducts are two broad categories used to categorize bioproducts. Examples of conventional bio-based products include building materials, pulp and paper, and forest products. Examples of emerging bioproducts or biobased products include biofuels, bioenergy, starch-based and cellulose-based ethanol, bio-based adhesives, biochemicals, biodegradable plastics, etc. Bioproducts Engineers play a major role in the design and development of "green" products including biofuels, bioenergy, biodegradable plastics, biocomposites, building materials, paper and chemicals. Bioproducts engineers also develop energy efficient, environmentally friendly manufacturing processes for these products as well as effective end-use applications. Bioproducts engineers play a critical role in a sustainable 21st century bio-economy by using renewable resources to design, develop, and manufacture the products we use every day. The career outlook for bioproducts engineers is very bright with employment opportunities in a broad range of industries, including pulp and paper, alternative energy, renewable plastics, and other fiber, forest products, building materials and chemical-based industries.
Commonly referred to as bioprocess engineerin
Document 3:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 4:::
A breakthrough curve in adsorption is the course of the effluent adsorptive concentration at the outlet of a fixed bed adsorber. Breakthrough curves are important for adsorptive separation technologies and for the characterization of porous materials.
Importance
Since almost all adsorptive separation processes are dynamic, meaning that they run under flow, porous materials for those applications have to be tested for their separation performance under flow as well. Since separation processes run with mixtures of different components, measuring several breakthrough curves yields thermodynamic mixture equilibria (mixture sorption isotherms) that are hardly accessible with static manometric sorption characterization. This enables the determination of sorption selectivities in the gaseous and liquid phase.
The determination of breakthrough curves is the foundation of many other processes, like the pressure swing adsorption. Within this process, the loading of one adsorber is equivalent to a breakthrough experiment.
Measurement
A fixed bed of porous material (e.g. activated carbons or zeolites) is pressurized and purged with a carrier gas. After the flow becomes stationary, one or more adsorptives are added to the carrier gas, resulting in a step-wise change of the inlet concentration. This is in contrast to chromatographic separation processes, where pulse-wise changes of the inlet concentrations are used. The course of the adsorptive concentrations at the outlet of the fixed bed is monitored.
Results
Integration of the area above the entire breakthrough curve gives the maximum loading of the adsorptive material. Additionally, the duration of the breakthrough experiment until a certain threshold of the adsorptive concentration at the outlet can be measured, which enables the calculation of a technically usable sorption capacity. Up to this time, the quality of the product stream can be maintained. The shape of the breakthrough curves contains informat
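A minimal sketch of this integration, assuming an illustrative sigmoid-shaped breakthrough curve and made-up flow, concentration and bed-mass values; the loading follows from one common form of the mass balance, q = F · c_in · ∫(1 − c_out/c_in) dt / m.

```python
import math

# Illustrative breakthrough data (assumed, not from the text): a sigmoid-shaped
# outlet/inlet concentration ratio c_out/c_in versus time.
def c_ratio(t_min: float) -> float:
    return 1.0 / (1.0 + math.exp(-(t_min - 30.0) / 3.0))

F = 0.1      # volumetric flow rate, L/min (assumed)
c_in = 2.0   # inlet adsorptive concentration, mmol/L (assumed)
m = 5.0      # adsorbent mass, g (assumed)

# The area above the breakthrough curve is the integral of (1 - c_out/c_in) dt,
# evaluated here with the trapezoidal rule on a uniform time grid.
n, t_end = 600, 60.0
dt = t_end / n
area = sum(
    0.5 * dt * ((1.0 - c_ratio(i * dt)) + (1.0 - c_ratio((i + 1) * dt)))
    for i in range(n)
)  # minutes

q_max = F * c_in * area / m  # maximum (equilibrium) loading, mmol per gram
print(f"stoichiometric time ~ {area:.1f} min, loading ~ {q_max:.3f} mmol/g")
```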
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which action will result in a product with new chemical properties?
A. shredding a newspaper
B. breaking a mirror
C. cutting wood
D. popping popcorn
Answer:
|
|
sciq-9326
|
multiple_choice
|
Single-displacement reactions are a subset of what?
|
[
"kinetic reactions",
"redox reactions",
"particle reactions",
"gravitational reactions"
] |
B
|
Relavent Documents:
Document 0:::
An elementary reaction is a chemical reaction in which one or more chemical species react directly to form products in a single reaction step and with a single transition state. In practice, a reaction is assumed to be elementary if no reaction intermediates have been detected or need to be postulated to describe the reaction on a molecular scale. An apparently elementary reaction may be in fact a stepwise reaction, i.e. a complicated sequence of chemical reactions, with reaction intermediates of variable lifetimes.
In a unimolecular elementary reaction, a molecule A dissociates or isomerises to form the product(s):
A → products.
At constant temperature, the rate of such a reaction is proportional to the concentration of the species A:
rate = k[A].
In a bimolecular elementary reaction, two atoms, molecules, ions or radicals, A and B, react together to form the product(s):
A + B → products.
The rate of such a reaction, at constant temperature, is proportional to the product of the concentrations of the species A and B:
rate = k[A][B].
The rate expression for an elementary bimolecular reaction is sometimes referred to as the Law of Mass Action as it was first proposed by Guldberg and Waage in 1864. An example of this type of reaction is a cycloaddition reaction.
This rate expression can be derived from first principles by using collision theory for ideal gases. For the case of dilute fluids equivalent results have been obtained from simple probabilistic arguments.
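A minimal numerical sketch of the bimolecular rate law above, with an assumed rate constant and initial concentrations, integrated by forward Euler:

```python
# Forward-Euler integration of d[A]/dt = d[B]/dt = -k [A][B].
# The rate constant and initial concentrations are assumed for illustration.
k = 0.5           # L mol^-1 s^-1
A, B = 1.0, 0.6   # mol/L
dt, t_end = 1.0e-3, 10.0

steps = int(t_end / dt)
for _ in range(steps):
    rate = k * A * B  # bimolecular rate, proportional to the product [A][B]
    A -= rate * dt
    B -= rate * dt

print(f"[A] = {A:.4f} M, [B] = {B:.4f} M after {t_end:.0f} s")
```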
According to collision theory the probability of three chemical species reacting simultaneously with each other in a termolecular elementary reaction is negligible. Hence such termolecular reactions are commonly referred to as non-elementary reactions and can be broken down into a more fundamental set of bimolecular reactions, in agreement with the law of mass action. It is not always possible to derive overall reaction schemes, but solutions based on rate equations are often possible in terms of steady-state or Michaelis-Menten approximations.
Notes
Chemical kinetics
Phy
Document 1:::
In chemistry, a reaction coordinate is an abstract one-dimensional coordinate chosen to represent progress along a reaction pathway. Where possible it is usually a geometric parameter that changes during the conversion of one or more molecular entities, such as bond length or bond angle. For example, in the homolytic dissociation of molecular hydrogen, an apt choice would be the coordinate corresponding to the bond length. Non-geometric parameters such as bond order are also used, but such direct representation of the reaction process can be difficult, especially for more complex reactions.
In molecular dynamics simulations, a reaction coordinate is called a collective variable.
A reaction coordinate parametrises reaction process at the level of the molecular entities involved. It differs from extent of reaction, which measures reaction progress in terms of the composition of the reaction system.
(Free) energy is often plotted against reaction coordinate(s) to demonstrate in schematic form the potential energy profile (an intersection of a potential energy surface) associated with the reaction.
In the formalism of transition-state theory the reaction coordinate for each reaction step is one of a set of curvilinear coordinates obtained from the conventional coordinates for the reactants, and leads smoothly among configurations, from reactants to products via the transition state. It is typically chosen to follow the path defined by potential energy gradient – shallowest ascent/steepest descent – from reactants to products.
Notes and references
Physical chemistry
Quantum chemistry
Theoretical chemistry
Computational chemistry
Molecular physics
Chemical kinetics
Document 2:::
Activation energy asymptotics (AEA), also known as large activation energy asymptotics, is an asymptotic analysis used in the combustion field utilizing the fact that the reaction rate is extremely sensitive to temperature changes due to the large activation energy of the chemical reaction.
History
The techniques were pioneered by the Russian scientists Yakov Borisovich Zel'dovich, David A. Frank-Kamenetskii and co-workers in the 1930s, in their study of premixed flames and thermal explosions (Frank-Kamenetskii theory), but did not become popular among Western scientists until the 1970s. In the early 1970s, due to the pioneering work of Williams B. Bush, Francis E. Fendell, Forman A. Williams, Amable Liñán and John F. Clarke, it became popular in the Western community, and since then it has been widely used to explain more complicated problems in combustion.
Method overview
In combustion processes, the reaction rate is dependent on temperature in the following form (Arrhenius law),

$$\omega(T) \propto e^{-E_a/(RT)},$$

where $E_a$ is the activation energy and $R$ is the universal gas constant. In general, the condition $E_a/(R T_b) \gg 1$ is satisfied, where $T_b$ is the burnt gas temperature. This condition forms the basis for activation energy asymptotics. Denoting by $T_u$ the unburnt gas temperature, one can define the Zel'dovich number $\beta$ and the heat release parameter $\alpha$ as follows:

$$\beta = \frac{E_a}{R T_b}\,\frac{T_b - T_u}{T_b}, \qquad \alpha = \frac{T_b - T_u}{T_b}.$$

In addition, if we define a non-dimensional temperature

$$\theta = \frac{T - T_u}{T_b - T_u},$$

such that $\theta$ approaches zero in the unburnt region and unity in the burnt gas region (in other words, $0 \le \theta \le 1$), then the ratio of the reaction rate at any temperature to the reaction rate at the burnt gas temperature is given by

$$\frac{\omega(T)}{\omega(T_b)} = \exp\!\left[-\frac{\beta\,(1-\theta)}{1-\alpha\,(1-\theta)}\right].$$

Now in the limit of $\beta \to \infty$ (large activation energy) with $\alpha \sim O(1)$, the reaction rate is exponentially small, i.e. $O(e^{-\beta})$, and negligible everywhere, but non-negligible when $1-\theta \sim O(1/\beta)$. In other words, the reaction rate is negligible everywhere except in a small region very close to the burnt gas temperature, where $1-\theta \sim O(1/\beta)$. Thus, in solving the conservation equations, one identifies two different regimes at leading order:
Outer convective-diffusive zone
Inner reactive-diffusive zone
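As a rough numerical illustration of the definitions above (the parameter values are assumed, representative of a hydrocarbon flame, and not taken from this text), the snippet below evaluates the Zel'dovich number and shows how sharply the rate ratio collapses away from the burnt gas temperature:

```python
import math

# Assumed representative values for a hydrocarbon flame (illustrative only):
E_a = 1.6e5   # activation energy, J/mol
R = 8.314     # universal gas constant, J/(mol K)
T_b = 2000.0  # burnt gas temperature, K
T_u = 300.0   # unburnt gas temperature, K

alpha = (T_b - T_u) / T_b          # heat release parameter
beta = (E_a / (R * T_b)) * alpha   # Zel'dovich number

def rate_ratio(theta):
    """omega(T)/omega(T_b) as a function of non-dimensional temperature theta."""
    return math.exp(-beta * (1.0 - theta) / (1.0 - alpha * (1.0 - theta)))

print(f"beta = {beta:.1f}")  # ~8.2 for these values
for theta in (1.0, 0.95, 0.9, 0.5):
    print(f"theta = {theta}: omega ratio = {rate_ratio(theta):.2e}")
```

For these numbers the rate at $\theta = 0.5$ is already almost four orders of magnitude below its burnt-gas value, which is why the reaction can be treated as confined to a thin inner zone.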
Document 3:::
Activation, in chemistry and biology, is the process whereby something is prepared or excited for a subsequent reaction.
Chemistry
In chemistry, "activation" refers to the reversible transition of a molecule into a nearly identical chemical or physical state, with the defining characteristic being that this resultant state exhibits an increased propensity to undergo a specified chemical reaction. Thus, activation is conceptually the opposite of protection, in which the resulting state exhibits a decreased propensity to undergo a certain reaction.
The energy of activation specifies the amount of free energy the reactants must possess (in addition to their rest energy) in order to initiate their conversion into corresponding products—that is, in order to reach the transition state for the reaction. The energy needed for activation can be quite small, and often it is provided by the natural random thermal fluctuations of the molecules themselves (i.e. without any external sources of energy).
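For a rate constant of Arrhenius form $k = A\,e^{-E_a/RT}$, the effect of the activation energy can be made quantitative (the numbers below are illustrative, not from this text):

$$\frac{k(T_2)}{k(T_1)} = \exp\!\left[\frac{E_a}{R}\left(\frac{1}{T_1}-\frac{1}{T_2}\right)\right].$$

For example, with $E_a = 50\ \mathrm{kJ/mol}$, warming from $T_1 = 298\ \mathrm{K}$ to $T_2 = 308\ \mathrm{K}$ gives $k(T_2)/k(T_1) \approx e^{0.66} \approx 1.9$, the familiar rule of thumb that many reaction rates roughly double for a 10 K rise near room temperature.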
The branch of chemistry that deals with this topic is called chemical kinetics.
Biology
Biochemistry
In biochemistry, activation, specifically called bioactivation, is the process by which enzymes or other biologically active molecules acquire the ability to perform their biological function; for example, inactive proenzymes are converted into active enzymes that are able to catalyze the conversion of their substrates into products. Bioactivation may also refer to the process by which inactive prodrugs are converted into their active metabolites, or to the toxication of protoxins into actual toxins.
An enzyme may be reversibly or irreversibly bioactivated. A major mechanism of irreversible bioactivation is where a piece of a protein is cut off by cleavage, producing an enzyme that will then stay active. A major mechanism of reversible bioactivation is substrate presentation where an enzyme translocates near its substrate. Another reversible reaction is where a cofactor binds to an enzyme, which then rem
Document 4:::
The theory of response reactions (RERs) was elaborated for systems in which several physico-chemical processes run simultaneously in mutual interaction, with local thermodynamic equilibrium, and in which state variables called extents of reaction are allowed, but thermodynamic equilibrium proper is not required. It is based on a detailed analysis of the Hessian determinant, using either the Gibbs or the De Donder method of analysis. The theory derives the sensitivity coefficient as the sum of the contributions of individual RERs. Thus phenomena which are in contradiction to over-general statements of the Le Chatelier principle can be interpreted. With the help of RERs the equilibrium coupling was defined. RERs can be derived based either on the species or on the stoichiometrically independent reactions of a parallel system. The set of RERs is unambiguous in a given system, and the number of them is $M = \binom{S}{C+1}$, where $S$ denotes the number of species and $C$ refers to the number of components. In the case of three-component systems, RERs can be visualized on a triangle diagram.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Single-displacement reactions are a subset of what?
A. kinetic reactions
B. redox reactions
C. particle reactions
D. gravitational reactions
Answer:
|
|
sciq-4923
|
multiple_choice
|
Budding is a form of what type of reproduction in tunicates?
|
[
"sexual",
"asexual",
"nuclear",
"microscopic"
] |
B
|
Relevant Documents:
Document 0:::
Sexual maturity is the capability of an organism to reproduce. In humans, it is related to both puberty and adulthood. However, puberty is the process of biological sexual maturation, while the concept of adulthood is generally based on broader cultural definitions.
Most multicellular organisms are unable to sexually reproduce at birth (animals) or germination (e.g. plants): depending on the species, it may be days, weeks, or years until they have developed enough to be able to do so. Also, certain cues may trigger an organism to become sexually mature. They may be external, such as drought (certain plants), or internal, such as percentage of body fat (certain animals). (Such internal cues are not to be confused with hormones, which directly produce sexual maturity – the production/release of those hormones is triggered by such cues.)
Role of reproductive organs
Sexual maturity is brought about by a maturing of the reproductive organs and the production of gametes. It may also be accompanied by a growth spurt or other physical changes which distinguish the immature organism from its adult form. In animals these are termed secondary sex characteristics, and often represent an increase in sexual dimorphism.
After sexual maturity is achieved, some organisms become infertile, or even change their sex. Some organisms are hermaphrodites and may or may not be able to "completely" mature and/or to produce viable offspring. Also, while in many organisms sexual maturity is strongly linked to age, many other factors are involved, and it is possible for some to display most or all of the characteristics of the adult form without being sexually mature. Conversely, it is also possible for the "immature" form of an organism to reproduce. This is called progenesis, in which sexual development occurs faster than other physiological development (in contrast, the term neoteny refers to cases where non-sexual development is slowed, but the result is the same: the retention of juvenile c
Document 1:::
Budding or blastogenesis is a type of asexual reproduction in which a new organism develops from an outgrowth or bud due to cell division at one particular site. For example, the small bulb-like projection coming out from the yeast cell is known as a bud. Since the reproduction is asexual, the newly created organism is a clone and, except for mutations, is genetically identical to the parent organism. Organisms such as hydra use regenerative cells for reproduction in the process of budding.
In hydra, a bud develops as an outgrowth due to repeated cell division at one specific site. These buds develop into tiny individuals and, when fully mature, detach from the parent body and become new independent individuals.
Internal budding or endodyogeny is a process of asexual reproduction, favored by parasites such as Toxoplasma gondii. It involves an unusual process in which two daughter cells are produced inside a mother cell, which is then consumed by the offspring prior to their separation.
Endopolygeny is the division into several organisms at once by internal budding.
Cellular reproduction
Some cells divide asymmetrically by budding, for example Saccharomyces cerevisiae, the yeast species used in baking and brewing. This process results in a 'mother' cell and a smaller 'daughter' cell. Cryo-electron tomography recently revealed that mitochondria in cells divide by budding.
Animal reproduction
In some multicellular animals, offspring may develop as outgrowths of the mother. Animals that reproduce by budding include corals, some sponges, some acoels (e.g., Convolutriloba), and echinoderm larvae.
Colony division
Colonies of some bee species have also exhibited budding behavior, such as Apis dorsata. Although budding behavior is rare in this bee species, it has been observed when a group of workers leave the natal nest and construct a new nest usually near the natal one.
Virology
In virology, budding is a form of viral shedding by which enveloped viruses acquire their
Document 2:::
A juvenile is an individual organism (especially an animal) that has not yet reached its adult form, sexual maturity or size. Juveniles can look very different from the adult form, particularly in colour, and may not fill the same niche as the adult form. In many organisms the juvenile has a different name from the adult (see List of animal names).
Some organisms reach sexual maturity in a short metamorphosis, such as ecdysis in many insects and some other arthropods. For others, the transition from juvenile to fully mature is a more prolonged process—puberty in humans and other species (like higher primates and whales), for example. In such cases, juveniles during this transformation are sometimes called subadults.
Many invertebrates cease development upon reaching adulthood. The stages of such invertebrates are larvae or nymphs.
In vertebrates and some invertebrates (e.g. spiders), larval forms (e.g. tadpoles) are usually considered a development stage of their own, and "juvenile" refers to a post-larval stage that is not fully grown and not sexually mature. In amniotes, the embryo represents the larval stage. Here, a "juvenile" is an individual in the time between hatching/birth/germination and reaching maturity.
Examples
For animal larval juveniles, see larva
Juvenile birds or bats can be called fledglings
For cat juveniles, see kitten
For dog juveniles, see puppy
For human juvenile life stages, see childhood and adolescence, an intermediary period between the onset of puberty and full physical, psychological, and social adulthood
Document 3:::
Microgametogenesis is the process in plant reproduction where a microgametophyte develops in a pollen grain to the three-celled stage of its development. In flowering plants it occurs with a microspore mother cell inside the anther of the plant.
When the microgametophyte is first formed inside the pollen grain, four sets of fertile cells called sporogenous cells are apparent. These cells are surrounded by a wall of sterile cells called the tapetum, which supplies food to the cell and eventually becomes the cell wall for the pollen grain. These sets of sporogenous cells eventually develop into diploid microspore mother cells. These microspore mother cells, also called microsporocytes, then undergo meiosis and become four haploid microspores. These new microspore cells then undergo mitosis and form a tube cell and a generative cell. The generative cell then undergoes mitosis one more time to form two male gametes, also called sperm.
See also
Gametogenesis
Document 4:::
Paratomy is a form of asexual reproduction in animals where the organism splits in a plane perpendicular to the antero-posterior axis and the split is preceded by the "pregeneration" of the anterior structures in the posterior portion. The developing organisms have their body axis aligned, i.e., they develop in a head to tail fashion.
Budding can be considered to be similar to paratomy, except that the body axes need not be aligned: the new head may grow toward the side or even point backward (e.g. Convolutriloba retrogemma, an acoel flatworm). In animals that undergo fast paratomy, a chain of zooids packed in a head-to-tail formation may develop. Many oligochaete annelids, acoelous turbellarians, echinoderm larvae and coelenterates reproduce by this method.
See also
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Budding is a form of what type of reproduction in tunicates?
A. sexual
B. asexual
C. nuclear
D. microscopic
Answer:
|
|
sciq-4225
|
multiple_choice
|
What cell structures are like storage centers and tend to be larger in plant cells?
|
[
"alleles",
"tubules",
"vacuoles",
"nuclei"
] |
C
|
Relevant Documents:
Document 0:::
Plant stem cells
Plant stem cells are innately undifferentiated cells located in the meristems of plants. Plant stem cells serve as the origin of plant vitality, as they maintain themselves while providing a steady supply of precursor cells to form differentiated tissues and organs in plants. Two distinct areas of stem cells are recognised: the apical meristem and the lateral meristem.
Plant stem cells are characterized by two distinctive properties: the ability to create all differentiated cell types and the ability to self-renew such that the number of stem cells is maintained. Plant stem cells never undergo the aging process but continually give rise to new specialized and unspecialized cells, and they have the potential to grow into any organ, tissue, or cell in the body. Thus they are totipotent cells equipped with regenerative powers that facilitate plant growth and the production of new organs throughout the plant's lifetime.
Unlike animals, plants are immobile. As plants cannot escape from danger by moving away, they need a special mechanism to withstand various and sometimes unforeseen environmental stresses. What empowers them to withstand harsh external influences and preserve life is their stem cells. In fact, plants comprise the oldest and the largest living organisms on earth, including Bristlecone Pines in California, U.S. (4,842 years old), and the Giant Sequoia in mountainous regions of California, U.S. (87 meters in height and 2,000 tons in weight). This is possible because they have a modular body plan that enables them to survive substantial damage by initiating continuous and repetitive formation of new structures and organs such as leaves and flowers.
Plant stem cells are also characterized by their location in specialized structures called meristematic tissues, which are located in the root apical meristem (RAM), the shoot apical meristem (SAM), and the vascular system ((pro)cambium or vascular meristem).
Research and development
Traditionally, plant stem ce
Document 1:::
A stem is one of the two main structural axes of a vascular plant, the other being the root. It supports leaves, flowers and fruits; transports water and dissolved substances between the roots and the shoots in the xylem and phloem; carries out photosynthesis; stores nutrients; and produces new living tissue. The stem can also be called the halm, haulm or culm.
The stem is normally divided into nodes and internodes:
The nodes are the points of attachment for leaves and can hold one or more leaves. There are sometimes axillary buds between the stem and leaf which can grow into branches (with leaves, conifer cones, or flowers). Adventitious roots may also be produced from the nodes. Vines may produce tendrils from nodes.
The internodes distance one node from another.
The term "shoots" is often confused with "stems"; "shoots" generally refers to new fresh plant growth, including both stems and other structures like leaves or flowers.
In most plants, stems are located above the soil surface, but some plants have underground stems.
Stems have several main functions:
Support for and the elevation of leaves, flowers, and fruits. The stems keep the leaves in the light and provide a place for the plant to keep its flowers and fruits.
Transport of fluids between the roots and the shoots in the xylem and phloem.
Storage of nutrients.
Production of new living tissue. The normal lifespan of plant cells is one to three years. Stems have cells called meristems that annually generate new living tissue.
Photosynthesis.
Stems have two pipe-like tissues called xylem and phloem. The xylem tissue arises from the cell facing inside and transports water by the action of transpiration pull, capillary action, and root pressure. The phloem tissue arises from the cell facing outside and consists of sieve tubes and their companion cells. The function of phloem tissue is to distribute food from photosynthetic tissue to other tissues. The two tissues are separated by cambium, a tis
Document 2:::
The ground tissue of plants includes all tissues that are neither dermal nor vascular. It can be divided into three types based on the nature of the cell walls. This tissue system is present between the dermal tissue and forms the main bulk of the plant body.
Parenchyma cells have thin primary walls and usually remain alive after they become mature. Parenchyma forms the "filler" tissue in the soft parts of plants, and is usually present in cortex, pericycle, pith, and medullary rays in primary stem and root.
Collenchyma cells have thin primary walls with some areas of secondary thickening. Collenchyma provides extra mechanical and structural support, particularly in regions of new growth.
Sclerenchyma cells have thick lignified secondary walls and often die when mature. Sclerenchyma provides the main structural support to a plant.
Parenchyma
Parenchyma is a versatile ground tissue that generally constitutes the "filler" tissue in soft parts of plants. It forms, among other things, the cortex (outer region) and pith (central region) of stems, the cortex of roots, the mesophyll of leaves, the pulp of fruits, and the endosperm of seeds. Parenchyma cells are often living cells and may remain meristematic, meaning that they are capable of cell division if stimulated. They have thin and flexible cellulose cell walls and are generally polyhedral when close-packed, but can be roughly spherical when isolated from their neighbors. Parenchyma cells are generally large. They have large central vacuoles, which allow the cells to store and regulate ions, waste products, and water. Tissue specialised for food storage is commonly formed of parenchyma cells.
Parenchyma cells have a variety of functions:
In leaves, they form two layers of mesophyll cells immediately beneath the epidermis of the leaf, that are responsible for photosynthesis and the exchange of gases. These layers are called the palisade parenchyma and spongy mesophyll. Palisade parenchyma cells can be either cu
Document 3:::
In botany, a cortex is an outer layer of a stem or root in a vascular plant, lying below the epidermis but outside of the vascular bundles. The cortex is composed mostly of large thin-walled parenchyma cells of the ground tissue system and shows little to no structural differentiation. The outer cortical cells often acquire irregularly thickened cell walls, and are called collenchyma cells.
Plants
Stems and branches
In the three dimensional structure of herbaceous stems, the epidermis, cortex and vascular cambium form concentric cylinders around the inner cylindrical core of pith. Some of the outer cortical cells may contain chloroplasts, giving them a green color. They can therefore produce simple carbohydrates through photosynthesis.
In woody plants, the cortex is located between the periderm (bark) and the vascular tissue (phloem, in particular). It is responsible for the transportation of materials into the central cylinder of the root through diffusion and may also be used for storage of food in the form of starch.
Roots
In the roots of vascular plants, the cortex occupies a larger portion of the organ's volume than in herbaceous stems. The loosely packed cells of root cortex allow movement of water and oxygen in the intercellular spaces.
One of the main functions of the root cortex is to serve as a storage area for reserve foods. The innermost layer of the cortex in the roots of vascular plants is the endodermis. The endodermis is responsible for storing starch as well as regulating the transport of water, ions and plant hormones.
Lichen
On a lichen, the cortex is also the surface layer or "skin" of the nonfruiting part of the body of some lichens. It is the "skin", or outer layer of tissue, that covers the undifferentiated cells of the medulla. Fruticose lichens have one cortex encircling the branches, even flattened, leaf-like forms. Foliose lichens have different upper and lower cortices. Crustose, placodioid, and squamulose lichens have an upper cor
Document 4:::
Cell mechanics is a sub-field of biophysics that focuses on the mechanical properties and behavior of living cells and how it relates to cell function. It encompasses aspects of cell biophysics, biomechanics, soft matter physics and rheology, mechanobiology and cell biology.
Eukaryotic
Eukaryotic cells are cells that contain membrane-bound organelles, a membrane-bound nucleus, and more than one linear chromosome. Being much more complex than prokaryotic cells (cells without a true nucleus), eukaryotes must protect their organelles from outside forces.
Plant
Plant cell mechanics combines principles of biomechanics and mechanobiology to investigate the growth and shaping of the plant cells. Plant cells, similar to animal cells, respond to externally applied forces, such as by reorganization of their cytoskeletal network. The presence of a considerably rigid extracellular matrix, the cell wall, however, bestows the plant cells with a set of particular properties. Mainly, the growth of plant cells is controlled by the mechanics and chemical composition of the cell wall. A major part of research in plant cell mechanics is put toward the measurement and modeling of the cell wall mechanics to understand how modification of its composition and mechanical properties affects the cell function, growth and morphogenesis.
Animal
Because animal cells do not have cell walls to protect them like plant cells, they require other specialized structures to sustain external mechanical forces. All animal cells are encased within a cell membrane made of a thin lipid bilayer that protects the cell from exposure to the outside environment. Using receptors composed of protein structures, the cell membrane is able to let selected molecules within the cell. Inside the cell membrane includes the cytoplasm, which contains the cytoskeleton. A network of filamentous proteins including microtubules, intermediate filaments, and actin filaments makes up the cytoskeleton and helps maintain th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What cell structures are like storage centers and tend to be larger in plant cells?
A. alleles
B. tubules
C. vacuoles
D. nuclei
Answer:
|
|
sciq-1061
|
multiple_choice
|
Because they spread seeds, fruits are an agent of what?
|
[
"predation",
"dispersal",
"propagation",
"disposal"
] |
B
|
Relevant Documents:
Document 0:::
Fruit tree propagation is usually carried out vegetatively (non-sexually) by grafting or budding a desired variety onto a suitable rootstock.
Perennial plants can be propagated either by sexual or vegetative means. Sexual reproduction begins when a male germ cell (pollen) from one flower fertilises a female germ cell (ovule, incipient seed) of the same species, initiating the development of a fruit containing seeds. Each seed, when germinated, can grow to become a new specimen tree. However, the new tree inherits characteristics of both its parents, and it will not grow true to the variety of either parent from which it came. That is, it will be a fresh individual with an unpredictable combination of characteristics of its own. Although this is desirable in terms of producing novel combinations from the richness of the gene pool of the two parent plants (such sexual recombination is the source of new cultivars), only rarely will the resulting new fruit tree be directly useful or attractive to the tastes of humankind. Most new plants will have characteristics that lie somewhere between those of the two parents.
Therefore, from the orchard grower or gardener's point of view, it is preferable to propagate fruit cultivars vegetatively in order to ensure reliability. This involves taking a cutting (or scion) of wood from a desirable parent tree which is then grown on to produce a new plant or "clone" of the original. In effect this means that the original Bramley apple tree, for example, was a successful variety grown from a pip, but that every Bramley since then has been propagated by taking cuttings of living matter from that tree, or one of its descendants.
Methods
The simplest method of propagating a tree vegetatively is rooting or taking cuttings. A cutting (usually a piece of stem of the parent plant) is cut off and stuck into soil. Artificial rooting hormones are sometimes used to improve chances of success. If the cutting does not die from rot-inducing fungi o
Document 1:::
Multiplex sensor is a hand-held multiparametric optical sensor developed by Force-A. The sensor is a result of 15 years of research on plant autofluorescence conducted by the CNRS (National Center for Scientific Research) and University of Paris-Sud Orsay. It provides accurate and complete information on the physiological state of the crop, allowing real-time and non-destructive measurements of chlorophyll and polyphenols contents in leaves and fruits.
Technology
Multiplex assesses the chlorophyll and polyphenol indices by making use of two attributes of plant fluorescence: the effect of fluorescence re-absorption by chlorophyll and the screening effect of polyphenols.
The sensor is an optical head which contains:
Optical sources (UV, blue, green and red)
Detectors (blue-green or yellow, red and far-red (NIR))
Applications
Alongside other data, Multiplex is designed to provide input for decision support systems (DSS) for a range of crops, including:
Fertilization applications
Crop quality assessments (nitrogen status, maturity, freshness and disease detection)
As a standalone sensor, Multiplex is a tool for rapid collection of information concerning the chlorophyll and flavonoid contents of the plant, to be applied in ecophysiological research.
Document 2:::
Seed predation, often referred to as granivory, is a type of plant-animal interaction in which granivores (seed predators) feed on the seeds of plants as a main or exclusive food source, in many cases leaving the seeds damaged and not viable. Granivores are found across many families of vertebrates (especially mammals and birds) as well as invertebrates (mainly insects); thus, seed predation occurs in virtually all terrestrial ecosystems. Seed predation is commonly divided into two distinctive temporal categories, pre-dispersal and post-dispersal predation, which affect the fitness of the parental plant and the dispersed offspring (the seed), respectively. Mitigating pre- and post-dispersal predation may involve different strategies. To counter seed predation, plants have evolved both physical defenses (e.g. shape and toughness of the seed coat) and chemical defenses (secondary compounds such as tannins and alkaloids). However, as plants have evolved seed defenses, seed predators have adapted to plant defenses (e.g., ability to detoxify chemical compounds). Thus, many interesting examples of coevolution arise from this dynamic relationship.
Seeds and their defenses
Plant seeds are important sources of nutrition for animals across most ecosystems. Seeds contain food storage organs (e.g., endosperm) that provide nutrients to the developing plant embryo (cotyledon). This makes seeds an attractive food source for animals because they are a highly concentrated and localized nutrient source in relation to other plant parts.
Seeds of many plants have evolved a variety of defenses to deter predation. Seeds are often contained inside protective structures or fruit pulp that encapsulate seeds until they are ripe. Other physical defenses include spines, hairs, fibrous seed coats and hard endosperm. Seeds, especially in arid areas, may have a mucilaginous seed coat that can glue soil to seed hiding it from granivores.
Some seeds have evolved strong anti-herbivore chemical
Document 3:::
An orchard is an intentional plantation of trees or shrubs that is maintained for food production. Orchards comprise fruit- or nut-producing trees that are generally grown for commercial production. Orchards are also sometimes a feature of large gardens, where they serve an aesthetic as well as a productive purpose. A fruit garden is generally synonymous with an orchard, although it is set on a smaller, non-commercial scale and may emphasize berry shrubs in preference to fruit trees. Most temperate-zone orchards are laid out in a regular grid, with a grazed or mown grass or bare soil base that makes maintenance and fruit gathering easy.
Most modern commercial orchards are planted for a single variety of fruit. While the importance of introducing biodiversity is recognized in forest plantations, it would seem beneficial to introduce some genetic diversity in orchard plantations as well by interspersing other trees through the orchard. Genetic diversity in an orchard would provide resilience to pests and diseases, just as in forests.
Orchards are sometimes concentrated near bodies of water where climatic extremes are moderated and blossom time is retarded until frost danger is past.
Layout
An orchard's layout is the system according to which its trees are planted.
There are different methods of planting and thus different layouts. Some of these layout types are:
Square method
Rectangular method
Quincunx method
Triangular method
Hexagonal method
Contour or terrace method
For different varieties, these systems may vary to some extent.
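As a quick quantitative comparison of two of the layouts listed above (a minimal sketch; the 6 m spacing is an assumed illustrative value, not from the text), triangular planting fits roughly 15% more trees into the same area than square planting at equal spacing:

```python
import math

# Trees per hectare (10,000 m^2) for a given spacing d (metres).
def trees_per_hectare_square(d):
    # Square method: one tree per d x d cell.
    return 10_000 / (d * d)

def trees_per_hectare_triangular(d):
    # Equilateral-triangular method: one tree per d * (d * sqrt(3)/2) cell.
    return 10_000 / (d * d * math.sqrt(3) / 2)

d = 6.0  # assumed illustrative spacing, metres
print(round(trees_per_hectare_square(d)))      # ~278 trees/ha
print(round(trees_per_hectare_triangular(d)))  # ~321 trees/ha
```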
Orchards by region
The most extensive orchards in the United States are apple and orange orchards, although citrus orchards are more commonly called groves. The most extensive apple orchard area is in eastern Washington state, with a lesser but significant apple orchard area in most of Upstate New York. Extensive orange orchards are found in Florida and southern California, where they are more widely known as "groves". In
Document 4:::
In agriculture, shattering is the dispersal of a crop's seeds upon their becoming ripe. From an agricultural perspective this is generally an undesirable process, and in the history of crop domestication several important advances have involved a mutation in a crop plant that reduced shattering—instead of the seeds being dispersed as soon as they were ripe, the mutant plants retained the seeds for longer, which made harvesting much more effective. Non-shattering phenotype is one of the prerequisites for plant breeding especially when introgressing valuable traits from wild varieties of domesticated crops.
A particularly important mutation that was selected very early in the history of agriculture removed the "brittle rachis" problem from wheat. A ripe head ("ear") of wild-type wheat is easily shattered into dispersal units when touched, or blown by the wind, because during ripening a series of abscission layers forms that divides the rachis into short segments, each attached to a single spikelet (which contains 2–3 grains along with chaff).
A different class of shattering mechanisms involves dehiscence of the mature fruit, which releases the seeds.
Current research priorities to understand the genetics of shattering include the following crops:
Barley
Buckwheat
Grain Amaranth
Oilseed rape (Brassica napus)
Sesame and rapeseed are harvested before the seed is fully mature, so that the pods do not split and drop the seeds.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Because they spread seeds, fruits are an agent of what?
A. predation
B. dispersal
C. propagation
D. disposal
Answer:
|
|
sciq-5980
|
multiple_choice
|
A fundamental particle of matter, protons and neutrons are made of these?
|
[
"quarks",
"particles",
"atoms",
"neutrinos"
] |
A
|
Relevant Documents:
Document 0:::
This is a list of known and hypothesized particles.
Standard Model elementary particles
Elementary particles are particles with no measurable internal structure; that is, it is unknown whether they are composed of other particles. They are the fundamental objects of quantum field theory. Many families and sub-families of elementary particles exist. Elementary particles are classified according to their spin. Fermions have half-integer spin while bosons have integer spin. All the particles of the Standard Model have been experimentally observed, including the Higgs boson in 2012. Many other hypothetical elementary particles, such as the graviton, have been proposed, but not observed experimentally.
Fermions
Fermions are one of the two fundamental classes of particles, the other being bosons. Fermion particles are described by Fermi–Dirac statistics and have quantum numbers described by the Pauli exclusion principle. They include the quarks and leptons, as well as any composite particles consisting of an odd number of these, such as all baryons and many atoms and nuclei.
Fermions have half-integer spin; for all known elementary fermions this is $1/2$. All known fermions except neutrinos are also Dirac fermions; that is, each known fermion has its own distinct antiparticle. It is not known whether the neutrino is a Dirac fermion or a Majorana fermion. Fermions are the basic building blocks of all matter. They are classified according to whether they interact via the strong interaction or not. In the Standard Model, there are 12 types of elementary fermions: six quarks and six leptons.
Quarks
Quarks are the fundamental constituents of hadrons and interact via the strong force. Quarks are the only known carriers of fractional charge, but because they combine in groups of three quarks (baryons) or in pairs of one quark and one antiquark (mesons), only integer charge is observed in nature. Their respective antiparticles are the antiquarks, which are identical except th
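As a quick check of the integer-charge statement above (using the standard quark charge assignments, which are not stated in this excerpt), the fractional charges always sum to an integer for three-quark baryons:

```python
from fractions import Fraction

# Electric charges of the first-generation quarks, in units of the
# elementary charge e (standard assignments): up = +2/3, down = -1/3.
charge = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}

def baryon_charge(quarks):
    """Total charge of a three-quark combination (baryon), in units of e."""
    return sum(charge[q] for q in quarks)

print(baryon_charge("uud"))  # proton:  1
print(baryon_charge("udd"))  # neutron: 0
```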
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Physics First is an educational program in the United States, that teaches a basic physics course in the ninth grade (usually 14-year-olds), rather than the biology course which is more standard in public schools. This course relies on the limited math skills that the students have from pre-algebra and algebra I. With these skills students study a broad subset of the introductory physics canon with an emphasis on topics which can be experienced kinesthetically or without deep mathematical reasoning. Furthermore, teaching physics first is better suited for English Language Learners, who would be overwhelmed by the substantial vocabulary requirements of Biology.
Physics First began as an organized movement among educators around 1990, and has been slowly catching on throughout the United States. The most prominent movement championing Physics First is Leon Lederman's ARISE (American Renaissance in Science Education).
Many proponents of Physics First argue that turning this order around lays the foundations for better understanding of chemistry, which in turn will lead to more comprehension of biology. Due to the tangible nature of most introductory physics experiments, Physics First also lends itself well to an introduction to inquiry-based science education, where students are encouraged to probe the workings of the world in which they live.
The majority of high schools which have implemented "physics first" do so by way of offering two separate classes, at two separate levels: simple physics concepts in 9th grade, followed by more advanced physics courses in 11th or 12th grade. In schools with this curriculum, nearly all 9th grade students take a "Physical Science", or "Introduction to Physics Concepts" course. These courses focus on concepts that can be studied with skills from pre-algebra and algebra I. With these ideas in place, students then can be exposed to ideas with more physics related content in chemistry, and other science electives. After th
Document 3:::
Stable massive particles (SMPs) are hypothetical particles that are long-lived and have appreciable mass. The precise definition varies depending on the different experimental or observational searches. SMPs may be defined as being at least as massive as electrons and as not decaying during their passage through a detector. They can be neutral or charged or carry a fractional charge, and they can interact with matter through the gravitational force, strong force, weak force, electromagnetic force or any unknown force.
If new SMPs are ever discovered, several questions related to the origin and constituent of dark matter, and about the unification of four fundamental forces may be answered.
Collider experiments
Heavy, exotic particles that interact with matter and can be directly detected through collider experiments are termed stable massive particles, or SMPs. More specifically, an SMP is defined to be a particle that can pass through a detector without decaying and that can undergo electromagnetic or strong interactions with matter. Searches for SMPs have been carried out across a spectrum of collision experiments, such as lepton–hadron, hadron–hadron, and electron–positron collisions. Although none of these experiments have detected an SMP, they have put substantial constraints on the nature of SMPs.
ATLAS Experiment
During the proton–proton collisions with center of mass energy equal to 13 TeV at the ATLAS experiment, a search for charged SMPs was carried out. In this case SMPs were defined as particles with mass significantly more than that of standard model particles, sufficient lifetime to reach the ATLAS hadronic calorimeter and with measurable electric charge while it passes through the tracking chambers.
MoEDAL experiment
The MoEDAL experiment searches for, among other things, highly ionizing SMPs and pseudo-SMPs.
Non-collider experiments
In the case of the non-collider experiments, SMPs are defined as sufficiently long-lived particles which exist either as relics of the big bang sin
Document 4:::
Advanced Placement (AP) Physics B was a physics course administered by the College Board as part of its Advanced Placement program. It was equivalent to a year-long introductory university course covering Newtonian mechanics, electromagnetism, fluid mechanics, thermal physics, waves, optics, and modern physics. The course was algebra-based and heavily computational; in 2015, it was replaced by the more concept-focused AP Physics 1 and AP Physics 2.
Exam
The exam consisted of a 70-question multiple-choice (MCQ) section, followed by a free-response (FRQ) section with 6-7 questions. Each section lasted 90 minutes and was worth 50% of the final score. The MCQ section banned calculators, while the FRQ section allowed calculators and a list of common formulas. Overall, the exam was configured to cover approximately a set percentage of each of the five target categories:
Purpose
According to the College Board web site, the Physics B course provided "a foundation in physics for students in the life sciences, a pre medical career path, and some applied sciences, as well as other fields not directly related to science."
Discontinuation
Starting in the 2014–2015 school year, AP Physics B was no longer offered, and AP Physics 1 and AP Physics 2 took its place. Like AP Physics B, both are algebra-based, and both are designed to be taught as year-long courses.
Grade distribution
The grade distributions for the Physics B scores from 2010 until its discontinuation in 2014 are as follows:
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A fundamental particle of matter, protons and neutrons are made of these?
A. quarks
B. particles
C. atoms
D. neutrinos
Answer:
|
|
sciq-7887
|
multiple_choice
|
All matter in the universe is composed of one or more unique pure substances called what?
|
[
"molecules",
"elements",
"atoms",
"compounds"
] |
B
|
Relevant Documents:
Document 0:::
This is an index of lists of molecules (i.e. by year, number of atoms, etc.). Millions of molecules have existed in the universe since before the formation of Earth. Three of them, carbon dioxide, water and oxygen, were necessary for the growth of life. Although humanity has always been surrounded by these substances, it has not always known what they were composed of.
By century
The following is an index of list of molecules organized by time of discovery of their molecular formula or their specific molecule in case of isomers:
List of compounds
By number of carbon atoms in the molecule
List of compounds with carbon number 1
List of compounds with carbon number 2
List of compounds with carbon number 3
List of compounds with carbon number 4
List of compounds with carbon number 5
List of compounds with carbon number 6
List of compounds with carbon number 7
List of compounds with carbon number 8
List of compounds with carbon number 9
List of compounds with carbon number 10
List of compounds with carbon number 11
List of compounds with carbon number 12
List of compounds with carbon number 13
List of compounds with carbon number 14
List of compounds with carbon number 15
List of compounds with carbon number 16
List of compounds with carbon number 17
List of compounds with carbon number 18
List of compounds with carbon number 19
List of compounds with carbon number 20
List of compounds with carbon number 21
List of compounds with carbon number 22
List of compounds with carbon number 23
List of compounds with carbon number 24
List of compounds with carbon numbers 25-29
List of compounds with carbon numbers 30-39
List of compounds with carbon numbers 40-49
List of compounds with carbon numbers 50+
Other lists
List of interstellar and circumstellar molecules
List of gases
List of molecules with unusual names
See also
Molecule
Empirical formula
Chemical formula
Chemical structure
Chemical compound
Chemical bond
Coordination complex
L
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
The idea that matter consists of smaller particles, and that there exists a limited number of sorts of primary, smallest particles in nature, has existed in natural philosophy at least since the 6th century BC. Such ideas gained physical credibility beginning in the 19th century, but the concept of "elementary particle" underwent some changes in its meaning: notably, modern physics no longer deems elementary particles indestructible. Even elementary particles can decay or collide destructively; they can cease to exist and create (other) particles as a result.
Increasingly small particles have been discovered and researched: they include molecules, which are constructed of atoms, that in turn consist of subatomic particles, namely atomic nuclei and electrons. Many more types of subatomic particles have been found. Most such particles (but not electrons) were eventually found to be composed of even smaller particles such as quarks. Particle physics studies these smallest particles and their behaviour under high energies, whereas nuclear physics studies atomic nuclei and their (immediate) constituents: protons and neutrons.
Early development
The idea that all matter is composed of elementary particles dates back at least to the 6th century BC. The Jains in ancient India were among the earliest to advocate the particulate nature of material objects, between the 9th and 5th centuries BCE. According to Jain leaders like Parshvanatha and Mahavira, the ajiva (the non-living part of the universe) consists of matter or pudgala, of definite or indefinite shape, which is made up of tiny, uncountable and invisible particles called permanu. Each permanu occupies one space-point and has a definite colour, smell, taste and texture. Infinite varieties of permanu unite and form pudgala. The philosophical doctrine of atomism and the nature of elementary particles were also studied by ancient Greek philosophers such as Leucippus, Democritus, and Epicurus; ancient Indian philosophers such as Kanada, Dignāga, and Dha
Document 3:::
This is a list of topics that are included in high school physics curricula or textbooks.
Mathematical Background
SI Units
Scalar (physics)
Euclidean vector
Motion graphs and derivatives
Pythagorean theorem
Trigonometry
Motion and forces
Motion
Force
Linear motion
Linear motion
Displacement
Speed
Velocity
Acceleration
Center of mass
Mass
Momentum
Newton's laws of motion
Work (physics)
Free body diagram
Rotational motion
Angular momentum (Introduction)
Angular velocity
Centrifugal force
Centripetal force
Circular motion
Tangential velocity
Torque
Conservation of energy and momentum
Energy
Conservation of energy
Elastic collision
Inelastic collision
Inertia
Moment of inertia
Momentum
Kinetic energy
Potential energy
Rotational energy
Electricity and magnetism
Ampère's circuital law
Capacitor
Coulomb's law
Diode
Direct current
Electric charge
Electric current
Alternating current
Electric field
Electric potential energy
Electron
Faraday's law of induction
Ion
Inductor
Joule heating
Lenz's law
Magnetic field
Ohm's law
Resistor
Transistor
Transformer
Voltage
Heat
Entropy
First law of thermodynamics
Heat
Heat transfer
Second law of thermodynamics
Temperature
Thermal energy
Thermodynamic cycle
Volume (thermodynamics)
Work (thermodynamics)
Waves
Wave
Longitudinal wave
Transverse waves
Transverse wave
Standing Waves
Wavelength
Frequency
Light
Light ray
Speed of light
Sound
Speed of sound
Radio waves
Harmonic oscillator
Hooke's law
Reflection
Refraction
Snell's law
Refractive index
Total internal reflection
Diffraction
Interference (wave propagation)
Polarization (waves)
Vibrating string
Doppler effect
Gravity
Gravitational potential
Newton's law of universal gravitation
Newtonian constant of gravitation
See also
Outline of physics
Physics education
Document 4:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and they remain in extensive use in education theory. Modern applications include two computerized tutoring systems: ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
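A minimal sketch of this structure, assuming the standard axioms (the family of feasible states contains the empty set and the full domain, and is closed under union); the skills and states below are hypothetical examples, not from the text:

```python
from itertools import combinations

# A hypothetical domain of three skills {a, b, c} and a candidate knowledge
# space: each state is a set of skills a learner may feasibly have mastered.
states = [frozenset(), frozenset("a"), frozenset("ab"), frozenset("ac"),
          frozenset("abc")]

def is_knowledge_space(states):
    """True if the family contains the empty set and the full domain,
    and is closed under union (the defining axioms of a knowledge space)."""
    family = set(states)
    domain = frozenset().union(*family)
    if frozenset() not in family or domain not in family:
        return False
    return all((s | t) in family for s, t in combinations(family, 2))

print(is_knowledge_space(states))  # True: e.g. {a,b} | {a,c} = {a,b,c} is present
```

Note that the family need not be closed under intersection; that extra requirement would make it a learning space in some formulations, which is why only unions are checked here.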
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set $Q$ of concepts, skills, or topics. Each feasible state of knowledge about $Q$ is then a subset of $Q$; the set of
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
All matter in the universe is composed of one or more unique pure substances called what?
A. molecules
B. elements
C. atoms
D. compounds
Answer:
|
|
sciq-10939
|
multiple_choice
|
What is the process of processing used material into new ones called?
|
[
"renew",
"recycling",
"reuse",
"remake"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Electronic waste recycling, electronics recycling or e-waste recycling is the disassembly and separation of components and raw materials of waste electronics; when referring to specific types of e-waste, terms like computer recycling or mobile phone recycling may be used. Like other waste streams, re-use, donation and repair are common sustainable ways to dispose of IT waste.
Since its inception in the early 1990s, more and more devices are recycled worldwide due to increased awareness and investment. Electronic recycling occurs primarily in order to recover valuable rare earth metals and precious metals, which are in short supply, as well as plastics and metals. These are resold or used in new devices after purification, in effect creating a circular economy. Such processes involve specialised facilities and premises, but within the home or ordinary workplace, sound components of damaged or obsolete computers can often be reused, reducing replacement costs.
Recycling is considered environmentally friendly because it prevents hazardous waste, including heavy metals and carcinogens, from entering the atmosphere, landfill or waterways. While electronics constitute a small fraction of total waste generated, they are far more dangerous. There is stringent legislation designed to enforce and encourage the sustainable disposal of appliances, the most notable being the Waste Electrical and Electronic Equipment Directive of the European Union and the United States National Computer Recycling Act. In 2009, 38% of computers and a quarter of total electronic waste was recycled in the United States, 5% and 3% up from 3 years prior respectively.
Reasons for recycling
Obsolete computers and old electronics are valuable sources for secondary raw materials if recycled; otherwise, these devices are a source of toxins and carcinogens. Rapid technology change, low initial cost, and planned obsolescence have resulted in a fast-growing surplus of computers and other electronic comp
Document 2:::
Microbiology of decomposition is the study of all microorganisms involved in decomposition, the chemical and physical processes during which organic matter is broken down and reduced to its original elements.
Decomposition microbiology can be divided into two fields of interest, namely the decomposition of plant materials and the decomposition of cadavers and carcasses.
The decomposition of plant materials is commonly studied in order to understand the cycling of carbon within a given environment and to understand the subsequent impacts on soil quality. Plant material decomposition is also often referred to as composting. The decomposition of cadavers and carcasses has become an important field of study within forensic taphonomy.
Decomposition microbiology of plant materials
The breakdown of vegetation is highly dependent on oxygen and moisture levels. During decomposition, microorganisms require oxygen for their respiration. If anaerobic conditions dominate the decomposition environment, microbial activity will be slow and thus decomposition will be slow. Appropriate moisture levels are required for microorganisms to proliferate and to actively decompose organic matter. In arid environments, bacteria and fungi dry out and are unable to take part in decomposition. In wet environments, anaerobic conditions will develop and decomposition can also be considerably slowed down. Decomposing microorganisms also require the appropriate plant substrates in order to achieve good levels of decomposition. This usually translates to having appropriate carbon to nitrogen ratios (C:N). The ideal composting carbon-to-nitrogen ratio is thought to be approximately 30:1. As in any microbial process, the decomposition of plant litter by microorganisms will also be dependent on temperature. For example, leaves on the ground will not undergo decomposition during the winter months where snow cover occurs as temperatures are too low to sustain microbial activities.
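As a rough illustration of the C:N arithmetic described above, the sketch below estimates the blended carbon-to-nitrogen ratio of a two-feedstock compost mix against the ~30:1 target. The feedstock masses and elemental fractions are illustrative assumptions, not measured values.

```python
# Minimal sketch: blended C:N ratio of a compost mix (all values are assumed).

def blended_cn_ratio(feedstocks):
    """feedstocks: iterable of (mass_kg, carbon_fraction, nitrogen_fraction)."""
    total_c = sum(mass * c for mass, c, _ in feedstocks)
    total_n = sum(mass * n for mass, _, n in feedstocks)
    return total_c / total_n

mix = [
    (10.0, 0.40, 0.005),  # dry leaves: ~80:1 C:N (assumed values)
    (5.0, 0.40, 0.025),   # grass clippings: ~16:1 C:N (assumed values)
]
print(f"Blended C:N ratio: {blended_cn_ratio(mix):.1f}:1")  # ~34:1, near the 30:1 target
```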
Decomposition mi
Document 3:::
Further Mathematics is the title given to a number of advanced secondary mathematics courses. The term "Higher and Further Mathematics", and the term "Advanced Level Mathematics", may also refer to any of several advanced mathematics courses at many institutions.
In the United Kingdom, Further Mathematics describes a course studied in addition to the standard mathematics AS-Level and A-Level courses. In the state of Victoria in Australia, it describes a course delivered as part of the Victorian Certificate of Education (see § Australia (Victoria) for a more detailed explanation). Globally, it describes a course studied in addition to GCE AS-Level and A-Level Mathematics, or one which is delivered as part of the International Baccalaureate Diploma.
In other words, Further Mathematics may also be referred to as advanced mathematics or advanced-level mathematics.
United Kingdom
Background
A qualification in Further Mathematics involves studying both pure and applied modules. Whilst the pure modules (formerly known as Pure 4–6 or Core 4–6, now known as Further Pure 1–3, with a Further Pure 4 offered by the AQA board) build on knowledge from the core mathematics modules, the applied modules may start from first principles.
The structure of the qualification varies between exam boards.
With regard to Mathematics degrees, most universities do not require Further Mathematics and may incorporate foundation maths modules or offer "catch-up" classes covering any additional content. Exceptions include the University of Warwick and the University of Cambridge, which require Further Mathematics to at least AS level; University College London, which requires or recommends an A2 in Further Maths for its maths courses; and Imperial College, which requires an A in A-level Further Maths. Other universities may recommend it or may promise lower offers in return. Some schools and colleges may not offer Further Mathematics, but online resources are available
Although the subject has about 60% of its cohort obtainin
Document 4:::
Types of mill include the following:
Manufacturing facilities
Categorized by power source
Watermill, a mill powered by moving water
Windmill, a mill powered by moving air (wind)
Tide mill, a water mill that uses the tide's movement
Treadmill or treadwheel, a mill powered by human or animal movement
Horse mill, a mill powered by horses' movement
Categorized by not being a fixed building
Ship mill, a water mill that floats on the river or bay whose current or tide provides the water movement
Field mill (carriage), a portable mill
Categorized by what is made and/or acted on
Materials recovery facility, processes raw garbage and turns it into purified commodities like aluminum, PET, and cardboard by processing and crushing (compressing and baling) it.
Rice mill, processes paddy to rice
Bark mill, produces tanbark for tanneries
Coffee mill
Colloid mill
Cider mill, crushes apples to give cider
Drainage mills such as the Clayrack Drainage Mill are used to pump water from low-lying land.
Flotation mill, in mining, uses grinding and froth flotation to concentrate ores using differences in materials' hydrophobicity
Gristmill, a grain mill (flour mill)
Herb grinder
Oil mill, see expeller pressing, extrusion
Ore mill, for crushing and processing ore
Paper mill
Pellet mill
Powder mill, produces gunpowder
Puppy mill, a breeding facility that produces puppies on a large scale, where the welfare of the dogs is jeopardized for profits
Rock crusher
Sugar cane mill
Sawmill, a lumber mill
Millwork
Starch mill
Steel mill
Sugar mill (also called a sugar refinery), processes sugar beets or sugar cane into various finished products
Textile mills for textile manufacturing:
Cotton mill
Flax mill, for flax
Silk mill, for silk
Woollen mill, see textile manufacturing
Huller (also called a rice mill, or rice husker) is used to hull rice
Wire mill, for wire drawing
Other types
See :Category:Industrial buildings and structures
Industrial tools for size re
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the process of processing used material into new ones called?
A. renew
B. recycling
C. reuse
D. remake
Answer:
|
|
sciq-2143
|
multiple_choice
|
What does the precise pattern of a crystal depend on?
|
[
"mass",
"compound",
"age",
"chance"
] |
B
|
Relevant Documents:
Document 0:::
A crystal filter allows some frequencies to 'pass' through an electrical circuit while attenuating undesired frequencies. An electronic filter can use quartz crystals as resonator components of a filter circuit. Quartz crystals are piezoelectric, so their mechanical characteristics can affect electronic circuits (see mechanical filter). In particular, quartz crystals can exhibit mechanical resonances with a very high quality factor (Q, from 10,000 to 100,000 and greater – far higher than conventional resonators built from inductors and capacitors). The crystal's stability and its high Q factor allow crystal filters to have precise center frequencies and steep band-pass characteristics. Typical crystal filter attenuation in the band-pass is approximately 2–3 dB. Crystal filters are commonly used in communication devices such as radio receivers.
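To make the effect of such Q values concrete, a minimal sketch follows that converts a centre frequency and quality factor into an approximate -3 dB bandwidth using the standard resonator relation BW = f0/Q; the 10.7 MHz centre frequency is one of the common intermediate frequencies mentioned later in this excerpt.

```python
# Minimal sketch: -3 dB bandwidth of a simple resonator, BW = f0 / Q.

def bandwidth_hz(f0_hz: float, q_factor: float) -> float:
    return f0_hz / q_factor

f0 = 10.7e6  # Hz, a common crystal-filter intermediate frequency
for q in (10_000, 100_000):  # the Q range quoted above for quartz crystals
    print(f"Q = {q:>7,}: bandwidth ~ {bandwidth_hz(f0, q):,.0f} Hz")
```

Even at the low end of the quoted Q range, the pass-band is only about a kilohertz wide at 10.7 MHz, which is why crystal filters provide far steeper selectivity than LC resonators.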
Crystal filters are used in the intermediate frequency (IF) stages of high-quality radio receivers. They are preferred because they are very stable mechanically and thus have little change in resonant frequency with changes in operating temperature. For the highest available stability applications, crystals are placed in ovens with controlled temperature making operating temperature independent of ambient temperature.
Cheaper sets may use ceramic filters built from ceramic resonators (which also exploit the piezoelectric effect) or tuned LC circuits. Very high quality "crystal ladder" filters can be constructed of serial arrays of crystals.
The most common use of crystal filters are at frequencies of 9 MHz or 10.7 MHz to provide selectivity in communications receivers, or at higher frequencies as a roofing filter in receivers using up-conversion. The vibrating frequencies of the crystal are determined by its "cut" (physical shape), such as the common AT cut used for crystal filters designed for radio communications. The cut also determines some temperature characteristics, which affect the stability of the resonant frequency. However
Document 1:::
Quantum crystallography is a branch of crystallography that investigates crystalline materials within the framework of quantum mechanics, with analysis and representation, in position or in momentum space, of quantities like wave function, electron charge and spin density, density matrices and all properties related to them (like electric potential, electric or magnetic moments, energy densities, electron localization function, one electron potential, etc.).
Like quantum chemistry, quantum crystallography involves both experimental and computational work. The theoretical part of quantum crystallography is based on quantum mechanical calculations of atomic/molecular/crystal wave functions, density matrices or density models, used to simulate the electronic structure of a crystalline material. While in quantum chemistry the experimental work relies mainly on spectroscopy, in quantum crystallography the scattering techniques (X-rays, neutrons, γ-rays, electrons) play the central role, although spectroscopy as well as atomic microscopy are also sources of information.
The connection between crystallography and quantum chemistry has always been very tight, after X-ray diffraction techniques became available in crystallography. In fact, the scattering of radiation enables mapping the one-electron distribution or the elements of a density matrix.
The kind of radiation and scattering determines the quantity which is represented (electron charge or spin) and the space in which it is represented (position or momentum space).
Although the wave function is typically assumed not to be directly measurable, recent advances also make it possible to compute wave functions that are restrained to some experimentally measurable observable (such as the scattering of radiation).
The term Quantum Crystallography was first introduced in revisitation articles by L. Huang, L. Massa and Nobel Prize winner Jerome Karle, who associated it with two mainstreams: a) crystallographic information that
Document 2:::
This is a timeline of crystallography.
18th Century
1723 – Moritz Anton Cappeller introduces the term ‘crystallography’.
1766 – Pierre-Joseph Macquer, in his Dictionnaire de Chymie, promotes mechanisms of crystallization based on the idea that crystals are composed of polyhedral molecules (primitive integrantes).
1772 – Jean-Baptiste L. Romé de l'Isle develops geometrical ideas on crystal structure in his Essai de Cristallographie. He also described the twinning phenomenon in crystals.
1781 – Abbé René Just Haüy (often termed the "Father of Modern Crystallography") discovers that crystals always cleave along crystallographic planes. Based on this observation, and the fact that the inter-facial angles in each crystal species always have the same value, Haüy concluded that crystals must be periodic and composed of regularly arranged rows of tiny polyhedra (molécules intégrantes). This theory explained why all crystal planes are related by small rational numbers (the law of rational indices).
1783 – Jean-Baptiste L. Romé de l'Isle in the second edition of his Cristallographie uses the contact goniometer to discover the law of constant interfacial angles: angles are constant and characteristic for crystals of the same chemical substance.
1784 – René Just Haüy publishes his Law of Decrements: a crystal is composed of molecules arranged periodically in three dimensions.
1795 – René Just Haüy lectures on his Law of Symmetry: “[…] the manner in which Nature creates crystals is always obeying [...] the law of the greatest possible symmetry, in the sense that oppositely situated but corresponding parts are always equal in number, arrangement, and form of their faces”.
19th Century
1801 – René Just Haüy publishes his multi-volume Traité de Minéralogie in Paris. A second edition under the title Traité de Cristallographie was published in 1822.
1815 – René Just Haüy publishes his Law of Symmetry.
1815 – Christian Samuel Weiss, founder of the dynamist school of crysta
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 4:::
In crystallography, a Strukturbericht designation or Strukturbericht type is a system of detailed crystal structure classification by analogy to another known structure. The designations were intended to be comprehensive but are mainly used as a supplement to space group crystal structure designations, especially historically. Each Strukturbericht designation is described by a single space group, but the designation includes additional information about the positions of the individual atoms, rather than just the symmetry of the crystal structure. While Strukturbericht symbols exist for many of the earliest observed and most common crystal structures, the system is not comprehensive, and is no longer being updated. Modern databases such as the Inorganic Crystal Structure Database index thousands of structure types directly by the prototype compound (i.e. "the NaCl structure" instead of "the B1 structure"). These are essentially equivalent to the old Strukturbericht designations.
History
The designations were established by the journal Zeitschrift für Kristallographie – Crystalline Materials, which published its first round of supplemental reviews under the name Strukturbericht from 1913–1928. These reports were collected into a book published in 1931 by Paul Peter Ewald and Carl Hermann, which became Volume 1 of Strukturbericht. The series was continued after the war under the name Structure Reports, published through 1990, but it stopped generating new symbols. Instead, some additional designations were given in books by Smithells and Pearson.
For the first volume, the designation consisted of a capital letter (A,B,C,D,E,F,G,H,L,M,O) specifying a broad category of compounds, and then a number to specify a particular crystal structure. In the second volume, subscript numbers were added, some early symbols were modified (e.g. what was initially D1 became D01, D2 became D02, etc.), and the categories were modified (types I,K,S were added). In the
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What does the precise pattern of a crystal depend on?
A. mass
B. compound
C. age
D. chance
Answer:
|
|
sciq-7090
|
multiple_choice
|
What helps cells take up glucose from the blood?
|
[
"hemoglobin",
"oxygen",
"insulin",
"estrogen"
] |
C
|
Relevant Documents:
Document 0:::
Hematopoietic stem cells (HSCs) have high regenerative potential and are capable of differentiating into all blood and immune system cells. Despite this impressive capacity, HSCs have only a limited ability to produce more multipotent stem cells. This limited self-renewal potential is protected through maintenance of a quiescent state in HSCs. Stem cells maintained in this quiescent state are known as long-term HSCs (LT-HSCs). During quiescence, HSCs maintain a low level of metabolic activity and do not divide. LT-HSCs can be signaled to proliferate, producing either myeloid or lymphoid progenitors. Production of these progenitors does not come without a cost: when grown under laboratory conditions that induce proliferation, HSCs lose their ability to divide and produce new progenitors. Therefore, understanding the pathways that maintain proliferative or quiescent states in HSCs could reveal novel ways to improve existing therapeutics involving HSCs.
Background
All adult stem cells can undergo two types of division: symmetric and asymmetric. When a cell undergoes symmetric division, it can either produce two differentiated cells or two new stem cells. When a cell undergoes asymmetric division, it produces one stem and one differentiated cell. Production of new stem cells is necessary to maintain this population within the body. Like all cells, hematopoietic stem cells undergo metabolic shifts to meet their bioenergetic needs throughout development. These metabolic shifts play an important role in signaling, generating biomass, and protecting the cell from damage. Metabolic shifts also guide development in HSCs and are one key factor in determining if an HSC will remain quiescent, symmetrically divide, or asymmetrically divide. As mentioned above, quiescent cells maintain a low level of oxidative phosphorylation and primarily rely on glycolysis to generate energy. Fatty acid beta-oxidation has been shown to influence fate decisions in HSCs. In contrast, proliferat
Document 1:::
Glycogen is a multibranched polysaccharide of glucose that serves as a form of energy storage in animals, fungi, and bacteria. It is the main storage form of glucose in the human body.
Glycogen functions as one of three regularly used forms of energy reserves, creatine phosphate being for very short-term, glycogen being for short-term and the triglyceride stores in adipose tissue (i.e., body fat) being for long-term storage. Protein, broken down into amino acids, is seldom used as a main energy source except during starvation and glycolytic crisis (see bioenergetic systems).
In humans, glycogen is made and stored primarily in the cells of the liver and skeletal muscle. In the liver, glycogen can make up 5–6% of the organ's fresh weight: the liver of an adult, weighing 1.5 kg, can store roughly 100–120 grams of glycogen. In skeletal muscle, glycogen is found in a low concentration (1–2% of the muscle mass): the skeletal muscle of an adult weighing 70 kg stores roughly 400 grams of glycogen. Small amounts of glycogen are also found in other tissues and cells, including the kidneys, red blood cells, white blood cells, and glial cells in the brain. The uterus also stores glycogen during pregnancy to nourish the embryo.
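A quick back-of-the-envelope tally of the stores quoted above, converted to energy with the standard ~4 kcal per gram approximation for carbohydrate (the midpoint liver value and the energy factor are simplifying assumptions):

```python
# Minimal sketch: total glucose reserves for a 70 kg adult, per the figures above.

stores_g = {
    "liver glycogen": 110.0,   # midpoint of the 100-120 g range quoted above
    "muscle glycogen": 400.0,
    "blood glucose": 4.0,
}
KCAL_PER_GRAM = 4.0  # standard approximation for carbohydrate

total_g = sum(stores_g.values())
print(f"Total stored glucose: ~{total_g:.0f} g (~{total_g * KCAL_PER_GRAM:.0f} kcal)")
```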
The amount of glycogen stored in the body mostly depends on oxidative type 1 fibres, physical training, basal metabolic rate, and eating habits. Different levels of resting muscle glycogen are reached by changing the number of glycogen particles rather than by increasing the size of existing particles, though most glycogen particles at rest are smaller than their theoretical maximum.
Approximately 4 grams of glucose are present in the blood of humans at all times; in fasting individuals, blood glucose is maintained constant at this level at the expense of glycogen stores in the liver and skeletal muscle. Glycogen stores in skeletal muscle serve as a form of energy storage for the muscle itself; however, the breakdown of muscle glycogen impedes muscle
Document 2:::
The glucose paradox was the observation that the large amount of glycogen in the liver was not explained by the small amount of glucose absorbed. The explanation was that the majority of glycogen is made from a number of substances other than glucose. The glucose paradox was first formulated by biochemists J. Denis McGarry and Joseph Katz in 1984.
The glucose paradox demonstrates the importance of the chemical compound lactate in the biochemical process of carbohydrate metabolism. The paradox is that the large amount of glycogen (10%) found in the liver cannot be explained by the liver's small absorption of glucose. After carbohydrates are digested and enter the circulatory system in the form of glucose, some of the glucose is absorbed directly into muscle tissue and converted into lactic acid through the anaerobic energy system, rather than going directly to the liver and being converted into glycogen. The lactate is then taken up and converted by the liver, forming the material for liver glycogen. The majority of the body's liver glycogen is thus produced indirectly, rather than directly from glucose in the blood. Under normal physiological conditions, glucose is a poor precursor compound and its use by the liver is limited.
Document 3:::
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'.
Cells can acquire specialized functions and carry out various tasks such as replication, DNA repair, protein synthesis, and motility. Cells are capable of both specialization and movement within their environment.
Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms.
The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology.
Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago.
Discovery
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i
Document 4:::
Ethanol fermentation, also called alcoholic fermentation, is a biological process which converts sugars such as glucose, fructose, and sucrose into cellular energy, producing ethanol and carbon dioxide as by-products. Because yeasts perform this conversion in the absence of oxygen, alcoholic fermentation is considered an anaerobic process. It also takes place in some species of fish (including goldfish and carp) where (along with lactic acid fermentation) it provides energy when oxygen is scarce.
Ethanol fermentation is the basis for alcoholic beverages, ethanol fuel and bread dough rising.
Biochemical process of fermentation of sucrose
The chemical equations below summarize the fermentation of sucrose (C12H22O11) into ethanol (C2H5OH). Alcoholic fermentation converts one mole of glucose into two moles of ethanol and two moles of carbon dioxide, producing two moles of ATP in the process.
C6H12O6 → 2 C2H5OH + 2 CO2
Sucrose is a sugar composed of a glucose linked to a fructose. In the first step of alcoholic fermentation, the enzyme invertase cleaves the glycosidic linkage between the glucose and fructose molecules.
C12H22O11 + H2O + invertase → 2 C6H12O6
Next, each glucose molecule is broken down into two pyruvate molecules in a process known as glycolysis. Glycolysis is summarized by the equation:
C6H12O6 + 2 ADP + 2 Pi + 2 NAD+ → 2 CH3COCOO− + 2 ATP + 2 NADH + 2 H2O + 2 H+
CH3COCOO− is pyruvate, and Pi is inorganic phosphate. Finally, pyruvate is converted to ethanol and CO2 in two steps, regenerating oxidized NAD+ needed for glycolysis:
1. CH3COCOO− + H+ → CH3CHO + CO2
catalyzed by pyruvate decarboxylase
2. CH3CHO + NADH + H+ → C2H5OH + NAD+
This reaction is catalyzed by alcohol dehydrogenase (ADH1 in baker's yeast).
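Putting the stoichiometry above together, the sketch below computes the ethanol and CO2 masses implied by complete fermentation of a given mass of sucrose. Molar masses are standard values; the calculation assumes every hexose is fermented and ignores yeast biomass growth.

```python
# Minimal sketch: mass yields from complete fermentation of sucrose.

MOLAR_MASS = {"sucrose": 342.30, "ethanol": 46.07, "co2": 44.01}  # g/mol

def ferment_sucrose(mass_sucrose_g: float):
    mol_sucrose = mass_sucrose_g / MOLAR_MASS["sucrose"]
    mol_hexose = 2 * mol_sucrose      # invertase: 1 sucrose -> 2 fermentable hexoses
    mol_ethanol = 2 * mol_hexose      # 1 hexose -> 2 ethanol + 2 CO2
    mol_co2 = 2 * mol_hexose
    return mol_ethanol * MOLAR_MASS["ethanol"], mol_co2 * MOLAR_MASS["co2"]

ethanol_g, co2_g = ferment_sucrose(100.0)
print(f"100 g sucrose -> ~{ethanol_g:.1f} g ethanol + ~{co2_g:.1f} g CO2")
```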
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What helps cells take up glucose from the blood?
A. hemoglobin
B. oxygen
C. insulin
D. estrogen
Answer:
|
|
sciq-10143
|
multiple_choice
|
Said to go hand-in-hand with science, what evolves as new materials, designs, and processes are invented?
|
[
"invention",
"technology",
"industry",
"biology"
] |
B
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school in order to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 1:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. An interest sparked in primary school can lead secondary school pupils to choose science A-levels, which in turn can lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 2:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It is the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 3:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 4:::
Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers.
There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by organizations such as the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (Humanities, Arts, and Social Sciences), rebranded in 2020 as SHAPE (Social Sciences, Humanities and the Arts for People and the Economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM.
Terminology
History
Previously referred to as SMET by the NSF, in the early 1990s the acronym STEM was used by a variety of educators, including Charles E. Vela, the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE). Moreover, the CAHSEE started a summer program for talented under-represented students in the Washington, D.C., area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering edu
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Said to go hand-in-hand with science, what evolves as new materials, designs, and processes are invented?
A. invention
B. technology
C. industry
D. biology
Answer:
|
|
sciq-6680
|
multiple_choice
|
What is the color of mercury oxide?
|
[
"green",
"yellow",
"orange",
"red"
] |
D
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school in order to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 1:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It is the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
Colored music notation is a technique used to enhance learning in young music students by adding visual color to written musical notation. It is based upon the concept that color can affect the observer in various ways, and combines this with standard learning of basic notation.
Basis
Viewing color has been widely shown to change an individual's emotional state and stimulate neurons. Experiments based on the Lüscher color test have shown that when individuals are required to contemplate pure red for varying lengths of time, the color has a decidedly stimulating effect on the nervous system: blood pressure increases, and respiration rate and heart rate both increase. Pure blue, on the other hand, has the reverse effect; observers experience a decline in blood pressure, heart rate, and breathing. Given these findings, it has been suggested that the influence of colored musical notation would be similar.
Music education
In music education, color is typically used in method books to highlight new material. Stimuli received through several senses excite more neurons in several localized areas of the cortex, thereby reinforcing the learning process and improving retention. Other researchers have corroborated this; Chute (1978) reported that "elementary students who viewed a colored version of an instructional film scored significantly higher on both immediate and delayed tests than did students who viewed a monochrome version".
Color studies
Effect on achievement
A researcher in this field, George L. Rogers, is the Director of Music Education at Westfield State College. He is also the author of 25 articles in publications that include the Music Educators Journal, The Instrumentalist, and the Journal of Research in Music Education. In 1991, Rogers conducted a study on the effect of color-coded notation on the music achievement of elementary instrumental students. Rogers states that the color-co
Document 4:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the color of mercury oxide?
A. green
B. yellow
C. orange
D. red
Answer:
|
|
sciq-10505
|
multiple_choice
|
Since electrons are charged, their intrinsic spin creates a what?
|
[
"intrinsic magnetic field",
"intrinsic electrical field",
"magnified rupulsed field",
"suppressed electrical field"
] |
A
|
Relevant Documents:
Document 0:::
Spintronics (a portmanteau meaning spin transport electronics), also known as spin electronics, is the study of the intrinsic spin of the electron and its associated magnetic moment, in addition to its fundamental electronic charge, in solid-state devices. The field of spintronics concerns spin-charge coupling in metallic systems; the analogous effects in insulators fall into the field of multiferroics.
Spintronics fundamentally differs from traditional electronics in that, in addition to charge state, electron spins are used as a further degree of freedom, with implications in the efficiency of data storage and transfer. Spintronic systems are most often realised in dilute magnetic semiconductors (DMS) and Heusler alloys and are of particular interest in the field of quantum computing and neuromorphic computing.
History
Spintronics emerged from discoveries in the 1980s concerning spin-dependent electron transport phenomena in solid-state devices. This includes the observation of spin-polarized electron injection from a ferromagnetic metal to a normal metal by Johnson and Silsbee (1985) and the discovery of giant magnetoresistance independently by Albert Fert et al. and Peter Grünberg et al. (1988). The origin of spintronics can be traced to the ferromagnet/superconductor tunneling experiments pioneered by Meservey and Tedrow and initial experiments on magnetic tunnel junctions by Julliere in the 1970s. The use of semiconductors for spintronics began with the theoretical proposal of a spin field-effect-transistor by Datta and Das in 1990 and of the electric dipole spin resonance by Rashba in 1960.
Theory
The spin of the electron is an intrinsic angular momentum that is separate from the angular momentum due to its orbital motion. The magnitude of the projection of the electron's spin along an arbitrary axis is , implying that the electron acts as a fermion by the spin-statistics theorem. Like orbital angular momentum, the spin has an associated magnetic moment,
Document 1:::
In particle physics, spin polarization is the degree to which the spin, i.e., the intrinsic angular momentum of elementary particles, is aligned with a given direction. This property may pertain to the spin, hence to the magnetic moment, of conduction electrons in ferromagnetic metals, such as iron, giving rise to spin-polarized currents. It may refer to (static) spin waves, preferential correlation of spin orientation with ordered lattices (semiconductors or insulators).
It may also pertain to beams of particles, produced for particular aims, such as polarized neutron scattering or muon spin spectroscopy. Spin polarization of electrons or of nuclei, often called simply magnetization, is also produced by the application of a magnetic field. Curie law is used to produce an induction signal in electron spin resonance (ESR or EPR) and in nuclear magnetic resonance (NMR).
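In its simplest form, the Curie law mentioned above gives an equilibrium magnetization proportional to B/T, which is why the induction signal grows at low temperature and high field. A minimal sketch of that scaling follows; the Curie constant used here is a placeholder, not a material value.

```python
# Minimal sketch: Curie-law scaling M = C * B / T (C is an arbitrary placeholder).

def curie_magnetization(b_tesla: float, temperature_k: float, curie_const: float = 1.0) -> float:
    return curie_const * b_tesla / temperature_k

for t in (4.2, 77.0, 300.0):  # liquid He, liquid N2, room temperature
    print(f"T = {t:6.1f} K: M = {curie_magnetization(1.0, t):.4f} (arbitrary units)")
```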
Spin polarization is also important for spintronics, a branch of electronics. Magnetic semiconductors are being researched as possible spintronic materials.
The spin of free electrons is measured either by a LEED image from a clean tungsten crystal (SPLEED) or by an electron microscope composed purely of electrostatic lenses with a gold foil as a sample. Back-scattered electrons are decelerated by annular optics and focused onto a ring-shaped electron multiplier at about 15°. The position on the ring is recorded. This whole device is called a Mott detector. Depending on their spin, the electrons have a chance of hitting the ring at different positions. About 1% of the electrons are scattered in the foil; of these, 1% are collected by the detector, and about 30% of those hit the detector at the wrong position. Both devices work due to spin-orbit coupling.
The circular polarization of electromagnetic fields is due to spin polarization of their constituent photons.
In the most generic context, spin polarization is any alignment of the components of a non-scalar (vectorial, tensorial, spinor) field w
Document 2:::
Electron magnetic circular dichroism (EMCD) (also known as electron energy-loss magnetic chiral dichroism) is the EELS equivalent of XMCD.
The effect was first proposed in 2003 and experimentally confirmed in 2006 by the group of Prof. Peter Schattschneider at the Vienna University of Technology.
Similarly to XMCD, EMCD is a difference spectrum of two EELS spectra taken in a magnetic field with opposite helicities. Under appropriate scattering conditions virtual photons with specific circular polarizations can be absorbed, giving rise to spectral differences. The largest difference is expected between the case where one virtual photon with left circular polarization and one with right circular polarization are absorbed. By closely analyzing the difference in the EMCD spectrum, information can be obtained on the magnetic properties of the atom, such as its spin and orbital magnetic moment.
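Conceptually, the EMCD signal is simply the pointwise difference of two EELS spectra acquired with opposite helicities. The sketch below illustrates this with synthetic Gaussian L-edge profiles rather than measured data; the peak position, widths and amplitudes are arbitrary placeholders.

```python
# Minimal sketch: EMCD as the difference of two synthetic EELS spectra.
import numpy as np

energy = np.linspace(690.0, 730.0, 400)  # eV, around the Fe L-edge (~708 eV)

def synthetic_edge(e, amplitude, center=708.0, width=1.5):
    return amplitude * np.exp(-((e - center) ** 2) / (2.0 * width ** 2))

spectrum_plus = synthetic_edge(energy, 1.00)   # one helicity
spectrum_minus = synthetic_edge(energy, 0.85)  # opposite helicity
emcd = spectrum_plus - spectrum_minus          # the dichroic difference signal

print(f"Peak EMCD signal: {emcd.max():.3f} at {energy[emcd.argmax()]:.1f} eV")
```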
In the case of transition metals such as iron, cobalt, and nickel, the absorption spectra for EMCD are usually measured at the L-edge. This corresponds to the excitation of a 2p electron to a 3d state by the absorption of a virtual photon providing the ionisation energy. The absorption is visible as a spectral feature in the electron energy loss spectrum (EELS). Because the 3d electron states are the origin of the magnetic properties of the elements, the spectra contain information on the magnetic properties. Moreover, since the energy of each transition depends on the atomic number, the information obtained is element specific, that is, it is possible to distinguish the magnetic properties of a given element by examining the EMCD spectrum at its characteristic energy (708 eV for iron).
Since the same electronic transitions are probed in both EMCD and XMCD, the information obtained is the same. However, EMCD has a higher spatial resolution and depth sensitivity than its X-ray counterpart. Moreover, EMCD can be measured on any TEM equipped with an EELS detector, whereas XMCD is
Document 3:::
Spin is an intrinsic form of angular momentum carried by elementary particles, and thus by composite particles such as hadrons, atomic nuclei, and atoms. Spin should not be conceptualized as involving the "rotation" of a particle's "internal mass", as ordinary use of the word may suggest: spin is a quantized property of waves.
The existence of electron spin angular momentum is inferred from experiments, such as the Stern–Gerlach experiment, in which silver atoms were observed to possess two possible discrete angular momenta despite having no orbital angular momentum. The existence of the electron spin can also be inferred theoretically from the spin–statistics theorem and from the Pauli exclusion principle—and vice versa, given the particular spin of the electron, one may derive the Pauli exclusion principle.
Spin is described mathematically as a vector for some particles such as photons, and as spinors and bispinors for other particles such as electrons. Spinors and bispinors behave similarly to vectors: they have definite magnitudes and change under rotations; however, they use an unconventional "direction". All elementary particles of a given kind have the same magnitude of spin angular momentum, though its direction may change. These are indicated by assigning the particle a spin quantum number.
The SI unit of spin is the same as classical angular momentum (i.e., N·m·s, J·s, or kg·m2·s−1). In practice, spin is usually given as a dimensionless spin quantum number by dividing the spin angular momentum by the reduced Planck constant , which has the same dimensions as angular momentum. Often, the "spin quantum number" is simply called "spin".
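As a numerical illustration of the two preceding paragraphs, the sketch below evaluates the electron's spin angular momentum magnitude, ħ√(s(s+1)) with s = 1/2, and the magnitude of the z-projection of its spin magnetic moment; the constants are rounded CODATA values.

```python
# Minimal sketch: electron spin angular momentum and spin magnetic moment.
import math

HBAR = 1.054_571_8e-34   # J*s, reduced Planck constant
MU_B = 9.274_010_1e-24   # J/T, Bohr magneton
G_S = 2.002_319          # electron spin g-factor

s = 0.5  # electron spin quantum number
spin_magnitude = HBAR * math.sqrt(s * (s + 1))  # |S| = hbar * sqrt(s(s+1))
mu_z = G_S * MU_B * s                           # |mu_z| = g_s * mu_B * m_s

print(f"|S|    = {spin_magnitude:.3e} J*s")
print(f"|mu_z| = {mu_z:.3e} J/T  (~1 Bohr magneton)")
```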
Relation to classical rotation
The very earliest models for electron spin imagined a rotating charged mass, but this model fails when examined in detail: the required space distribution does not match limits on the electron radius: the required rotation speed exceeds the speed of light. In the Standard Model, the fundament
Document 4:::
Spin engineering describes the control and manipulation of quantum spin systems to develop devices and materials. This includes the use of the spin degrees of freedom as a probe for spin based phenomena.
Because of the basic importance of quantum spin for physical and chemical processes, spin engineering is relevant for a wide range of scientific and technological applications. Current examples range from Bose–Einstein condensation to spin-based data storage and reading in state-of-the-art hard disk drives, as well as from powerful analytical tools like nuclear magnetic resonance spectroscopy and electron paramagnetic resonance spectroscopy to the development of magnetic molecules as qubits and magnetic nanoparticles. In addition, spin engineering exploits the functionality of spin to design materials with novel properties as well as to provide a better understanding and advanced applications of conventional material systems. Many chemical reactions are devised to create bulk materials or single molecules with well defined spin properties, such as a single-molecule magnet.
The aim of this article is to provide an outline of fields of research and development where the focus is on the properties and applications of quantum spin.
Introduction
As spin is one of the fundamental quantum properties of elementary particles it is relevant for a large range of physical and chemical phenomena. For instance, the spin of the electron plays a key role in the electron configuration of atoms which is the basis of the periodic table of elements. The origin of ferromagnetism is also closely related to the magnetic moment associated with the spin and the spin-dependent Pauli exclusion principle. Thus, the engineering of ferromagnetic materials like mu-metals or Alnico at the beginning of the last century can be considered as early examples of spin engineering, although the concept of spin was not yet known at that time. Spin engineering in its generic sense became possible onl
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Since electrons are charged, their intrinsic spin creates a what?
A. intrinsic magnetic field
B. intrinsic electrical field
C. magnified rupulsed field
D. suppressed electrical field
Answer:
|
|
sciq-1120
|
multiple_choice
|
When bread bakes, yeast releases which gas?
|
[
"carbon dioxide",
"hydrogen",
"oxygen",
"carbon monoxide"
] |
A
|
Relevant Documents:
Document 0:::
In cooking, proofing (also called proving) is a step in the preparation of yeast bread and other baked goods in which the dough is allowed to rest and rise a final time before baking. During this rest period, yeast ferments the dough and produces gases, thereby leavening the dough.
In contrast, proofing or blooming yeast (as opposed to proofing the dough) may refer to the process of first suspending yeast in warm water, a necessary hydration step when baking with active dry yeast. Proofing can also refer to the process of testing the viability of dry yeast by suspending it in warm water with carbohydrates (sugars). If the yeast is still alive, it will feed on the sugar and produce a visible layer of foam on the surface of the water mixture.
Fermentation rest periods are not always explicitly named, and can appear in recipes as "Allow dough to rise." When they are named, terms include "bulk fermentation", "first rise", "second rise", "final proof" and "shaped proof".
Dough processes
The process of making yeast-leavened bread involves a series of alternating work and rest periods. Work periods occur when the dough is manipulated by the baker. Some work periods are called mixing, kneading, and folding, as well as division, shaping, and panning. Work periods are typically followed by rest periods, which occur when dough is allowed to sit undisturbed. Particular rest periods include, but are not limited to, autolyse, bulk fermentation and proofing. Proofing, also sometimes called final fermentation, is the specific term for allowing dough to rise after it has been shaped and before it is baked.
Some breads begin mixing with an autolyse. This refers to a period of rest after the initial mixing of flour and water, a rest period that occurs sequentially before the addition of yeast, salt and other ingredients. This rest period allows for better absorption of water and helps the gluten and starches to align. The autolyse is credited to Raymond Calvel, who recommende
Document 1:::
Baker's yeast is the common name for the strains of yeast commonly used in baking bread and other bakery products, serving as a leavening agent which causes the bread to rise (expand and become lighter and softer) by converting the fermentable sugars present in the dough into carbon dioxide and ethanol. Baker's yeast is of the species Saccharomyces cerevisiae, and is the same species (but a different strain) as the kind commonly used in alcoholic fermentation, which is called brewer's yeast or the deactivated form nutritional yeast. Baker's yeast is also a single-cell microorganism found on and around the human body.
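As an illustrative aside (ours, not the source's): the conversion of fermentable sugars into carbon dioxide and ethanol described above follows the standard alcoholic-fermentation equation C6H12O6 → 2 C2H5OH + 2 CO2. A minimal Python sketch of the mass balance, using textbook molar masses:

```python
# Alcoholic fermentation: C6H12O6 -> 2 C2H5OH + 2 CO2
M_GLUCOSE = 180.16  # g/mol
M_ETHANOL = 46.07   # g/mol
M_CO2 = 44.01       # g/mol

def fermentation_products(glucose_g):
    """Grams of (CO2, ethanol) from complete fermentation of glucose."""
    mol = glucose_g / M_GLUCOSE
    return 2 * mol * M_CO2, 2 * mol * M_ETHANOL

co2_g, ethanol_g = fermentation_products(10.0)
print(f"{co2_g:.2f} g CO2, {ethanol_g:.2f} g ethanol")  # ~4.89 g, ~5.11 g
```

Roughly half the sugar mass ends up as CO2, which is the gas that leavens the dough.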
The use of steamed or boiled potatoes, water from potato boiling, or sugar in a bread dough provides food for the growth of yeasts; however, too much sugar will dehydrate them. Yeast growth is inhibited by both salt and sugar, but more so by salt than sugar. Some sources say fats, such as butter and eggs, slow down yeast growth; others say the effect of fat on dough remains unclear, presenting evidence that small amounts of fat are beneficial for baked bread volume.
Saccharomyces exiguus (also known as S. minor) is a wild yeast found on plants, grains, and fruits that is occasionally used for baking; however, in general, it is not used in a pure form but comes from being propagated in a sourdough starter.
History
It is not known when yeast was first used to bake bread; the earliest definite records come from Ancient Egypt. Researchers speculate that a mixture of flour meal and water was left longer than usual on a warm day and the yeasts that occur in natural contaminants of the flour caused it to ferment before baking. The resulting bread would have been lighter and tastier than the previous hard flatbreads. It is generally assumed that the earliest forms of leavening were likely very similar to modern sourdough; the leavening action of yeast would have been discovered from its action on flatbread doughs and would have been either cultivated separa
Document 2:::
A ferment (also known as bread starter) is a fermentation starter used in indirect methods of bread making. It may also be called mother dough.
A ferment and a longer fermentation in the bread-making process have several benefits: there is more time for yeast, enzyme and, if sourdough, bacterial actions on the starch and proteins in the dough; this in turn improves the keeping time of the baked bread, and it creates greater complexities of flavor. Though ferments have declined in popularity as direct additions of yeast in bread recipes have streamlined the process on a commercial level, ferments of various forms are widely used in artisanal bread recipes and formulas.
Classifications
In general, there are two ferment varieties: sponges, based on baker's yeast, and the starters of sourdough, based on wild yeasts and lactic acid bacteria. There are several kinds of pre-ferment commonly named and used in bread baking. They all fall on a varying process and time spectrum, from a mature mother dough of many generations of age to a first-generation sponge based on a fresh batch of baker's yeast:
Biga and poolish (or pouliche) are terms used in Italian and French baking, respectively, for sponges made with domestic baker's yeast. Poolish is a fairly wet sponge (typically one-to-one, this is made with a one-part-flour-to-one-part-water ratio by weight), and it is called biga liquida, whereas the "normal" biga is usually drier. Bigas can be held longer at their peak than wetter sponges, while a poolish is one known technique to increase a dough's extensibility.
Sourdough starter is likely the oldest, being reliant on organisms present in the grain and local environment. In general, these starters have fairly complex microbiological makeups, the most notable including wild yeasts, lactobacillus, and acetobacteria in symbiotic relationship referred to as a SCOBY. They are often maintained over long periods of time. For example, the Boudin Bakery in San Francisco has used t
Document 3:::
The Chorleywood bread process (CBP) is a method of efficient dough production to make yeasted bread quickly, producing a soft, fluffy loaf. Compared to traditional bread-making processes, CBP uses more yeast, added fats, chemicals, and high-speed mixing to allow the dough to be made with lower-protein wheat, and produces bread in a shorter time. It was developed by Bill Collins, George Elton and Norman Chamberlain of the British Baking Industries Research Association at Chorleywood in 1961. Some 80% of bread made in the United Kingdom is made using the process.
For millennia, bread had been made from wheat flour by manually kneading dough with a raising agent (typically yeast) leaving it to ferment before it was baked. In 1862 a cheaper industrial-scale process was developed by John Dauglish, using water with dissolved carbon dioxide instead of yeast. Dauglish's method, used by the Aerated Bread Company that he set up, dominated commercial bread baking for a century until the yeast-based Chorleywood process was developed.
Some protein is lost during traditional bulk fermentation of bread; this does not occur to the same degree in mechanically developed doughs, allowing CBP to use lower-protein wheat. This feature had an important impact in the United Kingdom where, at the time, few domestic wheat varieties were of sufficient quality to make high-quality bread; the CBP permitted a much greater proportion of lower-protein domestic wheat to be used in the grist.
Description
The Chorleywood bread process allows the use of lower-protein wheats and reduces processing time, the system being able to produce a loaf of bread from flour to sliced and packaged form in about three and a half hours. This is achieved through the addition of Vitamin C, fat, yeast, and intense mechanical working by high-speed mixers, not feasible in a small-scale kitchen.
Flour, water, yeast, salt, and fat (if used) are mixed together, along with minor ingredients common to many bread-making techniques, suc
Document 4:::
In cooking, a leavening agent or raising agent, also called a leaven or leavener, is any one of a number of substances used in doughs and batters that cause a foaming action (gas bubbles) that lightens and softens the mixture. An alternative or supplement to leavening agents is mechanical action by which air is incorporated (i.e. kneading). Leavening agents can be biological or synthetic chemical compounds. The gas produced is often carbon dioxide, or occasionally hydrogen.
When a dough or batter is mixed, the starch in the flour and the water in the dough form a matrix (often supported further by proteins like gluten or polysaccharides, such as pentosans or xanthan gum). The starch then gelatinizes and sets, leaving gas bubbles that remain.
Biological leavening agents
Saccharomyces cerevisiae producing carbon dioxide found in:
baker's yeast
Beer barm (unpasteurised—live yeast)
ginger beer
kefir
sourdough starter
Clostridium perfringens producing hydrogen found in salt-rising bread
Chemical leavening agents
Chemical leavens are mixtures or compounds that release gases when they react with each other, with moisture, or with heat. Most are based on a combination of acid (usually a low molecular weight organic acid) and a bicarbonate salt. After they act, these compounds leave behind a chemical salt. Chemical leavens are used in quick breads and cakes, as well as cookies and numerous other applications where a long biological fermentation is impractical or undesirable.
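To make the gas-release arithmetic concrete (our illustration, with assumed conditions, not from the source): one mole of bicarbonate can liberate at most one mole of CO2 when fully neutralized by acid (NaHCO3 + H+ → Na+ + H2O + CO2), and the ideal-gas law then gives an upper bound on the gas volume.

```python
# Upper-bound CO2 volume from sodium bicarbonate fully neutralized by acid
M_NAHCO3 = 84.01  # g/mol
R = 8.314         # J/(mol*K)
P = 101325.0      # Pa, assumed ambient pressure

def co2_volume_litres(nahco3_g, temp_c=25.0):
    """Ideal-gas volume (L) of CO2 released by fully neutralized baking soda."""
    n = nahco3_g / M_NAHCO3             # mol CO2, 1:1 with bicarbonate
    return n * R * (temp_c + 273.15) / P * 1000.0

print(f"{co2_volume_litres(5.0):.2f} L")  # ~1.46 L from 5 g at 25 degrees C
```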
History
Chemical leavening using pearl ash as a leavening agent was mentioned by Amelia Simmons in her American Cookery, published in 1796.
Since chemical expertise is required to create a functional chemical leaven without producing off-flavors from the chemical precursors involved, such substances are often mixed into premeasured combinations for maximum results. These are generally referred to as baking powders. Sour milk and carbonates were used in the 1800s. The breakthr
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
When bread bakes, yeast releases which gas?
A. carbon dioxide
B. hydrogen
C. oxygen
D. carbon monoxide
Answer:
|
|
sciq-3276
|
multiple_choice
|
Chemical reactions always involve what?
|
[
"physical change",
"energy",
"heating",
"fuel"
] |
B
|
Relevant Documents:
Document 0:::
An elementary reaction is a chemical reaction in which one or more chemical species react directly to form products in a single reaction step and with a single transition state. In practice, a reaction is assumed to be elementary if no reaction intermediates have been detected or need to be postulated to describe the reaction on a molecular scale. An apparently elementary reaction may be in fact a stepwise reaction, i.e. a complicated sequence of chemical reactions, with reaction intermediates of variable lifetimes.
In a unimolecular elementary reaction, a molecule A dissociates or isomerises to form the product(s): A → products.
At constant temperature, the rate of such a reaction is proportional to the concentration of the species A: rate = k[A].
In a bimolecular elementary reaction, two atoms, molecules, ions or radicals, A and B, react together to form the product(s): A + B → products.
The rate of such a reaction, at constant temperature, is proportional to the product of the concentrations of the species A and B: rate = k[A][B].
The rate expression for an elementary bimolecular reaction is sometimes referred to as the Law of Mass Action as it was first proposed by Guldberg and Waage in 1864. An example of this type of reaction is a cycloaddition reaction.
This rate expression can be derived from first principles by using collision theory for ideal gases. For the case of dilute fluids equivalent results have been obtained from simple probabilistic arguments.
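As an illustrative aside (ours; the rate constant and initial concentrations are assumed values, not from the source), a minimal numerical sketch of the bimolecular rate law above, integrating d[A]/dt = d[B]/dt = −k[A][B] with a simple Euler step:

```python
# Euler integration of a bimolecular elementary reaction A + B -> products
k = 0.5           # L/(mol*s), assumed rate constant
a, b = 1.0, 0.8   # mol/L, assumed initial concentrations
dt = 0.01         # s, time step

for _ in range(1000):      # simulate 10 seconds
    rate = k * a * b       # law of mass action: rate = k[A][B]
    a -= rate * dt         # A and B are consumed 1:1
    b -= rate * dt

print(f"[A] = {a:.3f} M, [B] = {b:.3f} M after 10 s")
```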
According to collision theory the probability of three chemical species reacting simultaneously with each other in a termolecular elementary reaction is negligible. Hence such termolecular reactions are commonly referred to as non-elementary reactions and can be broken down into a more fundamental set of bimolecular reactions, in agreement with the law of mass action. It is not always possible to derive overall reaction schemes, but solutions based on rate equations are often possible in terms of steady-state or Michaelis-Menten approximations.
Document 1:::
Physical biochemistry is a branch of biochemistry that deals with the theory, techniques, and methodology used to study the physical chemistry of biomolecules.
It also deals with the mathematical approaches for the analysis of biochemical reactions and the modelling of biological systems. It provides insight into the structure of macromolecules, and how chemical structure influences the physical properties of a biological substance.
It involves the use of physics, physical chemistry principles, and methodology to study biological systems. It employs various physical chemistry techniques such as chromatography, spectroscopy, electrophoresis, X-ray crystallography, electron microscopy, and hydrodynamics.
See also
Physical chemistry
Document 2:::
Chemical reaction network theory is an area of applied mathematics that attempts to model the behaviour of real-world chemical systems. Since its foundation in the 1960s, it has attracted a growing research community, mainly due to its applications in biochemistry and theoretical chemistry. It has also attracted interest from pure mathematicians due to the interesting problems that arise from the mathematical structures involved.
History
Dynamical properties of reaction networks were studied in chemistry and physics after the invention of the law of mass action. The essential steps in this study were introduction of detailed balance for the complex chemical reactions by Rudolf Wegscheider (1901), development of the quantitative theory of chemical chain reactions by Nikolay Semyonov (1934), development of kinetics of catalytic reactions by Cyril Norman Hinshelwood, and many other results.
Three eras of chemical dynamics can be revealed in the flux of research and publications. These eras may be associated with leaders: the first is the van 't Hoff era, the second may be called the Semenov–Hinshelwood era and the third is definitely the Aris era.
The "eras" may be distinguished based on the main focuses of the scientific leaders:
van’t Hoff was searching for the general law of chemical reaction related to specific chemical properties. The term "chemical dynamics" belongs to van’t Hoff.
The Semenov-Hinshelwood focus was an explanation of critical phenomena observed in many chemical systems, in particular in flames. A concept chain reactions elaborated by these researchers influenced many sciences, especially nuclear physics and engineering.
Aris’ activity was concentrated on the detailed systematization of mathematical ideas and approaches.
The mathematical discipline "chemical reaction network theory" was originated by Rutherford Aris, a famous expert in chemical engineering, with the support of Clifford Truesdell, the founder and editor-in-chief of the journ
Document 3:::
In molecular biology, biosynthesis is a multi-step, enzyme-catalyzed process where substrates are converted into more complex products in living organisms. In biosynthesis, simple compounds are modified, converted into other compounds, or joined to form macromolecules. This process often consists of metabolic pathways. Some of these biosynthetic pathways are located within a single cellular organelle, while others involve enzymes that are located within multiple cellular organelles. Examples of these biosynthetic pathways include the production of lipid membrane components and nucleotides. Biosynthesis is usually synonymous with anabolism.
The prerequisite elements for biosynthesis include: precursor compounds, chemical energy (e.g. ATP), and catalytic enzymes which may need coenzymes (e.g. NADH, NADPH). These elements create monomers, the building blocks for macromolecules. Some important biological macromolecules include: proteins, which are composed of amino acid monomers joined via peptide bonds, and DNA molecules, which are composed of nucleotides joined via phosphodiester bonds.
Properties of chemical reactions
Biosynthesis occurs due to a series of chemical reactions. For these reactions to take place, the following elements are necessary:
Precursor compounds: these compounds are the starting molecules or substrates in a reaction. These may also be viewed as the reactants in a given chemical process.
Chemical energy: chemical energy can be found in the form of high energy molecules. These molecules are required for energetically unfavorable reactions. Furthermore, the hydrolysis of these compounds drives a reaction forward. High energy molecules, such as ATP, have three phosphates. Often, the terminal phosphate is split off during hydrolysis and transferred to another molecule.
Catalysts: these may be for example metal ions or coenzymes and they catalyze a reaction by increasing the rate of the reaction and lowering the activation energy.
In the sim
Document 4:::
Activation energy asymptotics (AEA), also known as large activation energy asymptotics, is an asymptotic analysis used in the combustion field utilizing the fact that the reaction rate is extremely sensitive to temperature changes due to the large activation energy of the chemical reaction.
History
The techniques were pioneered by the Russian scientists Yakov Borisovich Zel'dovich, David A. Frank-Kamenetskii and co-workers in the 30s, in their study on premixed flames and thermal explosions (Frank-Kamenetskii theory), but not popular to western scientists until the 70s. In the early 70s, due to the pioneering work of Williams B. Bush, Francis E. Fendell, Forman A. Williams, Amable Liñán and John F. Clarke, it became popular in western community and since then it was widely used to explain more complicated problems in combustion.
Method overview
In combustion processes, the reaction rate is dependent on temperature in the following form (Arrhenius law): ω(T) ∝ exp(−E_a/(R T)),
where E_a is the activation energy and R is the universal gas constant. In general, the condition E_a/(R T_b) ≫ 1 is satisfied, where T_b is the burnt gas temperature. This condition forms the basis for activation energy asymptotics. Denoting T_u for the unburnt gas temperature, one can define the Zel'dovich number and heat release parameter as follows: β = (E_a/(R T_b)) · (T_b − T_u)/T_b and α = (T_b − T_u)/T_b.
In addition, if we define a non-dimensional temperature θ = (T − T_u)/(T_b − T_u),
such that θ approaches zero in the unburnt region and approaches unity in the burnt gas region (in other words, 0 ≤ θ ≤ 1), then the ratio of the reaction rate at any temperature to the reaction rate at the burnt gas temperature is given by ω(T)/ω(T_b) = exp(−β(1 − θ)/(1 − α(1 − θ))).
Now in the limit of β → ∞ (large activation energy) with α ~ O(1), the reaction rate is exponentially small, i.e. O(e^−β), and negligible everywhere, but non-negligible when 1 − θ ~ O(1/β). In other words, the reaction rate is negligible everywhere, except in a small region very close to the burnt gas temperature, where 1 − θ ~ O(1/β). Thus, in solving the conservation equations, one identifies two different regimes, at leading order,
Outer convective-diffusive zone
I
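As an illustrative aside (ours, not part of the source excerpt; the β and α values are assumed), a quick numerical check of the rate ratio reconstructed above shows that the reaction rate is negligible except in a thin layer near the burnt-gas temperature θ = 1:

```python
import math

# w(theta)/w(1) = exp(-beta*(1 - theta) / (1 - alpha*(1 - theta)))
beta, alpha = 10.0, 0.85  # assumed Zel'dovich number and heat-release parameter

def rate_ratio(theta):
    return math.exp(-beta * (1 - theta) / (1 - alpha * (1 - theta)))

for theta in (0.0, 0.5, 0.9, 0.99, 1.0):
    print(f"theta = {theta:4.2f} -> w/w_b = {rate_ratio(theta):.3e}")
# Far from theta = 1 the ratio is O(exp(-beta)); it approaches 1 only near
# the burnt-gas temperature, which is what the asymptotic analysis exploits.
```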
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Chemical reactions always involve what?
A. physical change
B. energy
C. heating
D. fuel
Answer:
|
|
ai2_arc-1107
|
multiple_choice
|
Why are the rocks and pebbles found on riverbeds usually smooth?
|
[
"The rocks and pebbles in riverbeds are not very old.",
"The rocks and pebbles rub against each other as water flows over them.",
"Rivers can only flow over smooth rocks and pebbles.",
"Organisms in the rivers break down the rocks and pebbles."
] |
B
|
Relevant Documents:
Document 0:::
Large woody debris (LWD) are the logs, sticks, branches, and other wood that falls into streams and rivers. This debris can influence the flow and the shape of the stream channel. Large woody debris, grains, and the shape of the bed of the stream are the three main providers of flow resistance, and are thus, a major influence on the shape of the stream channel. Some stream channels have less LWD than they would naturally because of removal by watershed managers for flood control and aesthetic reasons.
The study of woody debris is important for its forestry management implications. Plantation thinning can reduce the potential for recruitment of LWD into proximal streams. The presence of large woody debris is important in the formation of pools which serve as salmon habitat in the Pacific Northwest. Entrainment of the large woody debris in a stream can also cause erosion and scouring around and under the LWD. The amount of scouring and erosion is determined by the ratio of the diameter of the piece to the depth of the stream, and by the embedding and orientation of the piece.
Influence on stream flow around bends
Large woody debris slow the flow through a bend in the stream, while accelerating flow in the constricted area downstream of the obstruction.
See also
Beaver dam
Coarse woody debris
Driftwood
Log jam
Stream restoration
Document 1:::
Hydraulic roughness is the measure of the amount of frictional resistance water experiences when passing over land and channel features.
One roughness coefficient is Manning's n-value. Manning's n is used extensively around the world to predict the degree of roughness in channels. Flow velocity is strongly dependent on the resistance to flow. An increase in this n value will cause a decrease in the velocity of water flowing across a surface.
Manning's n
The value of Manning's n is affected by many variables. Factors like suspended load, sediment grain size, presence of bedrock or boulders in the stream channel, variations in channel width and depth, and overall sinuosity of the stream channel can all affect Manning's n value. Biological factors have the greatest overall effect on Manning's n; bank stabilization by vegetation, height of grass and brush across a floodplain, and stumps and logs creating natural dams are the main observable influences.
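The paragraph above states that increasing n decreases velocity; Manning's formula, which is standard in open-channel hydraulics though not quoted in the excerpt, makes this quantitative. A minimal Python sketch with assumed channel values (ours, for illustration only):

```python
# Manning's formula (SI units): V = (1/n) * R**(2/3) * sqrt(S)
# R = hydraulic radius (m), S = channel slope. All numeric values are assumed.

def manning_velocity(n, radius_m, slope):
    """Mean flow velocity (m/s) predicted by Manning's formula."""
    return (1.0 / n) * radius_m ** (2.0 / 3.0) * slope ** 0.5

smooth = manning_velocity(n=0.025, radius_m=1.2, slope=0.002)  # clean channel
rough = manning_velocity(n=0.050, radius_m=1.2, slope=0.002)   # woody debris
print(f"{smooth:.2f} m/s vs {rough:.2f} m/s")  # doubling n halves the velocity
```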
Biological Importance
Recent studies have found a relationship between hydraulic roughness and salmon spawning habitat; “bed-surface grain size is responsive to hydraulic roughness caused by bank irregularities, bars, and wood debris… We find that wood debris plays an important role at our study sites, not only providing hydraulic roughness but also influencing pool spacing, frequency of textural patches, and the amplitude and wavelength of bank and bar topography and their consequent roughness. Channels with progressively greater hydraulic roughness have systematically finer bed surfaces, presumably due to reduced bed shear stress, resulting in lower channel competence and diminished bed load transport capacity, both of which promote textural fining”. Textural fining of stream beds can effect more than just salmon spawning habitats, “bar and wood roughness create a greater variety of textural patches, offering a range of aquatic habitats that may promote biologic diversity or be of use to specific animals at differe
Document 2:::
Sediment transport is the movement of solid particles (sediment), typically due to a combination of gravity acting on the sediment, and the movement of the fluid in which the sediment is entrained. Sediment transport occurs in natural systems where the particles are clastic rocks (sand, gravel, boulders, etc.), mud, or clay; the fluid is air, water, or ice; and the force of gravity acts to move the particles along the sloping surface on which they are resting. Sediment transport due to fluid motion occurs in rivers, oceans, lakes, seas, and other bodies of water due to currents and tides. Transport is also caused by glaciers as they flow, and on terrestrial surfaces under the influence of wind. Sediment transport due only to gravity can occur on sloping surfaces in general, including hillslopes, scarps, cliffs, and the continental shelf—continental slope boundary.
Sediment transport is important in the fields of sedimentary geology, geomorphology, civil engineering, hydraulic engineering and environmental engineering (see applications, below). Knowledge of sediment transport is most often used to determine whether erosion or deposition will occur, the magnitude of this erosion or deposition, and the time and distance over which it will occur.
Mechanisms
Aeolian
Aeolian or eolian (depending on the parsing of æ) is the term for sediment transport by wind. This process results in the formation of ripples and sand dunes. Typically, the size of the transported sediment is fine sand (<1 mm) and smaller, because air is a fluid with low density and viscosity, and can therefore not exert very much shear on its bed.
Bedforms are generated by aeolian sediment transport in the terrestrial near-surface environment. Ripples and dunes form as a natural self-organizing response to sediment transport.
Aeolian sediment transport is common on beaches and in the arid regions of the world, because it is in these environments that vegetation does not prevent the presence and motion
Document 3:::
A grassed waterway is a native grassland strip of green belt, up to 48 metres (157 ft) wide, generally installed in the thalweg, the deepest continuous line along a valley or watercourse, of a cultivated dry valley in order to control erosion. A study carried out on a grassed waterway over 8 years in Bavaria showed that it can lead to several other types of positive impacts, e.g. on biodiversity.
Distinctions
Confusion between "grassed waterway" and "vegetative filter strips" should be avoided. The latter are generally narrower (only a few metres wide) and are typically installed along rivers as well as along or within cultivated fields. However, "buffer strip" can be a synonym when shrubs and trees are added to the plant component, as can "riparian zone".
Runoff and erosion mitigation
Runoff generated on cropland during storms or long winter rains concentrates in the thalweg where it can lead to rill or gully erosion.
Rills and gullies further concentrate runoff and speed up its transfer, which can worsen damage occurring downstream. This can result in a muddy flood.
In this context, a grassed waterway allows increasing soil cohesion and roughness. It also prevents the formation of rills and gullies. Furthermore, it can slow down runoff and allow its re-infiltration during long winter rains. In contrast, its infiltration capacity is generally not sufficient to reinfiltrate runoff produced by heavy spring and summer storms. It can therefore be useful to combine it with extra measures, like the installation of earthen dams across the grassed waterway, in order to buffer runoff temporarily.
Document 4:::
Clay-water interaction is an all-inclusive term to describe various progressive interactions between clay minerals and water. In the dry state, clay packets exist in face-to-face stacks like a deck of playing cards, but clay packets begin to change when exposed to water. Five descriptive terms describe the progressive interactions that can occur in a clay-water system, such as a water mud.
(1) Hydration occurs as clay packets absorb water and swell.
(2) Dispersion (or disaggregation) causes clay platelets to break apart and disperse into the water due to loss of attractive forces as water forces the platelets farther apart.
(3) Flocculation begins when mechanical shearing stops and platelets previously dispersed come together due to the attractive force of surface charges on the platelets.
(4) Deflocculation, or peptization, the opposite effect, occurs by addition of chemical deflocculant to flocculated mud; the positive edge charges are covered and attraction forces are greatly reduced.
(5) Aggregation, a result of ionic or thermal conditions, alters the hydrational layer around clay platelets, removes the deflocculant from positive edge charges and allows platelets to assume a face-to-face structure.
See also
Dispersity
Quick clay behaviour
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Why are the rocks and pebbles found on riverbeds usually smooth?
A. The rocks and pebbles in riverbeds are not very old.
B. The rocks and pebbles rub against each other as water flows over them.
C. Rivers can only flow over smooth rocks and pebbles.
D. Organisms in the rivers break down the rocks and pebbles.
Answer:
|
|
sciq-9823
|
multiple_choice
|
What is the clear, protective covering on the outside of the eye?
|
[
"cornea",
"retina",
"vitreous fluid",
"iris"
] |
A
|
Relevant Documents:
Document 0:::
The ocular immune system protects the eye from infection and regulates healing processes following injuries. The interior of the eye lacks lymph vessels but is highly vascularized, and many immune cells reside in the uvea, including mostly macrophages, dendritic cells, and mast cells. These cells fight off intraocular infections, and intraocular inflammation can manifest as uveitis (including iritis) or retinitis. The cornea of the eye is immunologically a very special tissue. Its constant exposure to the exterior world means that it is vulnerable to a wide range of microorganisms while its moist mucosal surface makes the cornea particularly susceptible to attack. At the same time, its lack of vasculature and relative immune separation from the rest of the body makes immune defense difficult. Lastly, the cornea is a multifunctional tissue. It provides a large part of the eye's refractive power, meaning it has to maintain remarkable transparency, but must also serve as a barrier to keep pathogens from reaching the rest of the eye, similar to the function of the dermis and epidermis in keeping underlying tissues protected. Immune reactions within the cornea come from surrounding vascularized tissues as well as innate immune responsive cells that reside within the cornea.
Immune difficulties for the cornea
The most important function of the cornea is to transmit and refract light so as to allow sharp (high-resolution) images to be produced on the back of the retina. To do this, collagen within the cornea is highly ordered to be 30 nanometers in diameter and placed 60 nanometers apart so as to reduce light scatter. Furthermore, the tissue is not vascularized, and does not contain lymphoid cells or other defense mechanisms, apart from some dendritic cells (DC). Both of these factors necessitate the small number of cells within the cornea. However, this necessitates keeping immune cells at a relative distance, effectively creating a time delay between exposures to a patho
Document 1:::
The sclera and cornea form the fibrous tunic of the bulb of the eye; the sclera is opaque, and constitutes the posterior five-sixths of the tunic; the cornea is transparent, and forms the anterior sixth.
The term "corneosclera" is also used to describe the sclera and cornea together.
Document 2:::
The pupil is a hole located in the center of the iris of the eye that allows light to strike the retina. It appears black because light rays entering the pupil are either absorbed by the tissues inside the eye directly, or absorbed after diffuse reflections within the eye that mostly miss exiting the narrow pupil. The size of the pupil is controlled by the iris, and varies depending on many factors, the most significant being the amount of light in the environment. The term "pupil" was coined by Gerard of Cremona.
In humans, the pupil is circular, but its shape varies between species; some cats, reptiles, and foxes have vertical slit pupils, goats have horizontally oriented pupils, and some catfish have annular types. In optical terms, the anatomical pupil is the eye's aperture and the iris is the aperture stop. The image of the pupil as seen from outside the eye is the entrance pupil, which does not exactly correspond to the location and size of the physical pupil because it is magnified by the cornea. On the inner edge lies a prominent structure, the collarette, marking the junction of the embryonic pupillary membrane covering the embryonic pupil.
Function
The iris is a contractile structure, consisting mainly of smooth muscle, surrounding the pupil. Light enters the eye through the pupil, and the iris regulates the amount of light by controlling the size of the pupil. This is known as the pupillary light reflex.
The iris contains two groups of smooth muscles; a circular group called the sphincter pupillae, and a radial group called the dilator pupillae. When the sphincter pupillae contract, the iris decreases or constricts the size of the pupil. The dilator pupillae, innervated by sympathetic nerves from the superior cervical ganglion, cause the pupil to dilate when they contract. These muscles are sometimes referred to as intrinsic eye muscles.
The sensory pathway (rod or cone, bipolar, ganglion) is linked with its counterpart in the other eye by a partial
Document 3:::
The LEA Vision Test System is a series of pediatric vision tests designed specifically for children who do not know how to read the letters of the alphabet that are typically used in eye charts. There are numerous variants of the LEA test which can be used to assess the visual capabilities of near vision and distance vision, as well as several other aspects of occupational health, such as contrast sensitivity, visual field, color vision, visual adaptation, motion perception, and ocular function and accommodation (eye).
History
The first version of the LEA test was developed in 1976 by Finnish pediatric ophthalmologist Lea Hyvärinen, MD, PhD. Dr. Hyvärinen completed her thesis on fluorescein angiography and helped start the first clinical laboratory in that area while serving as a fellow at the Wilmer Eye Institute of Johns Hopkins Hospital in 1967. During her time with the Wilmer Institute, she became interested in vision rehabilitation and assessment and has been working in that field since the 1970s, training rehabilitation teams, designing new visual assessment devices, and teaching. The first test within the LEA Vision Test System that Dr. Hyvärinen created was the classic LEA Symbols Test, followed shortly by the LEA Numbers Test, which was used in comparison studies within the field of occupational medicine.
Accuracy
Among the array of visual assessment picture tests that exist, the LEA symbols tests are the only tests that have been calibrated against the standardized Landolt C vision test symbol. The Landolt C is an optotype that is used throughout most of the world as the standardized symbol for measuring visual acuity. It is identical to the "C" that is used in the traditional Snellen chart.
In addition to this, the LEA symbols test has been experimentally verified to be both a valid and reliable measure of visual acuity. As is desirable of a good vision test, each of the four optotypes used in the symbols test has been proven to measure visual acuity sim
Document 4:::
Glands of Zeis are unilobar sebaceous glands located on the margin of the eyelid. The glands of Zeis service the eyelash. These glands produce an oily substance that is issued through the excretory ducts of the sebaceous lobule into the middle portion of the hair follicle. In the same area of the eyelid, near the base of the eyelashes are apocrine glands called the "glands of Moll".
If eyelashes are not kept clean, conditions such as folliculitis may take place, and if the sebaceous gland becomes infected, it can lead to abscesses and styes. The glands of Zeis are named after German ophthalmologist Eduard Zeis (1807–68).
See also
Meibomian gland
Moll's gland
List of specialized glands within the human integumentary system
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the clear, protective covering on the outside of the eye?
A. cornea
B. retina
C. vitreous fluid
D. iris
Answer:
|
|
sciq-4554
|
multiple_choice
|
What is a pair of valence electrons in a bonded atom that does not participate in bonding called?
|
[
"hostile pair",
"lone pair",
"opposite pair",
"isolated pair"
] |
B
|
Relevant Documents:
Document 0:::
In chemistry, a lone pair refers to a pair of valence electrons that are not shared with another atom in a covalent bond and is sometimes called an unshared pair or non-bonding pair. Lone pairs are found in the outermost electron shell of atoms. They can be identified by using a Lewis structure. Electron pairs are therefore considered lone pairs if two electrons are paired but are not used in chemical bonding. Thus, the number of electrons in lone pairs plus the number of electrons in bonds equals the number of valence electrons around an atom.
Lone pair is a concept used in valence shell electron pair repulsion theory (VSEPR theory) which explains the shapes of molecules. They are also referred to in the chemistry of Lewis acids and bases. However, not all non-bonding pairs of electrons are considered by chemists to be lone pairs. Examples are the transition metals where the non-bonding pairs do not influence molecular geometry and are said to be stereochemically inactive. In molecular orbital theory (fully delocalized canonical orbitals or localized in some form), the concept of a lone pair is less distinct, as the correspondence between an orbital and components of a Lewis structure is often not straightforward. Nevertheless, occupied non-bonding orbitals (or orbitals of mostly nonbonding character) are frequently identified as lone pairs.
A single lone pair can be found with atoms in the nitrogen group, such as nitrogen in ammonia. Two lone pairs can be found with atoms in the chalcogen group, such as oxygen in water. The halogens can carry three lone pairs, such as in hydrogen chloride.
In VSEPR theory the electron pairs on the oxygen atom in water form the vertices of a tetrahedron with the lone pairs on two of the four vertices. The H–O–H bond angle is 104.5°, less than the 109° predicted for a tetrahedral angle, and this can be explained by a repulsive interaction between the lone pairs.
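A minimal bookkeeping sketch (ours, not the source's) of the counting rule stated above — lone-pair electrons plus bonding electrons equal the valence electrons around an atom — under the simplifying assumption of single bonds only:

```python
def lone_pairs(valence_electrons, single_bonds):
    """Lone pairs on an atom: valence electrons minus one electron
    contributed per single bond, with the remainder grouped in pairs."""
    nonbonding = valence_electrons - single_bonds
    return nonbonding // 2

print(lone_pairs(6, 2))  # oxygen in H2O   -> 2 lone pairs
print(lone_pairs(5, 3))  # nitrogen in NH3 -> 1 lone pair
print(lone_pairs(7, 1))  # chlorine in HCl -> 3 lone pairs
```

The three outputs reproduce the nitrogen-group, chalcogen, and halogen examples given in the excerpt.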
Various computational criteria for the presence of lone pairs have
Document 1:::
In chemistry, an electron pair or Lewis pair consists of two electrons that occupy the same molecular orbital but have opposite spins. Gilbert N. Lewis introduced the concepts of both the electron pair and the covalent bond in a landmark paper he published in 1916.
Because electrons are fermions, the Pauli exclusion principle forbids these particles from having the same quantum numbers. Therefore, for two electrons to occupy the same orbital, and thereby have the same orbital quantum number, they must have different spin quantum numbers. This also limits the number of electrons in the same orbital to two.
The pairing of spins is often energetically favorable, and electron pairs therefore play a large role in chemistry. They can form a chemical bond between two atoms, or they can occur as a lone pair of valence electrons. They also fill the core levels of an atom.
Because the spins are paired, the magnetic moment of the electrons cancel one another, and the pair's contribution to magnetic properties is generally diamagnetic.
Although a strong tendency to pair off electrons can be observed in chemistry, it is also possible that electrons occur as unpaired electrons.
In the case of metallic bonding the magnetic moments also compensate to a large extent, but the bonding is more communal so that individual pairs of electrons cannot be distinguished and it is better to consider the electrons as a collective 'sea'.
A very special case of electron pair formation occurs in superconductivity: the formation of Cooper pairs. In unconventional superconductors, whose crystal structure contains copper anions, the electron pair bond is due to antiferromagnetic spin fluctuations.
See also
Electron pair production
Frustrated Lewis pair
Jemmis mno rules
Lewis acids and bases
Nucleophile
Polyhedral skeletal electron pair theory
Document 2:::
A non-bonding electron is an electron not involved in chemical bonding. This can refer to:
Lone pair, with the electron localized on one atom.
Non-bonding orbital, with the electron delocalized throughout the molecule.
Chemical bonding
Document 3:::
A bonding electron is an electron involved in chemical bonding. This can refer to:
Chemical bond, a lasting attraction between atoms, ions or molecules
Covalent bond or molecular bond, a sharing of electron pairs between atoms
Bonding molecular orbital, an attraction between the atomic orbitals of atoms in a molecule
Chemical bonding
Document 4:::
Electron deficiency (and electron-deficient) is jargon that is used in two contexts: species that violate the octet rule because they have too few valence electrons and species that happen to follow the octet rule but have electron-acceptor properties, forming donor-acceptor charge-transfer salts.
Octet rule violations
Traditionally, "electron-deficiency" is used as a general descriptor for boron hydrides and other molecules which do not have enough valence electrons to form localized (2-centre 2-electron) bonds joining all atoms. For example, diborane (B2H6) would require a minimum of 7 localized bonds with 14 electrons to join all 8 atoms, but there are only 12 valence electrons. A similar situation exists in trimethylaluminium. The electron deficiency in such compounds is similar to metallic bonding.
Electron-acceptor molecules
Alternatively, electron-deficiency describes molecules or ions that function as electron acceptors. Such electron-deficient species obey the octet rule, but they have (usually mild) oxidizing properties. 1,3,5-Trinitrobenzene and related polynitrated aromatic compounds are often described as electron-deficient. Electron deficiency can be measured by linear free-energy relationships: "a strongly negative ρ value indicates a large electron demand at the reaction center, from which it may be concluded that a highly electron-deficient center, perhaps an incipient carbocation, is involved."
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is a pair of valence electrons in a bonded atom that does not participate in bonding called?
A. hostile pair
B. lone pair
C. opposite pair
D. isolated pair
Answer:
|
|
sciq-4332
|
multiple_choice
|
What do Earth and the other planets in the solar system make around the Sun?
|
[
"elliptical orbits",
"radial orbits",
"smooth orbits",
"elevated orbits"
] |
A
|
Relevant Documents:
Document 0:::
This is a list of most likely gravitationally rounded objects of the Solar System, which are objects that have a rounded, ellipsoidal shape due to their own gravity (but are not necessarily in hydrostatic equilibrium). Apart from the Sun itself, these objects qualify as planets according to common geophysical definitions of that term. The sizes of these objects range over three orders of magnitude in radius, from planetary-mass objects like dwarf planets and some moons to the planets and the Sun. This list does not include small Solar System bodies, but it does include a sample of possible planetary-mass objects whose shapes have yet to be determined. The Sun's orbital characteristics are listed in relation to the Galactic Center, while all other objects are listed in order of their distance from the Sun.
Star
The Sun is a G-type main-sequence star. It contains almost 99.9% of all the mass in the Solar System.
Planets
In 2006, the International Astronomical Union (IAU) defined a planet as a body in orbit around the Sun that was large enough to have achieved hydrostatic equilibrium and to have "cleared the neighbourhood around its orbit". The practical meaning of "cleared the neighborhood" is that a planet is comparatively massive enough for its gravitation to control the orbits of all objects in its vicinity. In practice, the term "hydrostatic equilibrium" is interpreted loosely. Mercury is round but not actually in hydrostatic equilibrium, yet it is universally regarded as a planet nonetheless.
According to the IAU's explicit count, there are eight planets in the Solar System; four terrestrial planets (Mercury, Venus, Earth, and Mars) and four giant planets, which can be divided further into two gas giants (Jupiter and Saturn) and two ice giants (Uranus and Neptune). When excluding the Sun, the four giant planets account for more than 99% of the mass of the Solar System.
Dwarf planets
Dwarf planets are bodies orbiting the Sun that are massive and warm eno
Document 1:::
This article is a list of notable unsolved problems in astronomy. Some of these problems are theoretical, meaning that existing theories may be incapable of explaining certain observed phenomena or experimental results. Others are experimental, meaning that experiments necessary to test proposed theory or investigate a phenomenon in greater detail have not yet been performed. Some pertain to unique events or occurrences that have not repeated themselves and whose causes remain unclear.
Planetary astronomy
Our solar system
Orbiting bodies and rotation:
Are there any non-dwarf planets beyond Neptune?
Why do extreme trans-Neptunian objects have elongated orbits?
Rotation rate of Saturn:
Why does the magnetosphere of Saturn rotate at a rate close to that at which the planet's clouds rotate?
What is the rotation rate of Saturn's deep interior?
Satellite geomorphology:
What is the origin of the chain of high mountains that closely follows the equator of Saturn's moon, Iapetus?
Are the mountains the remnant of hot and fast-rotating young Iapetus?
Are the mountains the result of material (either from the rings of Saturn or its own ring) that over time collected upon the surface?
Extra-solar
How common are Solar System-like planetary systems? Some observed planetary systems contain Super-Earths and Hot Jupiters that orbit very close to their stars. Systems with Jupiter-like planets in Jupiter-like orbits appear to be rare. There are several possible reasons why Jupiter-like orbits are rare, including a lack of data and the grand tack hypothesis.
Stellar astronomy and astrophysics
Solar cycle:
How does the Sun generate its periodically reversing large-scale magnetic field?
How do other Sol-like stars generate their magnetic fields, and what are the similarities and differences between stellar activity cycles and that of the Sun?
What caused the Maunder Minimum and other grand minima, and how does the solar cycle recover from a minimum state?
Coronal heat
Document 2:::
A planetary system is a set of gravitationally bound non-stellar objects in or out of orbit around a star or star system. Generally speaking, systems with one or more planets constitute a planetary system, although such systems may also consist of bodies such as dwarf planets, asteroids, natural satellites, meteoroids, comets, planetesimals and circumstellar disks. The Sun together with the planetary system revolving around it, including Earth, forms the Solar System. The term exoplanetary system is sometimes used in reference to other planetary systems.
Debris disks are also known to be common, though other objects are more difficult to observe.
Of particular interest to astrobiology is the habitable zone of planetary systems where planets could have surface liquid water, and thus the capacity to support Earth-like life.
History
Heliocentrism
Historically, heliocentrism (the doctrine that the Sun is at the centre of the universe) was opposed to geocentrism (placing Earth at the centre of the universe).
The notion of a heliocentric Solar System with the Sun at its centre was possibly first suggested in the Vedic literature of ancient India, which often refers to the Sun as the "centre of spheres". Some interpret Aryabhatta's writings in Āryabhaṭīya as implicitly heliocentric.
The idea was first proposed in Western philosophy and Greek astronomy as early as the 3rd century BC by Aristarchus of Samos, but received no support from most other ancient astronomers.
Discovery of the Solar System
De revolutionibus orbium coelestium by Nicolaus Copernicus, published in 1543, presented the first mathematically predictive heliocentric model of a planetary system. 17th-century successors Galileo Galilei, Johannes Kepler, and Sir Isaac Newton developed an understanding of physics which led to the gradual acceptance of the idea that the Earth moves around the Sun and that the planets are governed by the same physical laws that governed Earth.
Speculation on extrasolar pla
Document 3:::
Orbit modeling is the process of creating mathematical models to simulate motion of a massive body as it moves in orbit around another massive body due to gravity. Other forces such as gravitational attraction from tertiary bodies, air resistance, solar pressure, or thrust from a propulsion system are typically modeled as secondary effects. Directly modeling an orbit can push the limits of machine precision due to the need to model small perturbations to very large orbits. Because of this, perturbation methods are often used to model the orbit in order to achieve better accuracy.
Background
The study of orbital motion and mathematical modeling of orbits began with the first attempts to predict planetary motions in the sky, although in ancient times the causes remained a mystery. Newton, at the time he formulated his laws of motion and of gravitation, applied them to the first analysis of perturbations, recognizing the complex difficulties of their calculation.
Many of the great mathematicians since then have given attention to the various problems involved; throughout the 18th and 19th centuries there was demand for accurate tables of the position of the Moon and planets for purposes of navigation at sea.
The complex motions of orbits can be broken down. The hypothetical motion that the body follows under the gravitational effect of one other body only is typically a conic section, and can be readily modeled with the methods of geometry. This is called a two-body problem, or an unperturbed Keplerian orbit. The differences between the Keplerian orbit and the actual motion of the body are caused by perturbations. These perturbations are caused by forces other than the gravitational effect between the primary and secondary body and must be modeled to create an accurate orbit simulation. Most orbit modeling approaches model the two-body problem and then add models of these perturbing forces and simulate these models over time. Perturbing forces may include gravitatio
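A minimal sketch of the direct-integration approach described above (ours, not the source's; the units, initial conditions, and integrator choice are assumptions): propagating an unperturbed Keplerian two-body orbit with a fixed-step leapfrog integrator, to which perturbing accelerations could later be added.

```python
import math

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def accel(r):
    """Two-body acceleration a = -mu * r / |r|^3 (perturbations omitted)."""
    d = math.hypot(r[0], r[1])
    return (-MU * r[0] / d**3, -MU * r[1] / d**3)

def propagate(r, v, dt, steps):
    """Kick-drift-kick leapfrog integration of the unperturbed orbit."""
    a = accel(r)
    for _ in range(steps):
        v = (v[0] + 0.5 * dt * a[0], v[1] + 0.5 * dt * a[1])  # half kick
        r = (r[0] + dt * v[0], r[1] + dt * v[1])              # drift
        a = accel(r)
        v = (v[0] + 0.5 * dt * a[0], v[1] + 0.5 * dt * a[1])  # half kick
    return r, v

# Circular orbit at 7000 km radius: v = sqrt(mu / r), period ~5828 s
r0, v0 = (7.0e6, 0.0), (0.0, math.sqrt(MU / 7.0e6))
r1, _ = propagate(r0, v0, dt=1.0, steps=5828)  # ~one orbital period
print(f"radius after one period: {math.hypot(*r1) / 1e3:.1f} km")  # ~7000 km
```

A symplectic integrator such as leapfrog is a common choice here because it conserves orbital energy well over many revolutions, which matters when small perturbations are layered on top of the Keplerian motion.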
Document 4:::
A fundamental ephemeris of the Solar System is a model of the objects of the system in space, with all of their positions and motions accurately represented. It is intended to be a high-precision primary reference for prediction and observation of those positions and motions, and which provides a basis for further refinement of the model. It is generally not intended to cover the entire life of the Solar System; usually a short-duration time span, perhaps a few centuries, is represented to high accuracy. Some long ephemerides cover several millennia to medium accuracy.
They are published by the Jet Propulsion Laboratory as Development Ephemeris. The latest releases include DE430, which covers planetary and lunar ephemerides from Dec 21, 1549 to Jan 25, 2650 with high precision and is intended for general use for modern time periods. DE431 was created to cover a longer time period, Aug 15, −13200 to March 15, 17191, with slightly less precision, for use with historic observations and far-reaching forecasted positions. DE432 was released as a minor update to DE430, with improvements to the Pluto barycenter in support of the New Horizons mission.
Description
The set of physical laws and numerical constants used in the calculation of the ephemeris must be self-consistent and precisely specified. The ephemeris must be calculated strictly in accordance with this set, which represents the most current knowledge of all relevant physical forces and effects. Current fundamental ephemerides are typically released with exact descriptions of all mathematical models, methods of computation, observational data, and adjustment to the observations at the time of their announcement. This may not have been the case in the past, as fundamental ephemerides were then computed from a collection of methods derived over a span of decades by many researchers.
The independent variable of the ephemeris is always time. In the case of the most current ephemerides, it is a relativistic coordinate t
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do Earth and the other planets in the solar system make around the Sun?
A. elliptical orbits
B. radial orbits
C. smooth orbits
D. elevated orbits
Answer:
|
|
sciq-3574
|
multiple_choice
|
Which theory is the idea that the characteristics of living organisms are controlled by genes, which are passed from parents to their offspring?
|
[
"gene theory",
"species theory",
"fossil theory",
"evolution theory"
] |
A
|
Relevant Documents:
Document 0:::
Tinbergen's four questions, named after 20th century biologist Nikolaas Tinbergen, are complementary categories of explanations for animal behaviour. These are also commonly referred to as levels of analysis. It suggests that an integrative understanding of behaviour must include ultimate (evolutionary) explanations, in particular:
behavioural adaptive functions
phylogenetic history; and the proximate explanations
underlying physiological mechanisms
ontogenetic/developmental history.
Four categories of questions and explanations
When asked about the purpose of sight in humans and animals, even elementary-school children can answer that animals have vision to help them find food and avoid danger (function/adaptation). Biologists have three additional explanations: sight is caused by a particular series of evolutionary steps (phylogeny), the mechanics of the eye (mechanism/causation), and even the process of an individual's development (ontogeny).
This schema constitutes a basic framework of the overlapping behavioural fields of ethology, behavioural ecology, comparative psychology, sociobiology, evolutionary psychology, and anthropology. Julian Huxley identified the first three questions. Niko Tinbergen gave only the fourth question, as Huxley's questions failed to distinguish between survival value and evolutionary history; Tinbergen's fourth question helped resolve this problem.
Evolutionary (ultimate) explanations
First question: Function (adaptation)
Darwin's theory of evolution by natural selection is the only scientific explanation for why an animal's behaviour is usually well adapted for survival and reproduction in its environment. However, claiming that a particular mechanism is well suited to the present environment is different from claiming that this mechanism was selected for in the past due to its history of being adaptive.
The literature conceptualizes the relationship between function and evolution in two ways. On the one hand, function
Document 1:::
Many scientists and philosophers of science have described evolution as fact and theory, a phrase which was used as the title of an article by paleontologist Stephen Jay Gould in 1981. He describes fact in science as meaning data, not known with absolute certainty but "confirmed to such a degree that it would be perverse to withhold provisional assent". A scientific theory is a well-substantiated explanation of such facts. The facts of evolution come from observational evidence of current processes, from imperfections in organisms recording historical common descent, and from transitions in the fossil record. Theories of evolution provide a provisional explanation for these facts.
Each of the words evolution, fact and theory has several meanings in different contexts. In biology, evolution refers to observed changes in organisms over successive generations, to their descent from a common ancestor, and at a technical level to a change in gene frequency over time; it can also refer to explanatory theories (such as Charles Darwin's theory of natural selection) which explain the mechanisms of evolution. To a scientist, fact can describe a repeatable observation capable of great consensus; it can refer to something that is so well established that nobody in a community disagrees with it; and it can also refer to the truth or falsity of a proposition. To the public, theory can mean an opinion or conjecture (e.g., "it's only a theory"), but among scientists it has a much stronger connotation of "well-substantiated explanation". With this number of choices, people can often talk past each other, and meanings become the subject of linguistic analysis.
Evidence for evolution continues to be accumulated and tested. The scientific literature includes statements by evolutionary biologists and philosophers of science demonstrating some of the different perspectives on evolution as fact and theory.
Evolution, fact and theory
Evolution has been described as "fact and theory"; "
Document 2:::
The status of creation and evolution in public education has been the subject of substantial debate and conflict in legal, political, and religious circles. Globally, there is a wide variety of views on the topic. Most western countries have legislation that mandates that only evolutionary biology be taught in the appropriate science syllabuses.
Overview
While many Christian denominations do not raise theological objections to the modern evolutionary synthesis as an explanation for the present forms of life on planet Earth, various socially conservative, traditionalist, and fundamentalist religious sects and political groups within Christianity and Islam have objected vehemently to the study and teaching of biological evolution. Some adherents of these Christian and Islamic religious sects or political groups are passionately opposed to the consensus view of the scientific community. Literal interpretations of religious texts are the greatest cause of conflict with evolutionary and cosmological investigations and conclusions.
Internationally, biological evolution is taught in science courses with limited controversy, with the exception of a few areas of the United States and several Muslim-majority countries, primarily Turkey. In the United States, the Supreme Court has ruled the teaching of creationism as science in public schools to be unconstitutional, irrespective of how it may be purveyed in theological or religious instruction. In the United States, intelligent design (ID) has been represented as an alternative explanation to evolution in recent decades, but its "demonstrably religious, cultural, and legal missions" have been ruled unconstitutional by a lower court.
By country
Australia
Although creationist views are popular among religious education teachers and creationist teaching materials have been distributed by volunteers in some schools, many Australian scientists take an aggressive stance supporting the right of teachers to teach the theory
Document 3:::
Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, genetic variations affect the phenotypes (physical characteristics) of organisms. Variations that give some organisms an advantage are then passed on to their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology.
The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis.
Subfields
Evolution is the central unifying concept in biology. Biology can be subdivided in various ways. One way is by the level of biological organization, from molecular to cell, organism to population. Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what were once seen as the major divisions of life. A third way is by approach, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolution
Document 4:::
Evolutionary musicology is a subfield of biomusicology that grounds the cognitive mechanisms of music appreciation and music creation in evolutionary theory. It covers vocal communication in other animals, theories of the evolution of human music, and holocultural universals in musical ability and processing.
History
The origins of the field can be traced back to Charles Darwin who wrote in The Descent of Man, and Selection in Relation to Sex:
This theory of a musical protolanguage has been revived and re-discovered repeatedly.
The origins of music
Like the origin of language, the origin of music has been a topic for speculation and debate for centuries. Leading theories include Darwin's theory of partner choice (women choose male partners based on musical displays); the idea that human musical behaviors are primarily based on the behaviors of other animals (see zoomusicology); the idea that music emerged because it promotes social cohesion; the idea that music emerged because it helps children acquire verbal, social, and motor skills; and the idea that musical sound and movement patterns, and links between music, religion and spirituality, originated in prenatal psychology and mother-infant attachment.
Two major topics for any subfield of evolutionary psychology are the adaptive function (if any) and phylogenetic history of the mechanism or behavior of interest including when music arose in human ancestry and from what ancestral traits it developed. Current debate addresses each of these.
One part of the adaptive function question is whether music constitutes an evolutionary adaptation or exaptation (i.e. by-product of evolution). Steven Pinker, in his book How the Mind Works, for example, argues that music is merely "auditory cheesecake"—it was evolutionarily adaptive to have a preference for fat and sugar but cheesecake did not play a role in that selection process. This view has been directly countered by numerous music researchers.
Adaptation, on the other
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which theory is the idea that the characteristics of living organisms are controlled by genes, which are passed from parents to their offspring?
A. gene theory
B. species theory
C. fossil theory
D. evolution theory
Answer:
|
|
sciq-5455
|
multiple_choice
|
What is the attachment of ducklings to their mother an example of?
|
[
"imprinting",
"validating",
"magnetism",
"impressionism"
] |
A
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
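(For reference, and not part of the original example: the resolution follows from the standard adiabatic relation for an ideal gas, sketched here in LaTeX.)

T V^{\gamma - 1} = \mathrm{const}, \qquad \gamma > 1,

so an increase in volume V during an adiabatic expansion forces the temperature T to decrease (the second choice).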
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry, and the animals studied were livestock species such as cattle, sheep, pigs, poultry, and horses. Today, courses cover a broader area, including companion animals such as dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals.
Education
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
Bachelor degree
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
Pre-veterinary emphasis
Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th
Document 2:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study in higher education globally. Differences are also observed by discipline, with female enrollment lowest in engineering, manufacturing and construction; natural sciences, mathematics and statistics; and ICT. Significant regional and country differences in female representation in STEM studies can be observed, suggesting the presence of contextual factors affecting girls' and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work, and even during their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 3:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. The hope is that primary school children who develop an interest in these subjects will, as secondary school pupils, choose science A levels, which can then lead to science careers. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of STEM industries and include TV personalities such as Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it has received funding from the Department for Children, Schools and Families and the Department for Innovation, Universities and Skills, since STEMNET sits at the chronological dividing point (age 16) between the two new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 4:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school to earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the students' chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college careers at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the attachment of ducklings to their mother an example of?
A. imprinting
B. validating
C. magnetism
D. impressionism
Answer:
|
|
sciq-8032
|
multiple_choice
|
Compared to red light, blue light has a shorter what?
|
[
"gravity",
"life span",
"wavelength",
"absorption"
] |
C
|
Relavent Documents:
Document 0:::
Green Light, green light, green-light or greenlight may refer to:
Green-colored light, part of the visible spectrum
Arts, entertainment, and media
Films and television
Green Light (1937 film), starring Errol Flynn
Green Light (2002 film), a Turkish film written and directed by Faruk Aksoy
"Green Light" (Breaking Bad), a third-season episode of Breaking Bad
Greenlight, formal approval of a project to move forward
Literature
Green Light, a 1935 novel by Lloyd C. Douglas
"Green Light", the final passage of F. Scott Fitzgerald's novel The Great Gatsby
Greenlights (book), a 2020 book by Matthew McConaughey
Music
Albums
Green Light (Bonnie Raitt album), 1982
Green Light (Cliff Richard album), 1978
The Green Light, a 2009 mixtape by Bow Wow
Songs
"Green Light" (Cliff Richard song) (1979)
"Green Light" (Beyoncé song) (2006)
"Green Light" (John Legend song) (2008)
"Green Light" (Roll Deep song) (2010)
"Green Light" (Lorde song) (2017)
"Green Light" (Valery Leontiev song) (1984)
"Green Light", by the American Breed from Bend Me, Shape Me (1968)
"Green Light", by Girls' Generation from Lion Heart
"Green Light", by Hank Thompson (1954)
"Green Light", by Lil Durk from Love Songs 4 the Streets 2
"Green Light", by R. Kelly from Write Me Back
"Green Light", by Sonic Youth from Evol
"Green Light", by the Bicycles from Oh No, It's Love
"Green Lights", by Aloe Blacc (2011)
"Greenlight" (Pitbull song) (2016)
"Green Lights", by Sarah Jarosz from Undercurrent (2016)
"Green Light", by Kylie Minogue from Tension (2023)
"Greenlight", by 5 Seconds of Summer from 5 Seconds of Summer
"Greenlight", by Enisa Nikaj which represented New York in the American Song Contest
"Greenlights" (song), by Krewella
Computing and technology
Greenlight (Internet service), a fiber-optic Internet service provided by the city of Wilson, North Carolina, US
Greenlight Networks, a fiber-optic Internet service in Rochester, New York, US
Steam Greenlight, a service part of Val
Document 1:::
A colorimeter is a device used in colorimetry that measures the absorbance of particular wavelengths of light by a specific solution. It is commonly used to determine the concentration of a known solute in a given solution by the application of the Beer–Lambert law, which states that the concentration of a solute is proportional to the absorbance.
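A minimal sketch of that Beer–Lambert calculation in Python (the function names and the numerical values are illustrative assumptions, not taken from the text):

import math

def concentration(absorbance, epsilon, path_length_cm):
    """Beer-Lambert law: A = epsilon * c * l, so c = A / (epsilon * l).

    absorbance     -- dimensionless absorbance read from the colorimeter
    epsilon        -- molar absorptivity in L mol^-1 cm^-1 (solute-specific)
    path_length_cm -- cuvette path length in cm (commonly 1 cm)
    """
    return absorbance / (epsilon * path_length_cm)

def absorbance_from_transmittance(percent_t):
    """Absorbance from percent transmittance: A = -log10(T)."""
    return -math.log10(percent_t / 100.0)

# Hypothetical reading: A = 0.30, epsilon = 1500 L/(mol cm), l = 1 cm
print(concentration(0.30, 1500.0, 1.0))  # -> 0.0002 mol/L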
Construction
The essential parts of a colorimeter are:
a light source (often an ordinary low-voltage filament lamp);
an adjustable aperture;
a set of colored filters;
a cuvette to hold the working solution;
a detector (usually a photoresistor) to measure the transmitted light;
a meter to display the output from the detector.
In addition, there may be:
a voltage regulator, to protect the instrument from fluctuations in mains voltage;
a second light path, cuvette and detector. This enables comparison between the working solution and a "blank", consisting of pure solvent, to improve accuracy.
There are many commercialized colorimeters as well as open source versions with construction documentation for education and for research.
Filters
Changeable optical filters are used in the colorimeter to select the wavelength that the solute absorbs most strongly, in order to maximize accuracy. The usual wavelength range is from 400 to 700 nm. If it is necessary to operate in the ultraviolet range, some modifications to the colorimeter are needed. In modern colorimeters the filament lamp and filters may be replaced by several light-emitting diodes (LEDs) of different colors.
Cuvettes
In a manual colorimeter the cuvettes are inserted and removed by hand. An automated colorimeter (as used in an AutoAnalyzer) is fitted with a flowcell through which solution flows continuously.
Output
The output from a colorimeter may be displayed by an analogue or digital meter and may be shown as transmittance (a linear scale from 0 to 100%) or as absorbance (a logarithmic scale from zero to infinity). The useful range of the absorbance scale is
Document 2:::
On the coloured light of the binary stars and some other stars of the heavens (in the original German, Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels) is a treatise by Christian Doppler (1842) in which he postulated his principle that the observed frequency changes if either the source or the observer is moving; this was later named the Doppler effect. The original German text can be found on Wikisource. The following annotated summary serves as a companion to that original.
Title
The title "Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels" (On the coloured light of the binary stars and some other stars of the heavens - Attempt at a general theory including Bradley's theorem as an integral part) specifies the purpose: describe the hypothesis of the Doppler effect, use it to explain the colours of binary stars, and establish a relation with Bradley's stellar aberration.
Content
§ 1 In which Doppler reminds the readers that light is a wave, and that there is debate as to whether it is a transverse wave, with aether particles oscillating perpendicular to the propagation direction. Proponents claim this is necessary to explain polarised light, whereas opponents object to implications for the aether. Doppler doesn't choose sides, although the issue returns in § 6.
§ 2 Doppler observes that colour is a manifestation of the frequency of the light wave, in the eye of the beholder. He describes his principle that a frequency shift occurs when the source or the observer moves. A ship meets waves at a faster rate when sailing against the waves than when sailing along with them. The same goes for sound and light.
§ 3 Doppler derives his equations for the frequency shift, in two cases:
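The two cases are presumably a moving observer and a moving source (the list itself appears to be lost from this excerpt). In modern notation, not Doppler's original, they read:

f' = f\left(1 \pm \frac{v_{\text{obs}}}{c}\right) \quad \text{(observer moving toward or away from a stationary source)}

f' = \frac{f}{1 \mp v_{\text{src}}/c} \quad \text{(source moving toward or away from a stationary observer)}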
§ 4 Doppler provides imaginary examples of large and small frequency shifts for sound:
§ 5 Doppler provides imaginary examples of large and small frequency shifts for light from stars. Velocities are expressed in Meilen/s, and the light speed has a rounded value of 42000 Meilen/s. Doppler assumes that 458 THz (extreme red) and 727 THz (extreme violet) are the borders of the v
Document 3:::
Red light or redlight may refer to:
Science and technology
Red, any of a number of similar colors evoked by light in the wavelength range of 630–740 nm
Red light, a traffic light color signifying stop
Red light, a color of safelight used in photographic darkrooms
Red light therapy
Arts and entertainment
Red Lights (novel) (Feux Rouges), a 1953 book by Georges Simenon
Film
Red Lights (1923 film), a 1923 American silent film
Red Light (film), a 1949 crime film starring George Raft
Red Lights (2004 film) (Feux rouges), a French thriller directed by Cédric Kahn
Red Lights (2012 film), a thriller by Rodrigo Cortés
Redlight (film), a 2009 documentary film
Music
Red Light, a sublabel of Tunnel Records
Redlight (musician) (born 1980), British electronic musician
Albums
Red Light (Bladee album), 2018
Red Light (f(x) album), 2014
Redlight (Grails album), 2004
Redlight (The Slackers album), 1997
Red Light! (Indigo Swing album), 1999
Songs
"Red Light" (Linda Clifford song)
"Red Light" (David Nail song)
"Red Light" (Siouxsie and the Banshees song)
"Red Light" (U2 song)
"Red Light", by Fastball from Keep Your Wig On
"Red Light", by Jonny Lang from Long Time Coming
"Red Light", by Eddie Murphy, featuring Snoop Dogg
"Red Light", by The Strokes from First Impressions of Earth
"Red Light", by Wall of Voodoo from Dark Continent
"Redlight" (song), by Ian Carey
"Redlight", by Kelly Osbourne from Sleeping in the Nothing
"Red Lights" (song), by Tiësto
"Red Lights" (Stray Kids song)
"Red Lights", by Chloe x Halle from Sugar Symphony
Other uses
Redlight Children Campaign, an American non-profit organization
Common synonym for goals in ice hockey, derived from the red lamp behind the net activated to confirm a goal
André Racicot (born 1969), nicknamed "Red Light", retired ice hockey goalie
Operation Red Light II, a 2006 coalition military operation of the Iraq War
See also
Red-light district, a part of an urban area where there is a concentration o
Document 4:::
Colored music notation is a technique used to facilitate enhanced learning in young music students by adding visual color to written musical notation. It is based upon the concept that color can affect the observer in various ways, and combines this with standard learning of basic notation.
Basis
Viewing color has been widely shown to change an individual's emotional state and stimulate neurons. The Lüscher color test, based on experiments in which individuals were required to contemplate pure red for varying lengths of time, has shown that this color decidedly has a stimulating effect on the nervous system: blood pressure increases, and respiration rate and heart rate both increase. Pure blue, on the other hand, has the reverse effect; observers experience a decline in blood pressure, heart rate, and breathing. Given these findings, it has been suggested that the influence of colored musical notation would be similar.
Music education
In music education, color is typically used in method books to highlight new material. Stimuli received through several senses excite more neurons in several localized areas of the cortex, thereby reinforcing the learning process and improving retention. This finding has been corroborated by other researchers; Chute (1978) reported that "elementary students who viewed a colored version of an instructional film scored significantly higher on both immediate and delayed tests than did students who viewed a monochrome version".
Color studies
Effect on achievement
A researcher in this field, George L. Rogers, is the Director of Music Education at Westfield State College. He is also the author of 25 articles in publications that include the Music Educators Journal, The Instrumentalist, and the Journal of Research in Music Education. In 1991, Rogers conducted a study of the effect of color-coded notation on the music achievement of elementary instrumental students. Rogers states that the color-co
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Compared to red light, blue light has a shorter what?
A. gravity
B. life span
C. wavelength
D. absorption
Answer:
|
|
sciq-4349
|
multiple_choice
|
How do gametophyte plants form haploid gametes?
|
[
"during after omniosis",
"during after mitosis",
"during omniosis",
"through mitosis"
] |
D
|
Relavent Documents:
Document 0:::
Microgametogenesis is the process in plant reproduction where a microgametophyte develops in a pollen grain to the three-celled stage of its development. In flowering plants it occurs with a microspore mother cell inside the anther of the plant.
When the microgametophyte is first formed inside the pollen grain, four sets of fertile cells called sporogenous cells are apparent. These cells are surrounded by a wall of sterile cells called the tapetum, which supplies food to the cell and eventually becomes the cell wall for the pollen grain. These sporogenous cells eventually develop into diploid microspore mother cells. The microspore mother cells, also called microsporocytes, then undergo meiosis and become four haploid microspore cells. These new microspore cells then undergo mitosis and form a tube cell and a generative cell. The generative cell then undergoes mitosis one more time to form two male gametes, also called sperm.
See also
Gametogenesis
Document 1:::
Alternation of generations (also known as metagenesis or heterogenesis) is the predominant type of life cycle in plants and algae. In plants both phases are multicellular: the haploid sexual phase – the gametophyte – alternates with a diploid asexual phase – the sporophyte.
A mature sporophyte produces haploid spores by meiosis, a process which reduces the number of chromosomes to half, from two sets to one. The resulting haploid spores germinate and grow into multicellular haploid gametophytes. At maturity, a gametophyte produces gametes by mitosis, the normal process of cell division in eukaryotes, which maintains the original number of chromosomes. Two haploid gametes (originating from different organisms of the same species or from the same organism) fuse to produce a diploid zygote, which divides repeatedly by mitosis, developing into a multicellular diploid sporophyte. This cycle, from gametophyte to sporophyte (or equally from sporophyte to gametophyte), is the way in which all land plants and most algae undergo sexual reproduction.
The relationship between the sporophyte and gametophyte phases varies among different groups of plants. In the majority of algae, the sporophyte and gametophyte are separate independent organisms, which may or may not have a similar appearance. In liverworts, mosses and hornworts, the sporophyte is less well developed than the gametophyte and is largely dependent on it. Although moss and hornwort sporophytes can photosynthesise, they require additional photosynthate from the gametophyte to sustain growth and spore development and depend on it for supply of water, mineral nutrients and nitrogen. By contrast, in all modern vascular plants the gametophyte is less well developed than the sporophyte, although their Devonian ancestors had gametophytes and sporophytes of approximately equivalent complexity. In ferns the gametophyte is a small flattened autotrophic prothallus on which the young sporophyte is briefly dependent for its n
Document 2:::
Megagametogenesis is the process of maturation of the female gametophyte, or megagametophyte, in plants. During megagametogenesis, the megaspore, which arises from megasporogenesis, develops into the embryo sac, where the female gamete is housed. The megaspores thus develop into the haploid female gametophytes. This occurs within the ovule, which is housed inside the ovary.
The Process
Prior to megagametogenesis, a megaspore mother cell within the ovule undergoes meiosis during a process called megasporogenesis. Next, three of the four resulting megaspores disintegrate, leaving only the megaspore that will undergo megagametogenesis. The following steps are shown in Figure 1, and detailed below.
The remaining megaspore undergoes a round of mitosis. This results in a structure with two nuclei, also called a binucleate embryo sac.
The two nuclei migrate to opposite sides of the embryo sac.
Each haploid nucleus then undergoes two rounds of mitosis, creating four haploid nuclei at each end of the embryo sac.
One nucleus from each set of four migrates to the center of the embryo sac; together these form the binucleate endosperm mother cell. This leaves three nuclei at the micropylar end and three at the antipodal end. The micropylar end comprises an egg cell and two synergid cells beside the micropyle, an opening that allows the pollen tube to enter the structure. The nuclei at the antipodal end are known simply as the antipodal cells; they are involved in nourishing the embryo but often undergo programmed cell death before fertilization occurs.
Cell plates form around the antipodal nuclei, egg cell, and synergid cells.
Variations
Plants exhibit three main types of megagametogenesis. The number of haploid nuclei in the functional megaspore that is involved in megagametogenesis is the main difference between these three types.
Monosporic
The most common type of megagametogenesis, monosporic megagametogenesis, is outlined a
Document 3:::
Gametogenesis is a biological process by which diploid or haploid precursor cells undergo cell division and differentiation to form mature haploid gametes. Depending on the biological life cycle of the organism, gametogenesis occurs by meiotic division of diploid gametocytes into various gametes, or by mitosis. For example, plants produce gametes through mitosis in gametophytes. The gametophytes grow from haploid spores after sporic meiosis. The existence of a multicellular, haploid phase in the life cycle between meiosis and gametogenesis is also referred to as alternation of generations.
Put another way, gametogenesis is the process by which haploid or diploid precursor cells divide and differentiate into mature haploid gametes; depending on an organism's biological life cycle, it can take place through mitosis or through meiotic division of diploid gametocytes. For instance, gametophytes in plants undergo mitosis to produce gametes. Males and females each have their own form of the process.
In animals
Animals produce gametes directly through meiosis from diploid mother cells in organs called gonads (testes in males and ovaries in females). In mammalian germ cell development, primordial germ cells differentiate from pluripotent cells during early development and later give rise to sexually dimorphic gametes. Males and females of a species that reproduce sexually have different forms of gametogenesis:
spermatogenesis (male): Immature germ cells are produced in the testes. To mature into sperm, these immature germ cells, or spermatogonia, go through spermatogenesis beginning at adolescence. Spermatogonia are diploid cells that grow larger as they divide through mitosis and become primary spermatocytes. These diploid cells undergo a first meiotic division to create secondary spermatocytes, which undergo a second meiotic division to produce immature sperm, or spermatids. The spermatids then undergo spermiogenesis in order to develop into mature sperm. The process is regulated by the hormones LH, FSH, and GnRH.
Document 4:::
In biology and genetics, the germline is the population of a multicellular organism's cells that pass on their genetic material to the progeny (offspring). In other words, they are the cells that form the egg, sperm and the fertilised egg. They are usually differentiated to perform this function and segregated in a specific place away from other bodily cells.
As a rule, this passing-on happens via a process of sexual reproduction; typically it is a process that includes systematic changes to the genetic material, changes that arise during recombination, meiosis and fertilization for example. However, there are many exceptions across multicellular organisms, including processes and concepts such as various forms of apomixis, autogamy, automixis, cloning or parthenogenesis. The cells of the germline are called germ cells. For example, gametes such as a sperm and an egg are germ cells. So are the cells that divide to produce gametes, called gametocytes, the cells that produce those, called gametogonia, and all the way back to the zygote, the cell from which an individual develops.
In sexually reproducing organisms, cells that are not in the germline are called somatic cells. According to this view, mutations, recombinations and other genetic changes in the germline may be passed to offspring, but a change in a somatic cell will not be. This need not apply to somatically reproducing organisms, such as some Porifera and many plants. For example, many varieties of citrus, plants in the Rosaceae and some in the Asteraceae, such as Taraxacum, produce seeds apomictically when somatic diploid cells displace the ovule or early embryo.
In an earlier stage of genetic thinking, there was a clear distinction between germline and somatic cells. For example, August Weismann proposed and pointed out, a germline cell is immortal in the sense that it is part of a lineage that has reproduced indefinitely since the beginning of life and, barring accident, could continue doing so indef
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How do gametophyte plants form haploid gametes?
A. during after omniosis
B. during after mitosis
C. during omniosis
D. through mitosis
Answer:
|
|
ai2_arc-425
|
multiple_choice
|
Which characteristic do single-celled organisms and multicellular organisms have in common?
|
[
"Both have cells with specialized functions for each life process.",
"Both perform all life processes within one cell.",
"Both have a way to get rid of waste materials.",
"Both are able to make food from sunlight."
] |
C
|
Relavent Documents:
Document 0:::
A cell type is a classification used to identify cells that share morphological or phenotypical features. A multicellular organism may contain cells of a number of widely differing and specialized cell types, such as muscle cells and skin cells, that differ both in appearance and function yet have identical genomic sequences. Cells may have the same genotype but belong to different cell types due to the differential regulation of the genes they contain. Classification of a specific cell type is often done through the use of microscopy or of molecular markers (such as those from the cluster of differentiation family that are commonly used for this purpose in immunology). Recent developments in single-cell RNA sequencing have facilitated classification of cell types based on shared gene expression patterns. This has led to the discovery of many new cell types in e.g. mouse cortex, hippocampus, dorsal root ganglion and spinal cord.
Animals have evolved a greater diversity of cell types in a multicellular body (100–150 different cell types), compared with 10–20 in plants, fungi, and protists. The exact number of cell types is, however, undefined, and the Cell Ontology, as of 2021, lists over 2,300 different cell types.
Multicellular organisms
All higher multicellular organisms contain cells specialised for different functions. Most distinct cell types arise from a single totipotent cell that differentiates into hundreds of different cell types during the course of development. Differentiation of cells is driven by different environmental cues (such as cell–cell interaction) and intrinsic differences (such as those caused by the uneven distribution of molecules during division). Multicellular organisms are composed of cells that fall into two fundamental types: germ cells and somatic cells. During development, somatic cells will become more specialized and form the three primary germ layers: ectoderm, mesoderm, and endoderm. After formation of the three germ layers, cells will continue to special
Document 1:::
In biology, cell theory is a scientific theory first formulated in the mid-nineteenth century, that organisms are made up of cells, that they are the basic structural/organizational unit of all organisms, and that all cells come from pre-existing cells. Cells are the basic unit of structure in all organisms and also the basic unit of reproduction.
The theory was once universally accepted, but now some biologists consider non-cellular entities such as viruses to be living organisms and thus disagree with the first tenet. As of 2021, "expert opinion remains divided roughly a third each between yes, no and don't know". As there is no universally accepted definition of life, discussion continues.
History
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the microscope, he was able to see pores. This was surprising at the time, as such structures had never been observed before. To further support his theory, Matthias Schleiden and Theodor Schwann also studied cells in both animals and plants. What they discovered were significant differences between the two types of cells. This put forth the idea that cells were fundamental not only to plants, but to animals as well.
Microscopes
The discovery of the cell was made possible through the invention of the microscope. In the first century BC, Romans were able to make glass. They discovered that objects appeared to be larger under the glass. The expanded use of lenses in eyeglasses in the 13th century probably led to wider spread use of simple microscopes (magnifying glasses) with limited magnification. Compound microscopes, which combine an objective lens with an eyepiece to view a real image achieving much higher magnification, first appeared in Europe around 1620. In 1665, Robert Hooke used a microscope
Document 2:::
Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure.
General characteristics
There are two types of cells: prokaryotes and eukaryotes.
Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles.
Prokaryotes
Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement).
Eukaryotes
Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str
Document 3:::
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'.
Cells can acquire specialized functions and carry out various tasks such as replication, DNA repair, protein synthesis, and motility. Cells are capable of specialization and of movement.
Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms.
The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology.
Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago.
Discovery
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i
Document 4:::
An organism () is any biological living system that functions as an individual life form. All organisms are composed of cells. The idea of organism is based on the concept of minimal functional unit of life. Three traits have been proposed to play the main role in qualification as an organism:
noncompartmentability – a structure that cannot be divided without loss of its functionality,
individuality – the entity simultaneously holds genetic uniqueness, genetic homogeneity and autonomy,
distinctness – the genetic information has to be maintained by an open system (a cell).
Organisms include multicellular animals, plants, and fungi; or unicellular microorganisms such as protists, bacteria, and archaea. All types of organisms are capable of reproduction, growth and development, maintenance, and some degree of response to stimuli. Most multicellular organisms differentiate into specialized tissues and organs during their development.
In 2016, a set of 355 genes from the last universal common ancestor (LUCA) of all organisms from Earth was identified.
Etymology
The term "organism" (from Greek ὀργανισμός, organismos, from ὄργανον, organon, i.e. "instrument, implement, tool, organ of sense or apprehension") first appeared in the English language in 1703 and took on its current definition by 1834 (Oxford English Dictionary). It is directly related to the term "organization". There is a long tradition of defining organisms as self-organizing beings, going back at least to Immanuel Kant's 1790 Critique of Judgment.
Definitions
An organism may be defined as an assembly of molecules functioning as a more or less stable whole that exhibits the properties of life. Dictionary definitions can be broad, using phrases such as "any living structure, such as a plant, animal, fungus or bacterium, capable of growth and reproduction". Many definitions exclude viruses and possible synthetic non-organic life forms, as viruses are dependent on the biochemical machinery of a host cell for repr
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which characteristic do single-celled organisms and multicellular organisms have in common?
A. Both have cells with specialized functions for each life process.
B. Both perform all life processes within one cell.
C. Both have a way to get rid of waste materials.
D. Both are able to make food from sunlight.
Answer:
|
|
sciq-4587
|
multiple_choice
|
The word antibiotic comes from the greek anti, meaning “against,” and bios, meaning this?
|
[
"virus",
"life",
"germ",
"bacteria"
] |
B
|
Relavent Documents:
Document 0:::
An antibiotic is a type of antimicrobial substance active against bacteria. It is the most important type of antibacterial agent for fighting bacterial infections, and antibiotic medications are widely used in the treatment and prevention of such infections. They may either kill or inhibit the growth of bacteria. A limited number of antibiotics also possess antiprotozoal activity. Antibiotics are not effective against viruses such as the common cold or influenza; drugs which inhibit growth of viruses are termed antiviral drugs or antivirals rather than antibiotics. They are also not effective against fungi; drugs which inhibit growth of fungi are called antifungal drugs.
Sometimes, the term antibiotic—literally "opposing life", from the Greek roots ἀντι anti, "against" and βίος bios, "life"—is broadly used to refer to any substance used against microbes, but in the usual medical usage, antibiotics (such as penicillin) are those produced naturally (by one microorganism fighting another), whereas non-antibiotic antibacterials (such as sulfonamides and antiseptics) are fully synthetic. However, both classes have the same goal of killing or preventing the growth of microorganisms, and both are included in antimicrobial chemotherapy. "Antibacterials" include antiseptic drugs, antibacterial soaps, and chemical disinfectants, whereas antibiotics are an important class of antibacterials used more specifically in medicine and sometimes in livestock feed.
Antibiotics have been used since ancient times. Many civilizations used topical application of moldy bread, with many references to its beneficial effects arising from ancient Egypt, Nubia, China, Serbia, Greece, and Rome. The first person to directly document the use of molds to treat infections was John Parkinson (1567–1650). Antibiotics revolutionized medicine in the 20th century. Alexander Fleming (1881–1955) discovered modern day penicillin in 1928, the widespread use of which proved significantly beneficial during wa
Document 1:::
1972 – amoxicillin
1972 – cefradine
1972 – minocycline
1972 – pristinamycin
1973 – fosfomycin
1974 – talampicillin
1975 – tobramycin
1975 – bacampicillin
1975 – ticarcillin
1976 – amikacin
1977 – azlocillin
1977 – cefadroxil
1977 – cefamandole
1977 – cefoxitin
1977 – c
Document 2:::
Antibiosis is a biological interaction between two or more organisms that is detrimental to at least one of them; it can also be an antagonistic association between an organism and the metabolic substances produced by another. Examples of antibiosis include the relationship between antibiotics and bacteria or animals and disease-causing pathogens. The study of antibiosis and its role in antibiotics has led to the expansion of knowledge in the field of microbiology. Molecular processes such cell wall synthesis and recycling, for example, have become better understood through the study of how antibiotics affect beta-lactam development through the antibiosis relationship and interaction of the particular drugs with the bacteria subjected to the compound.
Antibiosis is typically studied in host plant populations and extends to the insects which feed upon them.
"Antibiosis resistance affects the biology of the insect so pest abundance and subsequent damage is reduced compared to that which would have occurred if the insect was on a susceptible crop variety. Antibiosis resistance often results in increased mortality or reduced longevity and reproduction of the insect."
During a study of antibiosis, it was determined that the means of achieving effective antibiosis is remaining still. "When you give antibiotic-producing bacteria a structured medium, they affix to substrate, grow clonally, and produce a “no man's land,” absent competitors, where the antibiotics diffuse outward." Antibiosis is most effective when resources are neither plentiful nor sparse; it performs best at this intermediate point on the resource scale.
See also
Antibiotic
Biological pest control
Biotechnology
Symbiosis
Document 3:::
MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States.
Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students; the remainder require a subscription. The service is suspended with the message:
"Please check back with us in 2017".
External links
MicrobeLibrary
Microbiology
Document 4:::
An antimicrobial is an agent that kills microorganisms (microbicide) or stops their growth (bacteriostatic agent). Antimicrobial medicines can be grouped according to the microorganisms they act primarily against. For example, antibiotics are used against bacteria, and antifungals are used against fungi. They can also be classified according to their function. The use of antimicrobial medicines to treat infection is known as antimicrobial chemotherapy, while the use of antimicrobial medicines to prevent infection is known as antimicrobial prophylaxis.
The main classes of antimicrobial agents are disinfectants (non-selective agents, such as bleach), which kill a wide range of microbes on non-living surfaces to prevent the spread of illness, antiseptics (which are applied to living tissue and help reduce infection during surgery), and antibiotics (which destroy microorganisms within the body). The term antibiotic originally described only those formulations derived from living microorganisms but is now also applied to synthetic agents, such as sulfonamides or fluoroquinolones. Though the term used to be restricted to antibacterials (and is often used as a synonym for them by medical professionals and in medical literature), its context has broadened to include all antimicrobials. Antibacterial agents can be further subdivided into bactericidal agents, which kill bacteria, and bacteriostatic agents, which slow down or stall bacterial growth. In response, further advancements in antimicrobial technologies have resulted in solutions that can go beyond simply inhibiting microbial growth. Instead, certain types of porous media have been developed to kill microbes on contact. Overuse or misuse of antimicrobials can lead to the development of antimicrobial resistance.
History
Antimicrobial use has been common practice for at least 2000 years. Ancient Egyptians and ancient Greeks used specific molds and plant extracts to treat infection.
In the 19th century, microbiologist
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The word antibiotic comes from the greek anti, meaning “against,” and bios, meaning this?
A. virus
B. life
C. germ
D. bacteria
Answer:
|
|
sciq-467
|
multiple_choice
|
By shocking ocean water, earthquakes can cause what deadly ocean waves?
|
[
"typhoons",
"ebb tides",
"tsunamis",
"deep currents"
] |
C
|
Relavent Documents:
Document 0:::
This list of rogue waves compiles incidents of known and likely rogue waves – also known as freak waves, monster waves, killer waves, and extreme waves. These are dangerous and rare ocean surface waves that unexpectedly reach at least twice the height of the tallest waves around them, and are often described by witnesses as "walls of water". They occur in deep water, usually far out at sea, and are a threat even to capital ships, ocean liners and land structures such as lighthouses.
In addition to the incidents listed below, it has also been suggested that these types of waves may be responsible for the loss of several low-flying United States Coast Guard helicopters on search and rescue missions.
Background
Anecdotal evidence from mariners' testimonies and incidents of wave damage to ships have long suggested rogue waves occurred; however, their scientific measurement was positively confirmed only following measurements of the Draupner wave, a rogue wave at the Draupner platform, in the North Sea on 1 January 1995. During this event, minor damage was inflicted on the platform, confirming that the reading was valid.
In modern oceanography, rogue waves are defined not as the biggest possible waves at sea, but instead as extreme sized waves for a given sea state.
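A common operational criterion (an assumption here, since the excerpt gives no formula) flags a wave as rogue when its height exceeds twice the significant wave height Hs, defined as the mean of the highest one-third of wave heights in the record. A minimal Python sketch with hypothetical data:

import numpy as np

def significant_wave_height(heights):
    """Hs: mean of the highest one-third of the individual wave heights."""
    h = np.sort(np.asarray(heights, dtype=float))[::-1]
    return h[: max(1, len(h) // 3)].mean()

def rogue_waves(heights, factor=2.0):
    """Flag waves exceeding factor * Hs (the common 2x criterion)."""
    hs = significant_wave_height(heights)
    return [h for h in heights if h > factor * hs]

# Hypothetical wave-height record (metres): an ordinary sea state of
# 3-6 m waves plus a single 13 m wave.
record = list(np.arange(3.0, 6.0, 0.1)) + [13.0]
print(rogue_waves(record))  # -> [13.0]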
Many of these encounters are only reported in the media, and are not examples of open ocean rogue waves. Often a huge wave is loosely and incorrectly denoted as a rogue wave. Extremely large waves offer an explanation for the otherwise-inexplicable disappearance of many ocean-going vessels. However, the claim is contradicted by information held by Lloyd's Register. One of the very few cases where evidence suggests a freak wave incident is the 1978 loss of the freighter . This claim, however, is contradicted by other sources, which maintain that, over a time period from 1969 to 1994 alone, rogue waves were responsible for the complete loss of 22 supertankers, often with their entire crew. In 2007, resear
Document 1:::
A submarine, undersea, or underwater earthquake is an earthquake that occurs underwater at the bottom of a body of water, especially an ocean. They are the leading cause of tsunamis. The magnitude can be measured scientifically by the use of the moment magnitude scale and the intensity can be assigned using the Mercalli intensity scale.
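For reference, the moment magnitude mentioned above is obtained from the seismic moment M_0; the standard Hanks–Kanamori definition (not quoted from this excerpt) is

M_w = \frac{2}{3}\left(\log_{10} M_0 - 9.1\right), \qquad M_0 \text{ in newton-metres},

so a tenfold increase in seismic moment raises the magnitude by two-thirds of a unit.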
Understanding plate tectonics helps to explain the cause of submarine earthquakes. The Earth's surface, or lithosphere, comprises tectonic plates averaging approximately 50 miles in thickness that move continuously and very slowly over a bed of magma in the asthenosphere and inner mantle. The plates converge upon one another, and one subducts below the other, or, where there is only shear stress, they move horizontally past each other (see transform plate boundary below). Small movements called fault creep are minor and not easily measurable. Where rough spots lock the plate edges together, the plates as a whole continue to move, building up strain. When the rough spots can no longer hold, the sudden release of the built-up motion under the sea floor causes a submarine earthquake. The underground point of slippage is the focus (hypocenter); the point on the seabed directly above it is the epicenter, where shaking is typically strongest and damage greatest.
As with a continental earthquake, the severity of the damage is often caused not by the earthquake at the rift zone itself, but by events the earthquake triggers. Where a continental earthquake causes damage and loss of life on land from fires, damaged structures, and flying objects, a submarine earthquake alters the seabed, producing a series of waves and, depending on the length and magnitude of the earthquake, a tsunami, which bears down on coastal cities, causing property damage and loss of life.
Submarine earthquakes can also damage submarine communications cables, leading to widespread disruption of the Internet and international telephone networ
Document 2:::
The Human-Induced Earthquake Database (HiQuake) is an online database that documents all reported cases of induced seismicity proposed on scientific grounds. It is the most complete compilation of its kind and is freely available to download via the associated website. The database is periodically updated to correct errors, revise existing entries, and add new entries reported in new scientific papers and reports. Suggestions for revisions and new entries can be made via the associated website.
History
In 2016, Nederlandse Aardolie Maatschappij funded a team of researchers from Durham University and Newcastle University to conduct a full review of induced seismicity. This review formed part of a scientific workshop aimed at estimating the maximum possible magnitude earthquake that might be induced by conventional gas production in the Groningen gas field.
The resulting database from the review was publicly released online on 26 January 2017. The database was accompanied by the publication of two scientific papers, the more detailed of which is freely available online.
Document 3:::
Branched flow refers to a phenomenon in wave dynamics that produces a tree-like pattern involving successive, mostly forward-scattering events by smooth obstacles deflecting traveling rays or waves. Sudden and significant momentum or wavevector changes are absent, but accumulated small changes can lead to large momentum changes. The path of a single ray is less important than the environs around a ray, which rotate, compress, and stretch in an area-preserving way. Even more revealing are groups, or manifolds, of neighboring rays extending over significant zones. Starting rays from a point while varying their direction over a range, or from different points along a line all with the same initial direction, are examples of a manifold. Waves have analogous launching conditions, such as a point source spraying in many directions, or an extended plane wave heading in one direction. The ray bending, or refraction, leads to characteristic structure in phase space and nonuniform distributions in coordinate space that look somehow universal and resemble branches in trees or stream beds. The branches take non-obvious paths through the refracting landscape that are indirect and nonlocal results of the terrain already traversed. For a given refracting landscape, the branches will look completely different depending on the initial manifold.
Examples
Two-dimensional electron gas
Branched flow was first identified in experiments with a two-dimensional electron gas. Electrons flowing from a quantum point contact were scanned using a scanning probe microscope. Instead of the usual diffraction patterns, the electrons flowed in branching strands that persisted for several correlation lengths of the background potential.
Ocean dynamics
Focusing of random waves in the ocean can also lead to branched flow. The fluctuation in the depth of the ocean floor can be described as a random potential. A tsunami wave propagating in such a medium will form branches which
Document 4:::
Wave loading is most commonly the application of a pulsed or wavelike load to a material or object. It is most often encountered in the analysis of piping, ships, or building structures that experience wind, water, or seismic disturbances.
Examples of wave loading
Offshore storms and pipes: As large waves pass over shallowly buried pipes, water pressure increases above them. As the trough approaches, pressure over the pipe drops, and this sudden, repeated variation in pressure can break pipes. The pressure difference for a wave with a wave height of about 10 m is equivalent to roughly one atmosphere (101.3 kPa or 14.7 psi) of pressure variation between crest and trough, and repeated fluctuations over pipes in relatively shallow environments can set up resonance vibrations within pipes or structures and cause problems (see the sketch after this list).
Engineering oil platforms: The effects of wave loading are a serious issue for engineers designing oil platforms, who must contend with them and have devised a number of algorithms to do so.
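The crest-to-trough figure quoted in the first item follows from the hydrostatic relation delta_p = rho * g * H. The minimal Python sketch below reproduces it, assuming a typical seawater density; the constants and function name are illustrative.

# Hydrostatic pressure swing between wave crest and trough over a buried pipe.
RHO_SEAWATER = 1025.0  # kg/m^3, typical seawater density (assumption)
G = 9.81               # m/s^2

def pressure_swing_pa(wave_height_m: float) -> float:
    """Peak-to-peak hydrostatic pressure variation for a given wave height."""
    return RHO_SEAWATER * G * wave_height_m

h = 10.0  # m, wave height from the example above
dp = pressure_swing_pa(h)
print(f"Wave height {h} m -> pressure swing {dp / 1000:.1f} kPa "
      f"({dp / 101_325:.2f} atm)")  # ~100.6 kPa, close to 1 atm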
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
By shocking ocean water, earthquakes can cause what deadly ocean waves?
A. typhoons
B. ebb tides
C. tsunamis
D. deep currents
Answer:
|
|
sciq-5898
|
multiple_choice
|
What is the term for the method of sending out ultrasound waves to determine the locations of objects?
|
[
"magnetism",
"catabolism",
"echolocation",
"morphology"
] |
C
|
Relavent Documents:
Document 0:::
Ultrasonic transducers and ultrasonic sensors are devices that generate or sense ultrasound energy. They can be divided into three broad categories: transmitters, receivers and transceivers. Transmitters convert electrical signals into ultrasound, receivers convert ultrasound into electrical signals, and transceivers can both transmit and receive ultrasound.
Applications and performance
Ultrasound can be used for measuring wind speed and direction (anemometer), tank or channel fluid level, and speed through air or water. For measuring speed or direction, a device uses multiple detectors and calculates the speed from the relative distances to particulates in the air or water. To measure tank or channel liquid level, and also sea level (tide gauge), the sensor measures the distance (ranging) to the surface of the fluid. Further applications include: humidifiers, sonar, medical ultrasonography, burglar alarms and non-destructive testing.
Systems typically use a transducer that generates sound waves in the ultrasonic range (above 18 kHz) by turning electrical energy into sound and then, upon receiving the echo, turns the sound waves back into electrical energy, which can be measured and displayed.
This technology, as well, can detect approaching objects and track their positions.
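As a concrete sketch of the pulse-echo cycle just described (transmit a burst, time the echo, halve the round trip), the Python snippet below converts a round-trip time to range. The speed of sound in air and the example timing are assumptions for illustration.

# Pulse-echo ranging: distance = speed_of_sound * round_trip_time / 2.
def echo_distance_m(round_trip_s: float, c_m_s: float = 343.0) -> float:
    """Distance to a reflector from the round-trip time of an ultrasound burst.

    343 m/s is the approximate speed of sound in air at 20 C (assumption).
    """
    return c_m_s * round_trip_s / 2.0

# A 5.8 ms round trip in air corresponds to roughly a 1 m target distance.
print(f"{echo_distance_m(5.8e-3):.2f} m")  # ~0.99 m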
Ultrasound can also be used to make point-to-point distance measurements by transmitting and receiving discrete bursts of ultrasound between transducers. This technique is known as sonomicrometry, where the transit time of the ultrasound signal is measured electronically (i.e., digitally) and converted mathematically to the distance between transducers, assuming the speed of sound of the medium between the transducers is known. This method can be very precise in terms of temporal and spatial resolution because the time-of-flight measurement can be derived from tracking the same incident (received) waveform either by reference level or zero crossing. This enables the measurement resolution to far exceed
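The sonomicrometry conversion described above amounts to multiplying a one-way transit time by an assumed speed of sound. A minimal sketch follows; the 1540 m/s soft-tissue value, function name, and example time are illustrative assumptions.

# Sonomicrometry distance estimate from one-way ultrasound time-of-flight.
def sonomicrometry_distance_mm(transit_time_us: float,
                               speed_of_sound_m_s: float = 1540.0) -> float:
    """Convert a one-way transit time (microseconds) to distance (mm).

    1540 m/s is a commonly assumed speed of sound in soft tissue.
    """
    return transit_time_us * 1e-6 * speed_of_sound_m_s * 1e3

# Example: a 13 microsecond transit time ~ 20 mm between transducers.
print(f"{sonomicrometry_distance_mm(13.0):.1f} mm")  # ~20.0 mm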
Document 1:::
Schlieren imaging is a method to visualize density variations in transparent media.
The term "schlieren imaging" is commonly used as a synonym for schlieren photography, though this article particularly treats visualization of the pressure field produced by ultrasonic transducers, generally in water or tissue-mimicking media. The method provides a two-dimensional (2D) projection image of the acoustic beam in real-time ("live video").
The unique properties of the method enable the investigation of specific features of the acoustic field (e.g. focal point in HIFU transducers), detection of acoustic beam-profile irregularities (e.g. due to defects in transducer) and on-line identification of time-dependent phenomena (e.g. in phased array transducers). Some researchers say that schlieren imaging is equivalent to an X-ray radiograph of the acoustic field.
Setup
The optical setup of a schlieren imaging system may comprise the following main sections:
Parallel beam, focusing element, stop (sharp edge) and a camera.
The parallel beam may be achieved by a point-like light source (a laser focused into a pinhole is sometimes used) placed in the focal point of a collimating optical element (lens or mirror).
The focusing element may be a lens or a mirror.
The optical stop may be realized by a razor placed horizontally or vertically in the focal point of the focusing element, carefully positioned to block the light spot image on its edge.
The camera is positioned behind the stop and may be equipped with a suitable lens.
Physics
Ray optics description
A parallel beam is described as a group of straight and parallel 'rays'.
The rays cross through the transparent medium while potentially interacting with the contained acoustic field, and finally reach the focusing element.
Note that the principle of a focusing element is to direct (i.e., focus) parallel rays into a single point on the focal plane of the element.
Thus, the population of rays crossing the focal
Document 2:::
Acoustics is a branch of physics that deals with the study of mechanical waves in gases, liquids, and solids including topics such as vibration, sound, ultrasound and infrasound. A scientist who works in the field of acoustics is an acoustician while someone working in the field of acoustics technology may be called an acoustical engineer. The application of acoustics is present in almost all aspects of modern society with the most obvious being the audio and noise control industries.
Hearing is one of the most crucial means of survival in the animal world and speech is one of the most distinctive characteristics of human development and culture. Accordingly, the science of acoustics spreads across many facets of human society—music, medicine, architecture, industrial production, warfare and more. Likewise, animal species such as songbirds and frogs use sound and hearing as a key element of mating rituals or for marking territories. Art, craft, science and technology have provoked one another to advance the whole, as in many other fields of knowledge. Robert Bruce Lindsay's "Wheel of Acoustics" is a well accepted overview of the various fields in acoustics.
History
Etymology
The word "acoustic" is derived from the Greek word ἀκουστικός (akoustikos), meaning "of or for hearing, ready to hear" and that from ἀκουστός (akoustos), "heard, audible", which in turn derives from the verb ἀκούω(akouo), "I hear".
The Latin synonym is "sonic", after which the term sonics used to be a synonym for acoustics and later a branch of acoustics. Frequencies above and below the audible range are called "ultrasonic" and "infrasonic", respectively.
Early research in acoustics
In the 6th century BC, the ancient Greek philosopher Pythagoras wanted to know why some combinations of musical sounds seemed more beautiful than others, and he found answers in terms of numerical ratios representing the harmonic overtone series on a string. He is reputed to have observed that when the length
Document 3:::
Home ultrasound is the provision of therapeutic ultrasound via the use of a portable or home ultrasound machine. This method of medical ultrasound therapy can be used for various types of pain relief and physical therapy.
In physics, the term "ultrasound" applies to all acoustic energy with a frequency above the audible range of human hearing. The audible range of sound is 20 hertz – 20 kilohertz. Ultrasound frequency is greater than 20 kilohertz.
Machines
Ultrasound energy is transferred based on the frequency and power output of the ultrasonic waves that an ultrasound machine or device creates. Home ultrasound machines and doctor's-office machines both operate between 1 and 5 megahertz; however, home machines utilize pulsed ultrasonic waves, while professional ultrasound machines in a doctor's office use continuous waves.
Typically, a home ultrasound machine is used more frequently than treatments at a therapist's office would be, but the end results are the same as those of using a continuous-wave machine less frequently. Treatments target deep muscles (for example, before a workout) and tendon and joint conditions such as arthritis, frozen shoulder, strains, and sprains.
Home ultrasound machines are available for purchase at prices ranging from about 46 to 5,000 U.S. dollars.
Benefits
Home ultrasound machines may have several benefits: long-term cost savings, portable physical therapy treatment, long-term pain relief for multiple symptoms, a possible decrease in healing time, and a reduction in chronic inflammation. They may also increase knee range of motion in conditions such as osteoarthritis (OA), the most common joint disorder, whose incidence increases with age. Treatment of OA aims to reduce joint pain and stiffness and to preserve and improve joint mobility; reported benefits of ultrasound include improvements on pain, function, and quality-of-life scales.
Types of ultrasound therapy
Home ultrasound machines operate within the rang
Document 4:::
In acoustics, dynamic aperture is analogous to aperture in photography. The arrays in side-scan sonar can be programmed to transmit just a few elements at a time or all the elements at once. The more elements transmitting, the narrower the beam and the better the resolution.
The ratio of the imaging depth to the aperture size is known as the F-number. Dynamic aperture keeps this number constant by growing the aperture with the imaging depth until the physical aperture cannot be increased further. A modern medical ultrasound machine has a typical F-number of 0.5.
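A minimal sketch of the constant-F-number rule described above: the active aperture grows with imaging depth until the physical array is exhausted. The element pitch, element count, and helper name are illustrative assumptions.

# Dynamic aperture: fire enough elements to hold F-number = depth / aperture
# constant, clamped to the physical array size.
def active_elements(depth_mm: float,
                    f_number: float = 0.5,
                    pitch_mm: float = 0.3,
                    total_elements: int = 128) -> int:
    """Number of array elements to fire for a given imaging depth."""
    aperture_mm = depth_mm / f_number      # aperture needed for constant F#
    n = round(aperture_mm / pitch_mm)      # convert width to element count
    return min(max(n, 1), total_elements)  # clamp to the physical array

for depth in (5.0, 10.0, 20.0, 40.0):
    print(f"depth {depth:5.1f} mm -> {active_elements(depth)} elements")
# Beyond ~19 mm the full 128-element aperture is used and F-number rises.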
Side-scan sonar systems produce images by forming angular "beams". Beam width is determined by the length of the sonar array; narrower beams resolve finer detail, so longer arrays with narrower beams provide finer spatial resolution.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for the method of sending out ultrasound waves to determine the locations of objects?
A. magnetism
B. catabolism
C. echolocation
D. morphology
Answer:
|