Dataset schema:
- id: string (length 6–15)
- question_type: string (1 class)
- question: string (length 15–683)
- choices: list (length 4)
- answer: string (5 classes)
- explanation: string (481 classes)
- prompt: string (length 1.75k–10.9k)
sciq-10675
multiple_choice
What is the study of chemical processes that occur in living things?
[ "phrenology", "biochemistry", "cardiology", "physiology" ]
B
Relevant Documents: Document 0::: This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines. Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mind: neuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, they have provided information on certain diseases, which has aided the overall understanding of human health. Basic life science branches Biology – scientific study of life Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans Astrobiology – the study of the formation and presence of life in the universe Bacteriology – study of bacteria Biotechnology – study of the combination of living organisms and technology Biochemistry – study of the chemical reactions required for life to exist and function, usually with a focus on the cellular level Bioinformatics – development of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge Biolinguistics – the study of the biology and evolution of language. 
Biological anthropology – the study of humans, non-hum Document 1::: Biochemical engineering, also known as bioprocess engineering, is a field of study with roots stemming from chemical engineering and biological engineering. It mainly deals with the design, construction, and advancement of unit processes that involve biological organisms (such as fermentation) or organic molecules (often enzymes) and has various applications in areas of interest such as biofuels, food, pharmaceuticals, biotechnology, and water treatment processes. The role of a biochemical engineer is to take findings developed by biologists and chemists in a laboratory and translate that to a large-scale manufacturing process. History For hundreds of years, humans have made use of the chemical reactions of biological organisms in order to create goods. In the mid-1800s, Louis Pasteur was one of the first people to look into the role of these organisms when he researched fermentation. His work also contributed to the use of pasteurization, which is still used to this day. By the early 1900s, the use of microorganisms had expanded, and was used to make industrial products. Up to this point, biochemical engineering hadn't developed as a field yet. It wasn't until 1928 when Alexander Fleming discovered penicillin that the field of biochemical engineering was established. After this discovery, samples were gathered from around the world in order to continue research into the characteristics of microbes from places such as soils, gardens, forests, rivers, and streams. Today, biochemical engineers can be found working in a variety of industries, from food to pharmaceuticals. This is due to the increasing need for efficiency and production which requires knowledge of how biological systems and chemical reactions interact with each other and how they can be used to meet these needs. 
Education Biochemical engineering is not a major offered by most universities and is instead an area of interest under the chemical engineering major in most cases. The following universiti Document 2::: Biochemistry is the study of the chemical processes in living organisms. It deals with the structure and function of cellular components such as proteins, carbohydrates, lipids, nucleic acids and other biomolecules. Articles related to biochemistry include: 0–9 2-amino-5-phosphonovalerate - 3' end - 5' end Document 3::: The following outline is provided as an overview of and topical guide to biophysics: Biophysics – interdisciplinary science that uses the methods of physics to study biological systems. Nature of biophysics Biophysics is An academic discipline – branch of knowledge that is taught and researched at the college or university level. Disciplines are defined (in part), and recognized by the academic journals in which research is published, and the learned societies and academic departments or faculties to which their practitioners belong. A scientific field (a branch of science) – widely recognized category of specialized expertise within science, and typically embodies its own terminology and nomenclature. Such a field will usually be represented by one or more scientific journals, where peer-reviewed research is published. A natural science – one that seeks to elucidate the rules that govern the natural world using empirical and scientific methods. A biological science – concerned with the study of living organisms, including their structure, function, growth, evolution, distribution, and taxonomy. A branch of physics – concerned with the study of matter and its motion through space and time, along with related concepts such as energy and force. 
An interdisciplinary field – field of science that overlaps with other sciences Scope of biophysics research Biomolecular scale Biomolecule Biomolecular structure Organismal scale Animal locomotion Biomechanics Biomineralization Motility Environmental scale Biophysical environment Biophysics research overlaps with Agrophysics Biochemistry Biophysical chemistry Bioengineering Biogeophysics Nanotechnology Systems biology Branches of biophysics Astrobiophysics – field of intersection between astrophysics and biophysics concerned with the influence of astrophysical phenomena upon life on planet Earth or some other planet in general. Medical biophysics – interdisciplinary field that applies me Document 4::: Biochemists are scientists who are trained in biochemistry. They study chemical processes and chemical transformations in living organisms. Biochemists study DNA, proteins and cell parts. The word "biochemist" is a portmanteau of "biological chemist." Biochemists also research how certain chemical reactions happen in cells and tissues and observe and record the effects of products in food additives and medicines. Biochemist researchers focus on planning and constructing research experiments, mainly for developing new products, updating existing products and analyzing said products. It is also the responsibility of a biochemist to present their research findings and create grant proposals to obtain funds for future research. Biochemists study aspects of the immune system, the expression of genes, the isolation, analysis, and synthesis of different products, and mutations that lead to cancers, and they manage laboratory teams and monitor laboratory work. Biochemists must also be able to design and build laboratory equipment and devise new methods of producing correct results for products. The most common industry role is the development of biochemical products and processes. 
Identifying substances' chemical and physical properties in biological systems is of great importance, and can be carried out by doing various types of analysis. Biochemists must also prepare technical reports after collecting, analyzing and summarizing the information and trends found. In biochemistry, researchers often break down complicated biological systems into their component parts. They study the effects of foods, drugs, allergens and other substances on living tissues; they research molecular biology, the study of life at the molecular level and the study of genes and gene expression; and they study chemical reactions in metabolism, growth, reproduction, and heredity, and apply techniques drawn from biotechnology and genetic engineering to help them in their research. Abou The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the study of chemical processes that occur in living things? A. phrenology B. biochemistry C. cardiology D. physiology Answer:
sciq-1010
multiple_choice
What is it in bone marrow transplants that may cause a graft versus host reaction?
[ "lymphocytes", "neutrophils", "tumors", "cancer" ]
A
Relevant Documents: Document 0::: Graft-versus-host disease (GvHD) is a syndrome characterized by inflammation in different organs. GvHD is commonly associated with bone marrow transplants and stem cell transplants. White blood cells of the donor's immune system that remain within the donated tissue (the graft) recognize the recipient (the host) as foreign (non-self). The white blood cells present within the transplanted tissue then attack the recipient's body's cells, which leads to GvHD. This should not be confused with a transplant rejection, which occurs when the immune system of the transplant recipient rejects the transplanted tissue; GvHD occurs when the donor's immune system's white blood cells reject the recipient. The underlying principle (alloimmunity) is the same, but the details and course may differ. GvHD can also occur after a blood transfusion, known as transfusion-associated graft-versus-host disease (TA-GvHD), if the blood products used have not been gamma irradiated or treated with an approved leukocyte reduction system. In contrast to organ/tissue transplant associated GvHD, the incidence of TA-GvHD is increased with HLA matching (first-degree or close relatives). Types In the clinical setting, graft-versus-host disease is divided into acute and chronic forms, and scored or graded on the basis of the tissue affected and the severity of the reaction. In the classical sense, acute graft-versus-host disease is characterized by selective damage to the liver, skin (rash), mucosa, and the gastrointestinal tract. Newer research indicates that other graft-versus-host disease target organs include the immune system (the hematopoietic system, e.g., the bone marrow and the thymus) itself, and the lungs in the form of immune-mediated pneumonitis. Biomarkers can be used to identify specific causes of GvHD, such as elafin in the skin. 
Chronic graft-versus-host disease also attacks the above organs, but over its long-term course can also cause damage to the connective tissue and exocri Document 1::: Bone marrow is a semi-solid tissue found within the spongy (also known as cancellous) portions of bones. In birds and mammals, bone marrow is the primary site of new blood cell production (or haematopoiesis). It is composed of hematopoietic cells, marrow adipose tissue, and supportive stromal cells. In adult humans, bone marrow is primarily located in the ribs, vertebrae, sternum, and bones of the pelvis. Bone marrow comprises approximately 5% of total body mass in healthy adult humans, such that a man weighing 73 kg (161 lbs) will have around 3.7 kg (8 lbs) of bone marrow. Human marrow produces approximately 500 billion blood cells per day, which join the systemic circulation via permeable vasculature sinusoids within the medullary cavity. All types of hematopoietic cells, including both myeloid and lymphoid lineages, are created in bone marrow; however, lymphoid cells must migrate to other lymphoid organs (e.g. thymus) in order to complete maturation. Bone marrow transplants can be conducted to treat severe diseases of the bone marrow, including certain forms of cancer such as leukemia. Several types of stem cells are related to bone marrow. Hematopoietic stem cells in the bone marrow can give rise to hematopoietic lineage cells, and mesenchymal stem cells, which can be isolated from the primary culture of bone marrow stroma, can give rise to bone, adipose, and cartilage tissue. Structure The composition of marrow is dynamic, as the mixture of cellular and non-cellular components (connective tissue) shifts with age and in response to systemic factors. In humans, marrow is colloquially characterized as "red" or "yellow" marrow (, , respectively) depending on the prevalence of hematopoietic cells vs fat cells. 
While the precise mechanisms underlying marrow regulation are not understood, compositional changes occur according to stereotypical patterns. For example, a newborn baby's bones exclusively contain hematopoietically active "red" marrow, and there is a pro Document 2::: Graft-versus-tumor effect (GvT) appears after allogeneic hematopoietic stem cell transplantation (HSCT). The graft contains donor T cells (T lymphocytes) that can be beneficial for the recipient by eliminating residual malignant cells. GvT might develop after recognizing tumor-specific or recipient-specific alloantigens. It could lead to remission or immune control of hematologic malignancies. This effect applies in myeloma and lymphoid leukemias, lymphoma, multiple myeloma and possibly breast cancer. It is closely linked with graft-versus-host disease (GvHD), as the underlying principle of alloimmunity is the same. CD4+CD25+ regulatory T cells (Treg) can be used to suppress GvHD without loss of the beneficial GvT effect. The biology of the GvT response is still not fully understood, but it is probable that the reaction with polymorphic minor histocompatibility antigens expressed either specifically on hematopoietic cells or more widely on a number of tissue cells or tumor-associated antigens is involved. This response is mediated largely by cytotoxic T lymphocytes (CTL) but it can be employed by natural killer (NK) cells as separate effectors, particularly in T-cell-depleted HLA-haploidentical HSCT. Graft-versus-leukemia Graft-versus-leukemia (GvL) is a specific type of GvT effect. As the name of this effect indicates, GvL is a reaction against leukemic cells of the host. GvL requires genetic disparity because the effect is dependent on the alloimmunity principle. GvL is a part of the reaction of the graft against the host. Whereas graft-versus-host disease (GvHD) has a negative impact on the host, GvL is beneficial for patients with hematopoietic malignancies. 
After HSC transplantation both GvL and GvHD develop. The interconnection of those two effects can be seen by comparison of leukemia relapse after HSC transplantation with development of GvHD. Patients who develop chronic or acute GvHD have a lower chance of leukemia relapse. When transplanting T-cell depleted stem Document 3::: Transplant rejection occurs when transplanted tissue is rejected by the recipient's immune system, which destroys the transplanted tissue. Transplant rejection can be lessened by determining the molecular similitude between donor and recipient and by use of immunosuppressant drugs after transplant. Types of transplant rejection Transplant rejection can be classified into three types: hyperacute, acute, and chronic. These types are differentiated by how quickly the recipient's immune system is activated and the specific aspect or aspects of immunity involved. Hyperacute rejection Hyperacute rejection is a form of rejection that manifests itself in the minutes to hours following transplantation. It is caused by the presence of pre-existing antibodies in the recipient that recognize antigens in the donor organ. These antigens are located on the endothelial lining of blood vessels within the transplanted organ and, once antibodies bind, will lead to the rapid activation of the complement system. Irreversible damage via thrombosis and subsequent graft necrosis is to be expected. Tissue left implanted will fail to work and could lead to high fever and malaise as the immune system acts against foreign tissue. Graft failure secondary to hyperacute rejection has significantly decreased in incidence as a result of improved pre-transplant screening for antibodies to donor tissues. While these preformed antibodies may result from prior transplants, prior blood transfusions, or pregnancy, hyperacute rejection is most commonly from antibodies to ABO blood group antigens. 
Consequently, transplants between individuals with differing ABO blood types are generally avoided, though they may be pursued in very young children (generally under 12 months, but often as old as 24 months) who do not have fully developed immune systems. Shortages of organs and the morbidity and mortality associated with being on transplant waitlists have also increased interest in ABO-incompatible transplantation in o Document 4::: Donna L. Farber is the Chief of Surgical Sciences, George H. Humphreys, II Professor of Surgical Sciences, and Professor of Microbiology and Immunology at Columbia University. Her research focuses on transplant immunology and memory T-cells. Education and career Farber received her B.S. in microbiology from the University of Michigan and her Ph.D. in biochemistry and molecular biology from the University of California Santa Barbara. She did postdoctoral research at Yale University and at the Pasteur Institute in Paris, France. Career She joined the faculty at the University of Maryland in 1996. In 2010, she moved to Columbia University. In 2019, she was elected as a fellow of the American Association for the Advancement of Science. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is it in bone marrow transplants that may cause a graft versus host reaction? A. lymphocytes B. neutrophils C. tumors D. cancer Answer:
sciq-1801
multiple_choice
What is the ideal mechanical advantage in the single fixed pulley?
[ "1", "4", "zero", "2" ]
A
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. 
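The source does not state the intended answer to the adiabatic-expansion item above; for the usual reading (a quasi-static expansion doing work on the surroundings), a one-line first-law argument settles it:

```latex
% Reversible adiabatic expansion of an ideal gas:
\delta Q = 0 \;\Rightarrow\; dU = -p\,dV,
\qquad dU = n C_V\,dT
\;\Rightarrow\; n C_V\,dT = -p\,dV .
% With dV > 0 and p > 0, it follows that dT < 0: the temperature decreases.
```

For a free (Joule) expansion into vacuum, by contrast, no work is done and the temperature of an ideal gas stays the same, which is why the "need more information" option is defensible if the process is left unspecified.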
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered. The 1995 version has 30 five-way multiple choice questions. Example question (question 4): Gender differences The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher. Document 2::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. 
Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at a public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include Document 3::: A simple machine that exhibits mechanical advantage is called a mechanical advantage device - e.g.: Lever: The beam shown is in static equilibrium around the fulcrum. This is due to the moment created by vector force "A" counterclockwise (moment A*a) being in equilibrium with the moment created by vector force "B" clockwise (moment B*b). The relatively low vector force "B" is translated into a relatively high vector force "A". 
The force is thus increased in the ratio of the forces A : B, which is equal to the ratio of the distances to the fulcrum b : a. This ratio is called the mechanical advantage. This idealised situation does not take into account friction. Wheel and axle motion (e.g. screwdrivers, doorknobs): A wheel is essentially a lever with one arm the distance between the axle and the outer point of the wheel, and the other the radius of the axle. Typically this is a fairly large difference, leading to a proportionately large mechanical advantage. This allows even simple wheels with wooden axles running in wooden blocks to still turn freely, because their friction is overwhelmed by the rotational force of the wheel multiplied by the mechanical advantage. A block and tackle of multiple pulleys creates mechanical advantage by having the flexible material looped over several pulleys in turn. Adding more loops and pulleys increases the mechanical advantage. Screw: A screw is essentially an inclined plane wrapped around a cylinder. The run over the rise of this inclined plane is the mechanical advantage of a screw. Pulleys Consider lifting a weight with rope and pulleys. A rope looped through a pulley attached to a fixed spot, e.g. a barn roof rafter, and attached to the weight is called a single pulley. It has a mechanical advantage (MA) = 1 (assuming frictionless bearings in the pulley), meaning no mechanical advantage (or disadvantage), however advantageous the change in direction may be. A single movable pulley has an MA of 2 (assuming frictionless be
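The lever and pulley relations above reduce to simple ratios, sketched here in Python; the function names are illustrative, not from any source.

```python
# Ideal mechanical advantage (IMA) for the simple machines discussed above.
# All formulas assume ideal, frictionless machines.

def lever_ima(effort_arm: float, load_arm: float) -> float:
    """IMA of a lever: the ratio of the distances to the fulcrum, b : a."""
    return effort_arm / load_arm

def pulley_ima(supporting_segments: int) -> float:
    """IMA of an ideal pulley system: the number of rope segments
    supporting the load. A single fixed pulley has one supporting
    segment, so IMA = 1 (it only changes the direction of the force);
    a single movable pulley has two, so IMA = 2."""
    return float(supporting_segments)

print(pulley_ima(1))        # single fixed pulley -> 1.0
print(pulley_ima(2))        # single movable pulley -> 2.0
print(lever_ima(4.0, 1.0))  # effort arm 4x the load arm -> 4.0
```

The same counting rule extends to a block and tackle: each additional supporting rope segment adds one to the ideal mechanical advantage.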
Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework. AP Physics 1 and 2 AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge. AP Physics 1 AP Physics 1 covers Newtonian mechanics, including: Unit 1: Kinematics Unit 2: Dynamics Unit 3: Circular Motion and Gravitation Unit 4: Energy Unit 5: Momentum Unit 6: Simple Harmonic Motion Unit 7: Torque and Rotational Motion Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2. AP Physics 2 AP Physics 2 covers the following topics: Unit 1: Fluids Unit 2: Thermodynamics Unit 3: Electric Force, Field, and Potential Unit 4: Electric Circuits Unit 5: Magnetism and Electromagnetic Induction Unit 6: Geometric and Physical Optics Unit 7: Quantum, Atomic, and Nuclear Physics AP Physics C From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the ideal mechanical advantage in the single fixed pulley? A. 1 B. 4 C. zero D. 2 Answer:
sciq-6699
multiple_choice
What can human facial expressions communicate?
[ "behaviors", "ideas", "emotions", "theories" ]
C
Relevant Documents: Document 0::: A facial expression is one or more motions or positions of the muscles beneath the skin of the face. According to one set of controversial theories, these movements convey the emotional state of an individual to observers. Facial expressions are a form of nonverbal communication. They are a primary means of conveying social information between humans, but they also occur in most other mammals and some other animal species. (For a discussion of the controversies on these claims, see Fridlund and Russell & Fernandez Dols.) Humans can adopt a facial expression voluntarily or involuntarily, and the neural mechanisms responsible for controlling the expression differ in each case. Voluntary facial expressions are often socially conditioned and follow a cortical route in the brain. Conversely, involuntary facial expressions are believed to be innate and follow a subcortical route in the brain. Facial recognition can be an emotional experience for the brain, and the amygdala is highly involved in the recognition process. The amygdala is the integrative center for emotions, emotional behavior, and motivation. The eyes are often viewed as important features of facial expressions. Aspects such as blinking rate can possibly be used to indicate whether a person is nervous or whether they are lying. Also, eye contact is considered an important aspect of interpersonal communication. However, there are cultural differences regarding the social propriety of maintaining eye contact or not. Beyond the accessory nature of facial expressions in spoken communication between people, they play a significant role in communication with sign language. Many phrases in sign language include facial expressions in the display. There is controversy surrounding the question of whether facial expressions are a worldwide and universal display among humans. 
Supporters of the Universality Hypothesis claim that many facial expressions are innate and have roots in evolutionary ancestors. Opponents o Document 1::: Facial electromyography (fEMG) refers to an electromyography (EMG) technique that measures muscle activity by detecting and amplifying the tiny electrical impulses that are generated by muscle fibers when they contract. It primarily focuses on two major muscle groups in the face, the corrugator supercilii group which is associated with frowning and the zygomaticus major muscle group which is associated with smiling. Uses Facial EMG has been studied to assess its utility as a tool for measuring emotional reaction. Studies have found that activity of the corrugator muscle, which lowers the eyebrow and is involved in producing frowns, varies inversely with the emotional valence of presented stimuli and reports of mood state. Activity of the zygomatic major muscle, which controls smiling, is said to be positively associated with positive emotional stimuli and positive mood state. Facial EMG has been used as a technique to distinguish and track positive and negative emotional reactions to a stimulus as they occur. A large number of those experiments have been conducted in controlled laboratory environments using a range of stimuli, e.g., still pictures, movie clips and music pieces. It has also been used to investigate emotional responses in individuals with autism spectrum disorders. Although commonly used as an index of emotional responses, facial muscle activity is also influenced by the social context in which it is measured. Using facial EMG in immersive virtual environments, Philipp, Storrs, and Vanman showed that even relatively impoverished social cues in a virtual environment can cause increases in zygomaticus major activity that are unrelated to self-reported emotional states. In 2012 Durso et al. 
were able to show that facial EMG could be used to detect confusion, both in participants who admitted being confused and in those who did not, suggesting that it could be used as an effective addition to a sensor suite as a monitor of loss of understanding or Document 2::: The study of the evolution of emotions dates back to the 19th century. Evolution and natural selection has been applied to the study of human communication, mainly by Charles Darwin in his 1872 work, The Expression of the Emotions in Man and Animals. Darwin researched the expression of emotions in an effort to support his materialist theory of unguided evolution. He proposed that much like other traits found in animals, emotions apparently also evolved and were adapted over time. His work looked at not only facial expressions in animals and specifically humans, but attempted to point out parallels between behaviors in humans and other animals. According to evolutionary theory, different emotions evolved at different times. Primal emotions, such as love and fear, are associated with ancient parts of the psyche. Social emotions, such as guilt and pride, evolved among social primates. Evolutionary psychologists consider human emotions to be best adapted to the life our ancestors led in nomadic foraging bands. Origins Darwin's original plan was to include his findings about expression of emotions in a chapter of his work, The Descent of Man, and Selection in Relation to Sex (Darwin, 1871) but found that he had enough material for a whole book. It was based on observations, both those around him and of people in many parts of the world. One important observation he made was that even in individuals who were born blind, body and facial expressions displayed are similar to those of anyone else. The ideas found in his book on universality of emotions were intended to go against Sir Charles Bell's 1844 claim that human facial muscles were created to give them the unique ability to express emotions. 
The main purpose of Darwin's work was to support the theory of evolution by demonstrating that emotions in humans and other animals are similar. Most of the similarities he found were between species closely related, but he found some similarities between distantly related spe Document 3::: The EmojiGrid is an affective self-report tool consisting of a rectangular grid that is labelled with emojis. It is trademark of Kikkoman. The facial expressions of the emoji labels vary from disliking via neutral to liking along the x-axis, and gradually increase in intensity along the y-axis. To report their affective appraisal of a given stimulus, users mark the location inside the grid that best represents their impression. The EmojiGrid can either be used as a paper or computer-based response tool. The images needed to implement the EmojiGrid are freely available from the OSF repository. Applications The EmojiGrid was inspired by Russell's Affect Grid and was originally developed and validated for the affective appraisal of food stimuli, since conventional affective self-report tools (e.g., Self-Assessment Mannikin are frequently misunderstood in that context. It has since been used and validated for the affective appraisal of a wide range of affective stimuli such as images, audio and video clips, 360 VR videos, touch events, food, and odors. It has also been used for the affective analysis of architectural spaces to assess affective experience of trail racing, and to assess the emotional face evaluation capability of people with early dementia. Since it is intuitive and language independent, the EmojiGrid is also suitable for cross-cultural research. Implementation In a computer-based response paradigm, only the image area inside the horizontal and vertical grid borders should be responsive (clickable), so that users can report their affective response by pointing and/or clicking inside the grid.  
In practice, this may be achieved by superimposing (1) a clickable image of the unlabeled grid area on top of (2) a larger image showing the grid area together with the emoji labels. The images needed to implement the EmojiGrid are freely available from the OSF repository. An implementation of the EmojiGrid rating task in the Gorilla experiment builder is free Document 4::: Affect displays are the verbal and non-verbal displays of affect (emotion). These displays can be through facial expressions, gestures and body language, volume and tone of voice, laughing, crying, etc. Affect displays can be altered or faked so one may appear one way, when they feel another (e.g., smiling when sad). Affect can be conscious or non-conscious and can be discreet or obvious. The display of positive emotions, such as smiling, laughing, etc., is termed "positive affect", while the displays of more negative emotions, such as crying and tense gestures, is respectively termed "negative affect". Affect is important in psychology as well as in communication, mostly when it comes to interpersonal communication and non-verbal communication. In both psychology and communication, there are a multitude of theories that explain affect and its impact on humans and quality of life. Theoretical perspective Affect can be taken to indicate an instinctual reaction to stimulation occurring before the typical cognitive processes considered necessary for the formation of a more complex emotion. Robert B. Zajonc asserts that this reaction to stimuli is primary for human beings and is the dominant reaction for lower organisms. Zajonc suggests affective reactions can occur without extensive perceptual and cognitive encoding, and can be made sooner and with greater confidence than cognitive judgments. Lazarus on the other hand considers affect to be post-cognitive. That is, affect is elicited only after a certain amount of cognitive processing of information has been accomplished. 
In this view, an affective reaction, such as liking, disliking, evaluation, or the experience of pleasure or displeasure, is based on a prior cognitive process in which a variety of content discriminations are made and features are identified, examined for their value, and weighted for their contributions. A divergence from a narrow reinforcement model for emotion allows for other perspectives on The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What can human facial expressions communicate? A. behaviors B. ideas C. emotions D. theories Answer:
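The EmojiGrid response paradigm described above (a clickable unlabeled grid whose x-axis runs from disliking to liking and whose y-axis encodes intensity) amounts to a simple mapping from click coordinates to a pair of scores. The sketch below is a hypothetical illustration: the 500 by 500 pixel active area and the 1 to 9 output scale are assumptions, not part of the published tool.

```python
# Hypothetical sketch of converting a click inside the EmojiGrid's
# responsive (unlabeled) area into affective scores. The grid size and
# the 1-9 scale are illustrative assumptions.

def emojigrid_rating(x_px, y_px, width=500, height=500, scale=(1.0, 9.0)):
    """Map a click at (x_px, y_px) to a (valence, intensity) pair.

    The x axis runs from disliking to liking; the y axis runs from low
    to high intensity. Pixel y grows downward, so it is flipped.
    """
    lo, hi = scale
    valence = lo + (x_px / width) * (hi - lo)
    intensity = lo + (1.0 - y_px / height) * (hi - lo)
    return valence, intensity
```

A click in the exact center of the active area maps to the midpoint (5.0, 5.0) of the assumed scale; the top-right corner maps to maximal liking at maximal intensity.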
sciq-8344
multiple_choice
What term is defined as the ability to locate objects in the dark by bouncing sound waves off them?
[ "night vision", "echolocation", "morphology", "thermodynamics" ]
B
Relevant Documents: Document 0::: Mammalian vision is the process of mammals perceiving light, analyzing it and forming subjective sensations, on the basis of which the animal's idea of the spatial structure of the external world is formed. Responsible for this process in mammals is the visual sensory system, the foundations of which were formed at an early stage in the evolution of chordates. Its peripheral part is formed by the eyes, the intermediate (by the transmission of nerve impulses) - the optic nerves, and the central - the visual centers in the cerebral cortex. The recognition of visual stimuli in mammals is the result of the joint work of the eyes and the brain. At the same time, a significant part of the visual information is already processed at the receptor level, which makes it possible to significantly reduce the amount of such information received by the brain. Elimination of redundancy in the amount of information is inevitable: if the amount of information delivered to the receptors of the visual system is measured in millions of bits per second (in humans - about 1 bits/s), the capabilities of the nervous system to process it are limited to tens of bits per second. The organs of vision in mammals are, as a rule, well developed, although in their life they are of less importance than for birds: usually mammals pay little attention to immovable objects, so even cautious animals such as a fox or a hare may come close to a human who stands still without movement. The size of the eyes in mammals is relatively small; in humans, eye weight is 1% of the mass of the head, while in a starling it reaches 15%. Nocturnal animals (for example, tarsiers) and animals that live in open landscapes have larger eyes. The vision of forest animals is not so sharp, and in burrowing underground species (moles, gophers, zokors), eyes are reduced to a greater extent, in some cases (marsupial moles, mole rats, blind mole), they are even covered by a skin membrane.
Mammalian eye Like other vertebrates, the mammal Document 1::: Acoustics is a branch of physics that deals with the study of mechanical waves in gases, liquids, and solids including topics such as vibration, sound, ultrasound and infrasound. A scientist who works in the field of acoustics is an acoustician while someone working in the field of acoustics technology may be called an acoustical engineer. The application of acoustics is present in almost all aspects of modern society with the most obvious being the audio and noise control industries. Hearing is one of the most crucial means of survival in the animal world and speech is one of the most distinctive characteristics of human development and culture. Accordingly, the science of acoustics spreads across many facets of human society—music, medicine, architecture, industrial production, warfare and more. Likewise, animal species such as songbirds and frogs use sound and hearing as a key element of mating rituals or for marking territories. Art, craft, science and technology have provoked one another to advance the whole, as in many other fields of knowledge. Robert Bruce Lindsay's "Wheel of Acoustics" is a well accepted overview of the various fields in acoustics. History Etymology The word "acoustic" is derived from the Greek word ἀκουστικός (akoustikos), meaning "of or for hearing, ready to hear" and that from ἀκουστός (akoustos), "heard, audible", which in turn derives from the verb ἀκούω(akouo), "I hear". The Latin synonym is "sonic", after which the term sonics used to be a synonym for acoustics and later a branch of acoustics. Frequencies above and below the audible range are called "ultrasonic" and "infrasonic", respectively. 
Early research in acoustics In the 6th century BC, the ancient Greek philosopher Pythagoras wanted to know why some combinations of musical sounds seemed more beautiful than others, and he found answers in terms of numerical ratios representing the harmonic overtone series on a string. He is reputed to have observed that when the length Document 2::: The spectro-temporal receptive field or spatio-temporal receptive field (STRF) of a neuron represents which types of stimuli excite or inhibit that neuron. "Spectro-temporal" refers most commonly to audition, where the neuron's response depends on frequency versus time, while "spatio-temporal" refers to vision, where the neuron's response depends on spatial location versus time. Thus they are not exactly the same concept, but both are referred to as STRF and serve a similar role in the analysis of neural responses. If linearity is assumed, the neuron can be modeled as having a time-varying firing rate equal to the convolution of the stimulus with the STRF. Auditory STRFs The example STRF here is for an auditory neuron from the area CM (caudal medial) of a male zebra finch, when played conspecific birdsong. The colour of this plot shows the effect of sound on this neuron: this neuron tends to be excited by sound from about 2.5 kHz to 7 kHz heard by the animal 12 ms ago, but it is inhibited by sound in the same frequency range from about 18 ms ago. Visual STRFs See Dario L. Ringach Receptive Fields in Macaque Primary Visual Cortex Spatial Structure and Symmetry of Simple-Cell (2002) J. H. van Hateren and D. L. 
Ruderman Independent component analysis of natural image sequences yields spatio-temporal filters similar to simple cells in primary visual cortex (2002) Idealized computational models for auditory receptive fields A computational theory for early auditory receptive fields can be expressed from normative physical, mathematical and perceptual arguments, permitting axiomatic derivation of auditory receptive fields in two stages: a first stage of temporal receptive fields corresponding to an idealized cochlea model modeled as window Fourier transform with either Gabor functions in the case of non-causal time or Gammatone functions alternatively generalized Gammatone functions for a truly time-causal model in which the future cannot be accessed, a secon Document 3::: Biotremology is the study of production, dispersion and reception of mechanical vibrations by organisms, and their effect on behavior. This involves neurophysiological and anatomical basis of vibration production and detection, and relation of vibrations to the medium they disperse through. Vibrations can represent either signals used in vibrational (seismic) communication or inadvertent cues used, for example, in locating prey (in some cases even both). In almost all known cases, they are transmitted as surface waves along the boundary of a medium, i.e. Rayleigh waves or bending waves. While most attention is directed towards the role of vibrations in animal behavior, plants actively respond to sounds and vibrations as well, so this subject is shared with plant bioacoustics. Other groups of organisms (such as nematodes) are also postulated to either actively produce or at least use vibrations to sense their environment, but those are currently far less studied. Traditionally regarded part of bioacoustics, the discipline has recently begun to actively diverge on its own, because of the many peculiarities of the studied modality compared with sound. 
Vibrational communication has been recognized as evolutionarily older than sound and much more prevalent, at least among arthropods, although the two modalities are closely related and sometimes overlap. While many experimental approaches are shared between the two disciplines, scientists in the field of biotremology often use special equipment, such as laser vibrometers, for detecting faint vibrational emissions by animals and electromagnetic transducers in contact with the substrate for artificial playback experiments. History People have observed vibrational communication by animals for hundreds of years, although the idea that vibrations may convey information dates to the middle of the 20th century. Swedish entomologist Frej Ossiannilsson pioneered the field in 1949 by suggesting vibrations transmitted through pl Document 4::: Detection theory or signal detection theory is a means to measure the ability to differentiate between information-bearing patterns (called stimulus in living organisms, signal in machines) and random patterns that distract from the information (called noise, consisting of background stimuli and random activity of the detection machine and of the nervous system of the operator). In the field of electronics, signal recovery is the separation of such patterns from a disguising background. According to the theory, there are a number of determiners of how a detecting system will detect a signal, and where its threshold levels will be. The theory can explain how changing the threshold will affect the ability to discern, often exposing how adapted the system is to the task, purpose or goal at which it is aimed. When the detecting system is a human being, characteristics such as experience, expectations, physiological state (e.g., fatigue) and other factors can affect the threshold applied. 
For instance, a sentry in wartime might be likely to detect fainter stimuli than the same sentry in peacetime due to a lower criterion; however, they might also be more likely to treat innocuous stimuli as a threat. Much of the early work in detection theory was done by radar researchers. By 1954, the theory was fully developed on the theoretical side as described by Peterson, Birdsall and Fox, and the foundation for the psychological theory was made by Wilson P. Tanner, David M. Green, and John A. Swets, also in 1954. Detection theory was used in 1966 by John A. Swets and David M. Green for psychophysics. Green and Swets criticized the traditional methods of psychophysics for their inability to discriminate between the real sensitivity of subjects and their (potential) response biases. Detection theory has applications in many fields such as diagnostics of any kind, quality control, telecommunications, and psychology. The concept is similar to the signal-to-noise ratio used in the The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What term is defined as the ability to locate objects in the dark by bouncing sound waves off them? A. night vision B. echolocation C. morphology D. thermodynamics Answer:
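Document 4's distinction between real sensitivity and response bias is usually quantified with the sensitivity index d' and the criterion c, both computed from a hit rate and a false-alarm rate. This is a minimal stdlib-only sketch, assuming the standard equal-variance Gaussian model:

```python
# d' = z(H) - z(F) and c = -(z(H) + z(F)) / 2, where z is the inverse of
# the standard normal CDF (equal-variance Gaussian model).
from statistics import NormalDist

_z = NormalDist().inv_cdf  # inverse standard normal CDF

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity: separation of signal and noise in SD units."""
    return _z(hit_rate) - _z(false_alarm_rate)

def criterion(hit_rate, false_alarm_rate):
    """Response bias c: negative values mean a liberal criterion
    (the wartime sentry), positive values a conservative one."""
    return -(_z(hit_rate) + _z(false_alarm_rate)) / 2
```

With hits at 0.84 and false alarms at 0.16, d' comes out near 1.99 with c at 0; lowering the criterion raises the hit rate but also the false-alarm rate, which is exactly the sentry trade-off described above.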
sciq-4223
multiple_choice
Metamorphic rocks form when an existing rock is changed by heat or what?
[ "radiation", "chemical reaction", "cold", "pressure" ]
D
Relevant Documents: Document 0::: In geology, rock (or stone) is any naturally occurring solid mass or aggregate of minerals or mineraloid matter. It is categorized by the minerals included, its chemical composition, and the way in which it is formed. Rocks form the Earth's outer solid layer, the crust, and most of its interior, except for the liquid outer core and pockets of magma in the asthenosphere. The study of rocks involves multiple subdisciplines of geology, including petrology and mineralogy. It may be limited to rocks found on Earth, or it may include planetary geology that studies the rocks of other celestial objects. Rocks are usually grouped into three main groups: igneous rocks, sedimentary rocks and metamorphic rocks. Igneous rocks are formed when magma cools in the Earth's crust, or lava cools on the ground surface or the seabed. Sedimentary rocks are formed by diagenesis and lithification of sediments, which in turn are formed by the weathering, transport, and deposition of existing rocks. Metamorphic rocks are formed when existing rocks are subjected to such high pressures and temperatures that they are transformed without significant melting. Humanity has made use of rocks since the earliest humans. This early period, called the Stone Age, saw the development of many stone tools. Stone was then used as a major component in the construction of buildings and early infrastructure. Mining developed to extract rocks from the Earth and obtain the minerals within them, including metals. Modern technology has allowed the development of new man-made rocks and rock-like substances, such as concrete. Study Geology is the study of Earth and its components, including the study of rock formations. Petrology is the study of the character and origin of rocks. Mineralogy is the study of the mineral components that create rocks.
The study of rocks and their components has contributed to the geological understanding of Earth's history, the archaeological understanding of human history, and the Document 1::: The rock cycle is a basic concept in geology that describes transitions through geologic time among the three main rock types: sedimentary, metamorphic, and igneous. Each rock type is altered when it is forced out of its equilibrium conditions. For example, an igneous rock such as basalt may break down and dissolve when exposed to the atmosphere, or melt as it is subducted under a continent. Due to the driving forces of the rock cycle, plate tectonics and the water cycle, rocks do not remain in equilibrium and change as they encounter new environments. The rock cycle explains how the three rock types are related to each other, and how processes change from one type to another over time. This cyclical aspect makes rock change a geologic cycle and, on planets containing life, a biogeochemical cycle. Transition to igneous rock When rocks are pushed deep under the Earth's surface, they may melt into magma. If the conditions no longer exist for the magma to stay in its liquid state, it cools and solidifies into an igneous rock. A rock that cools within the Earth is called intrusive or plutonic and cools very slowly, producing a coarse-grained texture such as the rock granite. As a result of volcanic activity, magma (which is called lava when it reaches Earth's surface) may cool very rapidly on the Earth's surface exposed to the atmosphere and are called extrusive or volcanic rocks. These rocks are fine-grained and sometimes cool so rapidly that no crystals can form and result in a natural glass, such as obsidian, however the most common fine-grained rock would be known as basalt. Any of the three main types of rocks (igneous, sedimentary, and metamorphic rocks) can melt into magma and cool into igneous rocks. 
Secondary changes Epigenetic change (secondary processes occurring at low temperatures and low pressures) may be arranged under a number of headings, each of which is typical of a group of rocks or rock-forming minerals, though usually more than one of these alt Document 2::: Metamictisation (sometimes called metamictization or metamiction) is a natural process resulting in the gradual and ultimately complete destruction of a mineral's crystal structure, leaving the mineral amorphous. The affected material is therefore described as metamict. Certain minerals occasionally contain interstitial impurities of radioactive elements, and it is the alpha radiation emitted from those compounds that is responsible for degrading a mineral's crystal structure through internal bombardment. The effects of metamictisation are extensive: other than negating any birefringence previously present, the process also lowers a mineral's refractive index, hardness, and its specific gravity. The mineral's colour is also affected: metamict specimens are usually green, brown or blackish. Further, metamictisation diffuses the bands of a mineral's absorption spectrum. Curiously and inexplicably, the one attribute which metamictisation does not alter is dispersion. All metamict materials are themselves radioactive, some dangerously so. An example of a metamict mineral is zircon. The presence of uranium and thorium atoms substituting for zirconium in the crystal structure is responsible for the radiation damage in this case. Unaffected specimens are termed high zircon while metamict specimens are termed low zircon. Other minerals known to undergo metamictisation include allanite, gadolinite, ekanite, thorite and titanite. Ekanite is almost invariably found completely metamict as thorium and uranium are part of its essential chemical composition. Metamict minerals can have their crystallinity and properties restored through prolonged annealing. 
A related phenomenon is the formation of pleochroic halos surrounding minute zircon inclusions within a crystal of biotite or other mineral. The spherical halos are produced by alpha particle radiation from the included uranium- or thorium-bearing species. Such halos can also be found surrounding monazite and other radioacti Document 3::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell / need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests.
In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 4::: The geologic record in stratigraphy, paleontology and other natural sciences refers to the entirety of the layers of rock strata. That is, deposits laid down by volcanism or by deposition of sediment derived from weathering detritus (clays, sands etc.). This includes all its fossil content and the information it yields about the history of the Earth: its past climate, geography, geology and the evolution of life on its surface. According to the law of superposition, sedimentary and volcanic rock layers are deposited on top of each other. They harden over time to become a solidified (competent) rock column, that may be intruded by igneous rocks and disrupted by tectonic events. Correlating the rock record At a certain locality on the Earth's surface, the rock column provides a cross section of the natural history in the area during the time covered by the age of the rocks. This is sometimes called the rock history and gives a window into the natural history of the location that spans many geological time units such as ages, epochs, or in some cases even multiple major geologic periods—for the particular geographic region or regions. The geologic record is in no one place entirely complete for where geologic forces one age provide a low-lying region accumulating deposits much like a layer cake, in the next may have uplifted the region, and the same area is instead one that is weathering and being torn down by chemistry, wind, temperature, and water. 
This is to say that in a given location, the geologic record can be and is quite often interrupted as the ancient local environment was converted by geological forces into new landforms and features. Sediment core data at the mouths of large riverine drainage basins, some of which go deep thoroughly support the law of superposition. However using broadly occurring deposited layers trapped within differently located rock columns, geologists have pieced together a system of units covering most of the geologic time scale The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Metamorphic rocks form when an existing rock is changed by heat or what? A. radiation B. chemical reaction C. cold D. pressure Answer:
sciq-10278
multiple_choice
Historically, certain bacteriophages have also been used as cloning vectors for making what?
[ "diverse libraries", "genomic libraries", "ultraviolet libraries", "specific libraries" ]
B
Relevant Documents: Document 0::: MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States. Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students; the remainder require a subscription. The service is suspended with the message: "Please check back with us in 2017". External links MicrobeLibrary Microbiology Document 1::: The Investigative Biology Teaching Laboratories are located at Cornell University on the first floor of Comstock Hall. They are well-equipped biology teaching laboratories used to provide hands-on laboratory experience to Cornell undergraduate students. Currently, they are the home of the Investigative Biology Laboratory Course (BioG1500), and are frequently used by the Cornell Institute for Biology Teachers, the Disturbance Ecology course and Insectapalooza. In the past, the Investigative Biology Teaching Laboratories hosted the laboratory portion of the Introductory Biology Course with the course number of Bio103-104 (renumbered to BioG1103-1104). The Investigative Biology Teaching Laboratories house the Science Communication and Public Engagement Undergraduate Minor. History Bio103-104 BioG1103-1104 Biological Sciences Laboratory course was a two-semester, two-credit course. BioG1103 was offered in the spring, while 1104 was offered in the fall. BioG1500 This course was first offered in Fall 2010. It is a one-semester course, offered in the Fall, Spring and Summer for 2 credits. One credit is awarded for the lecture and one credit for the three-hour-long lab, following the SUNY system.
Document 2::: Biochemical engineering, also known as bioprocess engineering, is a field of study with roots stemming from chemical engineering and biological engineering. It mainly deals with the design, construction, and advancement of unit processes that involve biological organisms (such as fermentation) or organic molecules (often enzymes) and has various applications in areas of interest such as biofuels, food, pharmaceuticals, biotechnology, and water treatment processes. The role of a biochemical engineer is to take findings developed by biologists and chemists in a laboratory and translate that to a large-scale manufacturing process. History For hundreds of years, humans have made use of the chemical reactions of biological organisms in order to create goods. In the mid-1800s, Louis Pasteur was one of the first people to look into the role of these organisms when he researched fermentation. His work also contributed to the use of pasteurization, which is still used to this day. By the early 1900s, the use of microorganisms had expanded, and was used to make industrial products. Up to this point, biochemical engineering hadn't developed as a field yet. It wasn't until 1928 when Alexander Fleming discovered penicillin that the field of biochemical engineering was established. After this discovery, samples were gathered from around the world in order to continue research into the characteristics of microbes from places such as soils, gardens, forests, rivers, and streams. Today, biochemical engineers can be found working in a variety of industries, from food to pharmaceuticals. This is due to the increasing need for efficiency and production which requires knowledge of how biological systems and chemical reactions interact with each other and how they can be used to meet these needs. Education Biochemical engineering is not a major offered by most universities and is instead an area of interest under the chemical engineering major in most cases. 
The following universiti Document 3::: The following outline is provided as an overview of and topical guide to biophysics: Biophysics – interdisciplinary science that uses the methods of physics to study biological systems. Nature of biophysics Biophysics is An academic discipline – branch of knowledge that is taught and researched at the college or university level. Disciplines are defined (in part), and recognized by the academic journals in which research is published, and the learned societies and academic departments or faculties to which their practitioners belong. A scientific field (a branch of science) – widely recognized category of specialized expertise within science, and typically embodies its own terminology and nomenclature. Such a field will usually be represented by one or more scientific journals, where peer-reviewed research is published. A natural science – one that seeks to elucidate the rules that govern the natural world using empirical and scientific methods. A biological science – concerned with the study of living organisms, including their structure, function, growth, evolution, distribution, and taxonomy. A branch of physics – concerned with the study of matter and its motion through space and time, along with related concepts such as energy and force. An interdisciplinary field – field of science that overlaps with other sciences Scope of biophysics research Biomolecular scale Biomolecule Biomolecular structure Organismal scale Animal locomotion Biomechanics Biomineralization Motility Environmental scale Biophysical environment Biophysics research overlaps with Agrophysics Biochemistry Biophysical chemistry Bioengineering Biogeophysics Nanotechnology Systems biology Branches of biophysics Astrobiophysics – field of intersection between astrophysics and biophysics concerned with the influence of the astrophysical phenomena upon life on planet Earth or some other planet in general. 
Medical biophysics – interdisciplinary field that applies me Document 4::: Biology by Team (in German: Biologie im Team) is the first Austrian biology contest for upper secondary schools. Students at upper secondary schools who are especially interested in biology can deepen their knowledge and broaden their competence in experimental biology within the framework of this contest. Each year, a team of teachers chooses modules of key themes on which students work in the form of a voluntary exercise. The evaluation focuses in particular on the practical work, and, since the school year 2004/05, also on teamwork. In April, a two-day closing competition takes place, in which six groups of students from participating schools are given various problems to solve. A jury (persons from the science and corporate communities) evaluates the results and how they are presented. The concept was developed by a team of teachers in co-operation with the AHS (Academic Secondary Schools) - Department of the Pedagogical Institute in Carinthia. Since 2008 it has been situated at the Science department of the University College of Teacher Training Carinthia. The first contest in the school year 2002/03 took place under the motto: Hell is loose in the ground under us. Other themes included Beautiful but dangerous, www-worldwide water 1 and 2, Expedition forest, Relationship boxes, Mole's view, Biological timetravel, Biology at the University, Ecce Homo, Biodiversity, Death in tin cans, Sex sells, Without a trace, Biologists see more, Quo vadis biology?, Biology without limits?, Diversity instead of simplicity, Grid square, Diversity instead of simplicity 0.2, www-worldwide water 3. The theme for the year 2023/24 is I hear something you don't see. To date, the following schools have participated: BG/BRG Mössingerstraße Klagenfurt Ingeborg-Bachmann-Gymnasium, Klagenfurt BG/BRG St.
Martinerstraße Villach BG/BRG Peraustraße Villach International school Carinthia, Velden Österreichisches Gymnasium Prag Europagymnasium Klagenfurt BRG Viktring Klagenfurt BORG Wo The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Historically, certain bacteriophages have also been used as cloning vectors for making what? A. diverse libraries B. genomic libraries C. ultraviolet libraries D. specific libraries Answer:
sciq-270
multiple_choice
More than half of all known organisms are what?
[ "enzymes", "spiders", "mammals", "insects" ]
D
Relevant Documents: Document 0::: Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology. Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals.
In modern times, the biological classification of animals relies on ad Document 1::: This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines. Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry) other on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mindneuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, it has provided information on certain diseases which has overall aided in the understanding of human health. 
Basic life science branches Biology – scientific study of life Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans Astrobiology – the study of the formation and presence of life in the universe Bacteriology – study of bacteria Biotechnology – study of combination of both the living organism and technology Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge Biolinguistics – the study of the biology and evolution of language. Biological anthropology – the study of humans, non-hum Document 2::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. 
The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New Document 3::: MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States. Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students, the remainder require a subscription. the service is suspended with the message to: "Please check back with us in 2017". External links MicrobeLibrary Microbiology Document 4::: Deborah M. Gordon (born December 30, 1955) is a biologist, appointed as a professor in the Department of Biology at Stanford University. Major research Gordon studies ant colony behavior and ecology, with a particular focus on red harvester ants. She focuses on the developing behavior of colonies, even as individual ants change functions within their own lifetimes. 
Gordon's fieldwork includes a long-term study of ant colonies in Arizona. She is the author of numerous articles and papers as well as the book Ants at Work for the general public, and she was profiled in The New York Times Magazine in 1999. In 2012, she found that the foraging behavior of red harvester ants matches the TCP congestion control algorithm. Education Gordon received a Ph.D. in zoology from Duke in 1983, an M.Sc. in Biology from Stanford in 1977 and a bachelor's degree from Oberlin College, where she majored in French. She was a junior fellow of the Harvard Society of Fellows. Awards and recognition In 1993, Gordon was named a Stanford MacNamara Fellow. In 1995 Gordon received an award for teaching excellence from the Phi Beta Kappa Northern California Association. In 2001 Gordon was awarded a Guggenheim fellowship from the John Simon Guggenheim Memorial Foundation. In 2003, Gordon was invited to speak at a TED conference. She is also an adviser to the Microbes Mind Forum. Bibliography The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. More than half of all known organisms are what? A. enzymes B. spiders C. mammals D. insects Answer:
sciq-6742
multiple_choice
Metallic solids are composed entirely of which atoms?
[ "metallic", "metals", "toxic", "organic" ]
A
Relevant Documents: Document 0::: A metalloid is a type of chemical element which has a preponderance of properties in between, or that are a mixture of, those of metals and nonmetals. There is no standard definition of a metalloid and no complete agreement on which elements are metalloids. Despite the lack of specificity, the term remains in use in the literature of chemistry. The six commonly recognised metalloids are boron, silicon, germanium, arsenic, antimony and tellurium. Five elements are less frequently so classified: carbon, aluminium, selenium, polonium and astatine. On a standard periodic table, all eleven elements are in a diagonal region of the p-block extending from boron at the upper left to astatine at lower right. Some periodic tables include a dividing line between metals and nonmetals, and the metalloids may be found close to this line. Typical metalloids have a metallic appearance, but they are brittle and only fair conductors of electricity. Chemically, they behave mostly as nonmetals. They can form alloys with metals. Most of their other physical properties and chemical properties are intermediate in nature. Metalloids are usually too brittle to have any structural uses. They and their compounds are used in alloys, biological agents, catalysts, flame retardants, glasses, optical storage and optoelectronics, pyrotechnics, semiconductors, and electronics. The electrical properties of silicon and germanium enabled the establishment of the semiconductor industry in the 1950s and the development of solid-state electronics from the early 1960s. The term metalloid originally referred to nonmetals. Its more recent meaning, as a category of elements with intermediate or hybrid properties, became widespread in 1940–1960. Metalloids are sometimes called semimetals, a practice that has been discouraged, as the term semimetal has a different meaning in physics than in chemistry.
In physics, it refers to a specific kind of electronic band structure of a substance. In this context, only Document 1::: A nonmetal is a chemical element that mostly lacks metallic properties. Seventeen elements are generally considered nonmetals, though some authors recognize more or fewer depending on the properties considered most representative of metallic or nonmetallic character. Some borderline elements further complicate the situation. Nonmetals tend to have low density and high electronegativity (the ability of an atom in a molecule to attract electrons to itself). They range from colorless gases like hydrogen to shiny solids like the graphite form of carbon. Nonmetals are often poor conductors of heat and electricity, and when solid tend to be brittle or crumbly. In contrast, metals are good conductors and most are pliable. While compounds of metals tend to be basic, those of nonmetals tend to be acidic. The two lightest nonmetals, hydrogen and helium, together make up about 98% of the observable ordinary matter in the universe by mass. Five nonmetallic elements—hydrogen, carbon, nitrogen, oxygen, and silicon—make up the overwhelming majority of the Earth's crust, atmosphere, oceans and biosphere. The distinct properties of nonmetallic elements allow for specific uses that metals often cannot achieve. Elements like hydrogen, oxygen, carbon, and nitrogen are essential building blocks for life itself. Moreover, nonmetallic elements are integral to industries such as electronics, energy storage, agriculture, and chemical production. Most nonmetallic elements were not identified until the 18th and 19th centuries. While a distinction between metals and other minerals had existed since antiquity, a basic classification of chemical elements as metallic or nonmetallic emerged only in the late 18th century. Since then nigh on two dozen properties have been suggested as single criteria for distinguishing nonmetals from metals. 
Definition and applicable elements Properties mentioned hereafter refer to the elements in their most stable forms in ambient conditions unless otherwise Document 2::: Material is a substance or mixture of substances that constitutes an object. Materials can be pure or impure, living or non-living matter. Materials can be classified on the basis of their physical and chemical properties, or on their geological origin or biological function. Materials science is the study of materials, their properties and their applications. Raw materials can be processed in different ways to influence their properties, by purification, shaping or the introduction of other materials. New materials can be produced from raw materials by synthesis. In industry, materials are inputs to manufacturing processes to produce products or more complex materials. Historical elements Materials chart the history of humanity. The system of the three prehistoric ages (Stone Age, Bronze Age, Iron Age) were succeeded by historical ages: steel age in the 19th century, polymer age in the middle of the following century (plastic age) and silicon age in the second half of the 20th century. Classification by use Materials can be broadly categorized in terms of their use, for example: Building materials are used for construction Building insulation materials are used to retain heat within buildings Refractory materials are used for high-temperature applications Nuclear materials are used for nuclear power and weapons Aerospace materials are used in aircraft and other aerospace applications Biomaterials are used for applications interacting with living systems Material selection is a process to determine which material should be used for a given application. Classification by structure The relevant structure of materials has a different length scale depending on the material. The structure and composition of a material can be determined by microscopy or spectroscopy. 
Microstructure In engineering, materials can be categorised according to their microscopic structure: Plastics: a wide range of synthetic or semi-synthetic materials that use polymers as a main ingred Document 3::: Nonmetals show more variability in their properties than do metals. Metalloids are included here since they behave predominately as chemically weak nonmetals. Physically, they nearly all exist as diatomic or monatomic gases, or polyatomic solids having more substantial (open-packed) forms and relatively small atomic radii, unlike metals, which are nearly all solid and close-packed, and mostly have larger atomic radii. If solid, they have a submetallic appearance (with the exception of sulfur) and are brittle, as opposed to metals, which are lustrous, and generally ductile or malleable; they usually have lower densities than metals; are mostly poorer conductors of heat and electricity; and tend to have significantly lower melting points and boiling points than those of most metals. Chemically, the nonmetals mostly have higher ionisation energies, higher electron affinities (nitrogen and the noble gases have negative electron affinities) and higher electronegativity values than metals noting that, in general, the higher an element's ionisation energy, electron affinity, and electronegativity, the more nonmetallic that element is. Nonmetals, including (to a limited extent) xenon and probably radon, usually exist as anions or oxyanions in aqueous solution; they generally form ionic or covalent compounds when combined with metals (unlike metals, which mostly form alloys with other metals); and have acidic oxides whereas the common oxides of nearly all metals are basic. Properties Abbreviations used in this section are: AR Allred-Rochow; CN coordination number; and MH Moh's hardness Group 1 Hydrogen is a colourless, odourless, and comparatively unreactive diatomic gas with a density of 8.988 × 10−5 g/cm3 and is about 14 times lighter than air. 
It condenses to a colourless liquid −252.879 °C and freezes into an ice- or snow-like solid at −259.16 °C. The solid form has a hexagonal crystalline structure and is soft and easily crushed. Hydrogen is an insulator in all of Document 4::: can be broadly divided into metals, metalloids, and nonmetals according to their shared physical and chemical properties. All metals have a shiny appearance (at least when freshly polished); are good conductors of heat and electricity; form alloys with other metals; and have at least one basic oxide. Metalloids are metallic-looking brittle solids that are either semiconductors or exist in semiconducting forms, and have amphoteric or weakly acidic oxides. Typical nonmetals have a dull, coloured or colourless appearance; are brittle when solid; are poor conductors of heat and electricity; and have acidic oxides. Most or some elements in each category share a range of other properties; a few elements have properties that are either anomalous given their category, or otherwise extraordinary. Properties Metals Metals appear lustrous (beneath any patina); form mixtures (alloys) when combined with other metals; tend to lose or share electrons when they react with other substances; and each forms at least one predominantly basic oxide. Most metals are silvery looking, high density, relatively soft and easily deformed solids with good electrical and thermal conductivity, closely packed structures, low ionisation energies and electronegativities, and are found naturally in combined states. Some metals appear coloured (Cu, Cs, Au), have low densities (e.g. Be, Al) or very high melting points (e.g. W, Nb), are liquids at or near room temperature (e.g. Hg, Ga), are brittle (e.g. Os, Bi), not easily machined (e.g. Ti, Re), or are noble (hard to oxidise, e.g. Au, Pt), or have nonmetallic structures (Mn and Ga are structurally analogous to, respectively, white P and I). 
Metals comprise the large majority of the elements, and can be subdivided into several different categories. From left to right in the periodic table, these categories include the highly reactive alkali metals; the less-reactive alkaline earth metals, lanthanides, and radioactive actinides; the archetypal tran The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Metallic solids are composed entirely of which atoms? A. metallic B. metals C. toxic D. organic Answer:
scienceQA-4838
multiple_choice
Select the bird.
[ "shoebill", "coral snake", "poison dart frog", "bison" ]
A
A shoebill is a bird. It has feathers, two wings, and a beak. Shoebills live in tropical East Africa. Shoebills get their name from their shoe-shaped beaks. A bison is a mammal. It has fur and feeds its young milk. Male bison have horns. They can use their horns to defend themselves. A coral snake is a reptile. It has scaly, waterproof skin. Coral snakes spend most of their time underground or hiding under leaves. A poison dart frog is an amphibian. It has moist skin and begins its life in water. Poison dart frogs come in many bright colors. Their bright color warns other animals that these frogs are poisonous.
Relevant Documents: Document 0::: History of Animals (, Ton peri ta zoia historion, "Inquiries on Animals"; , "History of Animals") is one of the major texts on biology by the ancient Greek philosopher Aristotle, who had studied at Plato's Academy in Athens. It was written in the fourth century BC; Aristotle died in 322 BC. Generally seen as a pioneering work of zoology, Aristotle frames his text by explaining that he is investigating the what (the existing facts about animals) prior to establishing the why (the causes of these characteristics). The book is thus an attempt to apply philosophy to part of the natural world. Throughout the work, Aristotle seeks to identify differences, both between individuals and between groups. A group is established when it is seen that all members have the same set of distinguishing features; for example, that all birds have feathers, wings, and beaks. This relationship between the birds and their features is recognized as a universal. The History of Animals contains many accurate eye-witness observations, in particular of the marine biology around the island of Lesbos, such as that the octopus had colour-changing abilities and a sperm-transferring tentacle, that the young of a dogfish grow inside their mother's body, or that the male of a river catfish guards the eggs after the female has left. Some of these were long considered fanciful before being rediscovered in the nineteenth century. Aristotle has been accused of making errors, but some are due to misinterpretation of his text, and others may have been based on genuine observation. He did however make somewhat uncritical use of evidence from other people, such as travellers and beekeepers. The History of Animals had a powerful influence on zoology for some two thousand years. It continued to be a primary source of knowledge until zoologists in the sixteenth century, such as Conrad Gessner, all influenced by Aristotle, wrote their own studies of the subject.
Context Aristotle (384–322 BC) studied at Plat Document 1::: The Witherby Memorial Lecture is an academic lectureship awarded by the British Trust for Ornithology (BTO) annually since 1968. The memorial lecture is in memorandum of Harry Forbes Witherby, a former owner of Witherby, who previously published ornithological books. Lectures Document 2::: The difficulty of defining or measuring intelligence in non-human animals makes the subject difficult to study scientifically in birds. In general, birds have relatively large brains compared to their head size. The visual and auditory senses are well developed in most species, though the tactile and olfactory senses are well realized only in a few groups. Birds communicate using visual signals as well as through the use of calls and song. The testing of intelligence in birds is therefore usually based on studying responses to sensory stimuli. The corvids (ravens, crows, jays, magpies, etc.) and psittacines (parrots, macaws, and cockatoos) are often considered the most intelligent birds, and are among the most intelligent animals in general. Pigeons, finches, domestic fowl, and birds of prey have also been common subjects of intelligence studies. Studies Bird intelligence has been studied through several attributes and abilities. Many of these studies have been on birds such as quail, domestic fowl, and pigeons kept under captive conditions. It has, however, been noted that field studies have been limited, unlike those of the apes. Birds in the crow family (corvids) as well as parrots (psittacines) have been shown to live socially, have long developmental periods, and possess large forebrains, all of which have been hypothesized to allow for greater cognitive abilities. Counting has traditionally been considered an ability that shows intelligence. Anecdotal evidence from the 1960s has suggested that crows can count up to 3. 
Researchers need to be cautious, however, and ensure that birds are not merely demonstrating the ability to subitize, or count a small number of items quickly. Some studies have suggested that crows may indeed have a true numerical ability. It has been shown that parrots can count up to 6. Cormorants used by Chinese fishermen were given every eighth fish as a reward, and found to be able to keep count up to 7. E.H. Hoh wrote in Natural Histo Document 3::: This is a list of the fastest flying birds in the world. A bird's velocity is necessarily variable; a hunting bird will reach much greater speeds while diving to catch prey than when flying horizontally. The bird that can achieve the greatest airspeed is the peregrine falcon, able to exceed in its dives. A close relative of the common swift, the white-throated needletail (Hirundapus caudacutus), is commonly reported as the fastest bird in level flight with a reported top speed of . This record remains unconfirmed as the measurement methods have never been published or verified. The record for the fastest confirmed level flight by a bird is held by the common swift. Birds by flying speed See also List of birds by flight heights Note Document 4::: The genus includes the following species: Sympetrum ambiguum – blue-faced meadowhawk Sympetrum anomalum Sympetrum arenicolor Sympetrum baccha Sympetrum chaconi Sympetrum commixtum Sympetrum cordulegaster Sympetrum corruptum – variegated meadowhawk Sympetrum costiferum – saffron-winged meadowhawk Sympetrum croceolum Sympetrum daliensis Sympetrum danae – black darter, black meadowhawk Sympetrum darwinianum Sympetrum depressiusculum – spotted darter Sympetrum dilatatum – St. 
Helena darter Sympetrum durum Sympetrum eroticum Sympetrum evanescens Sympetrum flaveolum – yellow-winged darter Sympetrum fonscolombii – red-veined darter, nomad Sympetrum frequens Sympetrum gilvum Sympetrum gracile Sympetrum haematoneura Sympetrum haritonovi – dwarf darter Sympetrum himalayanum Sympetrum hypomelas Sympetrum illotum – cardinal meadowhawk Sympetrum imitans Sympetrum infuscatum Sympetrum internum – cherry-faced meadowhawk Sympetrum kunckeli Sympetrum maculatum Sympetrum madidum – red-veined meadowhawk Sympetrum meridionale – southern darter Sympetrum nigrifemur – island darter Sympetrum nigrocreatum – Talamanca meadowhawk Sympetrum nomurai Sympetrum obtrusum – white-faced meadowhawk Sympetrum orientale Sympetrum pallipes – striped meadowhawk Sympetrum paramo Sympetrum parvulum Sympetrum pedemontanum – banded darter Sympetrum risi Sympetrum roraimae Sympetrum rubicu The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Select the bird. A. shoebill B. coral snake C. poison dart frog D. bison Answer:
sciq-4645
multiple_choice
What type of plug is generally used on metal appliances?
[ "4 prong", "3 prong", "5 prong", "2 prong" ]
B
Relevant Documents: Document 0::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score.
This and AP Physics C: Mechanics are the shortest AP exams, with Document 1::: There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework. AP Physics 1 and 2 AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge. AP Physics 1 AP Physics 1 covers Newtonian mechanics, including: Unit 1: Kinematics Unit 2: Dynamics Unit 3: Circular Motion and Gravitation Unit 4: Energy Unit 5: Momentum Unit 6: Simple Harmonic Motion Unit 7: Torque and Rotational Motion Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2. AP Physics 2 AP Physics 2 covers the following topics: Unit 1: Fluids Unit 2: Thermodynamics Unit 3: Electric Force, Field, and Potential Unit 4: Electric Circuits Unit 5: Magnetism and Electromagnetic Induction Unit 6: Geometric and Physical Optics Unit 7: Quantum, Atomic, and Nuclear Physics AP Physics C From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. 
The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. 
Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 3::: A plugboard or control panel (the term used depends on the application area) is an array of jacks or sockets (often called hubs) into which patch cords can be inserted to complete an electrical circuit. Control panels are sometimes used to direct the operation of unit record equipment, cipher machines, and early computers. Unit record equipment Main article: Unit record equipment The earliest machines were hardwired for specific applications. Control panels were introduced in 1906 for the Hollerith Type 1 Tabulator (photo of Type 3 with built-in control panel here). Removable control panels were introduced with the Hollerith (IBM) type 3-S tabulator in the 1920s. Applications then could be wired on separate control panels, and inserted into tabulators as needed. Removable control panels came to be used in all unit record machines where the machine's use for different applications required rewiring. IBM removable control panels ranged in size from 6 1/4" by 10 3/4" (for machines such as the IBM 077, IBM 550, IBM 514) to roughly one to two feet (300 to 600 mm) on a side and had a rectangular array of hubs. Plugs at each end of a single-conductor patch cord were inserted into hubs, making a connection between two contacts on the machine when the control panel was placed in the machine, thereby connecting an emitting hub to an accepting or entry hub. For example, in a card duplicator application a card column reading (emitting) hub might be connected to a punch magnet entry hub. It was a relatively simple matter to copy some fields, perhaps to different columns, and ignore other columns by suitable wiring. Tabulator control panels could require dozens of patch cords for some applications. 
Tabulator functions were implemented with both mechanical and electrical components. Control panels simplified the changing of electrical connections for different applications, but changing most tabulator's use still required mechanical changes. The IBM 407 was the first IBM t Document 4::: Battery terminals are the electrical contacts used to connect a load or charger to a single cell or multiple-cell battery. These terminals have a wide variety of designs, sizes, and features that are often not well documented. Automotive battery terminals Automotive batteries typically have one of three types of terminals. In recent years, the most common design was the SAE Post, consisting of two lead posts in the shape of truncated cones, positioned on the top of the battery, with slightly different diameters to ensure correct electrical polarity. The "JIS" type is similar to the SAE but smaller, once again positive is larger than negative but both are smaller than their SAE counterparts. Most older Japanese cars were fitted with JIS terminals. General Motors, and other automobile manufacturers, have also begun using side-post battery terminals, which consist of two recessed female 3/8" threads (SAE 3/8-16) into which bolts or various battery terminal adapters are to be attached. These side posts are of the same size and do not prevent incorrect polarity connections. L terminals consist of an L-shaped post with a bolt hole through the vertical side. These are used on some European cars, motorcycles, lawn and garden devices, snowmobiles, and other light-duty vehicles. Some batteries sizes are available with terminals in many different configurations, but two main configurations are: positive on left and negative on the right corner negative on the left and positive on the right corner. Terminals can also be both on the long or short side of the battery, or diagonally opposed, or in the middle. 
Purchasing the wrong configuration may prevent battery cables from reaching the battery terminals. Marine battery terminals Marine batteries typically have two posts, a 3/8"-16 threaded post for the positive terminal, and a 5/16"-18 threaded post for the negative terminal. Zinc battery terminals Zinc battery terminals are an environmentally friendly alternative to The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of plug is generally used on metal appliances? A. 4 prong B. 3 prong C. 5 prong D. 2 prong Answer:
sciq-6319
multiple_choice
What is the unit of evolution?
[ "population", "community", "biome", "species" ]
A
Relevant Documents: Document 0::: A unit of selection is a biological entity within the hierarchy of biological organization (for example, an entity such as: a self-replicating molecule, a gene, a cell, an organism, a group, or a species) that is subject to natural selection. There is debate among evolutionary biologists about the extent to which evolution has been shaped by selective pressures acting at these different levels. There is debate over the relative importance of the units themselves. For instance, is it group or individual selection that has driven the evolution of altruism? Where altruism reduces the fitness of individuals, individual-centered explanations for the evolution of altruism become complex and rely on the use of game theory, for instance; see kin selection and group selection. There also is debate over the definition of the units themselves, and the roles for selection and replication, and whether these roles may change in the course of evolution. Fundamental theory Two useful introductions to the fundamental theory underlying the unit of selection issue and debate, which also present examples of multi-level selection from the entire range of the biological hierarchy (typically with entities at level N-1 competing for increased representation, i.e., higher frequency, at the immediately higher level N, e.g., organisms in populations or cell lineages in organisms), are Richard Lewontin's classic piece The Units of Selection and John Maynard Smith and Eörs Szathmáry's co-authored book, The Major Transitions in Evolution. As a theoretical introduction to units of selection, Lewontin writes: The generality of the principles of natural selection means that any entities in nature that have variation, reproduction, and heritability may evolve. ...the principles can be applied equally to genes, organisms, populations, species, and at opposite ends of the scale, prebiotic molecules and ecosystems." (1970, pp.
1-2) Elisabeth Lloyd's book The Structure and Confirmation of Evolutio Document 1::: Ecological units comprise concepts such as population, community, and ecosystem as the basic units, which are at the basis of ecological theory and research, as well as a focus point of many conservation strategies. The concept of ecological units continues to suffer from inconsistencies and confusion over its terminology. Analyses of the existing concepts used in describing ecological units have determined that they differ with respect to four major criteria: The questions as to whether they are defined statistically or via a network of interactions, If their boundaries are drawn by topographical or process-related criteria, How high the required internal relationships are, And if they are perceived as "real" entities or abstractions by an observer. A population is considered to be the smallest ecological unit, consisting of a group of individuals that belong to the same species. A community would be the next classification, referring to all of the populations present in an area at a specific time, followed by an ecosystem, referring to the community and its interactions with its physical environment. An ecosystem is the most commonly used ecological unit and can be universally defined by two common traits: The unit is often defined in terms of a natural border (maritime boundary, watersheds, etc.) Abiotic components and organisms within the unit are considered to be interlinked. See also Biogeographic realm Ecoregion Ecotope Holobiont Functional ecology Behavior settings Regional geology Document 2::: Population ecology is a sub-field of ecology that deals with the dynamics of species populations and how these populations interact with the environment, such as birth and death rates, and by immigration and emigration.
The discipline is important in conservation biology, especially in the development of population viability analysis which makes it possible to predict the long-term probability of a species persisting in a given patch of habitat. Although population ecology is a subfield of biology, it provides interesting problems for mathematicians and statisticians who work in population dynamics. History In the 1940s ecology was divided into autecology—the study of individual species in relation to the environment—and synecology—the study of groups of species in relation to the environment. The term autecology (from Ancient Greek: αὐτο, aúto, "self"; οίκος, oíkos, "household"; and λόγος, lógos, "knowledge"), refers to roughly the same field of study as concepts such as life cycles and behaviour as adaptations to the environment by individual organisms. Eugene Odum, writing in 1953, considered that synecology should be divided into population ecology, community ecology and ecosystem ecology, renaming autecology as 'species ecology' (Odum regarded "autecology" as an archaic term), thus that there were four subdivisions of ecology. Terminology A population is defined as a group of interacting organisms of the same species. A demographic structure of a population is how populations are often quantified. The total number of individuals in a population is defined as a population size, and how dense these individuals are is defined as population density. There is also a population’s geographic range, which has limits that a species can tolerate (such as temperature). Population size can be influenced by the per capita population growth rate (rate at which the population size changes per individual in the population.) Births, deaths, emigration, and immigration rates Document 3::: The history of life on Earth seems to show a clear trend; for example, it seems intuitive that there is a trend towards increasing complexity in living organisms. 
More recently evolved organisms, such as mammals, appear to be much more complex than organisms, such as bacteria, which have existed for a much longer period of time. However, there are theoretical and empirical problems with this claim. From a theoretical perspective, it appears that there is no reason to expect evolution to result in any largest-scale trends, although small-scale trends, limited in time and space, are expected (Gould, 1997). From an empirical perspective, it is difficult to measure complexity and, when it has been measured, the evidence does not support a largest-scale trend (McShea, 1996). History Many of the founding figures of evolution supported the idea of Evolutionary progress which has fallen from favour, but the work of Francisco J. Ayala and Michael Ruse suggests is still influential. Hypothetical largest-scale trends McShea (1998) discusses eight features of organisms that might indicate largest-scale trends in evolution: entropy, energy intensiveness, evolutionary versatility, developmental depth, structural depth, adaptedness, size, complexity. He calls these "live hypotheses", meaning that trends in these features are currently being considered by evolutionary biologists. McShea observes that the most popular hypothesis, among scientists, is that there is a largest-scale trend towards increasing complexity. Evolutionary theorists agree that there are local trends in evolution, such as increasing brain size in hominids, but these directional changes do not persist indefinitely, and trends in opposite directions also occur (Gould, 1997). Evolution causes organisms to adapt to their local environment; when the environment changes, the direction of the trend may change. The question of whether there is evolutionary progress is better formulated as the question of whether Document 4::: The darwin (d) is a unit of evolutionary change, defined by J. B. S. Haldane in 1949. 
One darwin is defined to be an e-fold (about 2.718) change in a trait over one million years. Haldane named the unit after Charles Darwin. Equation The equation for calculating evolutionary change in darwins ($r$) is: $r = \frac{\ln x_2 - \ln x_1}{\Delta t}$ where $x_1$ and $x_2$ are the initial and final values of the trait and $\Delta t$ is the change in time in millions of years. An alternative form of this equation is: $r = \frac{\ln(x_2 / x_1)}{\Delta t}$ Since the difference between two natural logarithms is a dimensionless ratio, the trait may be measured in any unit. Inexplicably, Haldane defined the millidarwin as $10^{-9}$ darwins, despite the fact that the prefix milli- usually denotes a factor of one thousandth ($10^{-3}$). Application The measure is most useful in palaeontology, where macroevolutionary changes in the dimensions of fossils can be compared. Where this is used it is an indirect measure as it relies on phenotypic rather than genotypic data. Several data points are required to overcome natural variation within a population. The darwin only measures the evolution of a particular trait rather than a lineage; different traits may evolve at different rates within a lineage. The evolution of traits can however be used to infer as a proxy the evolution of lineages. See also Evolutionary biology Macroevolution Microevolution The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the unit of evolution? A. population B. community C. biome D. species Answer:
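The darwin rate described above (an e-fold change in a trait per million years) can be sketched in a few lines of Python; the function name and sample values below are illustrative, not from the source:

```python
import math

def darwins(x1, x2, dt_myr):
    """Evolutionary rate in darwins: r = (ln x2 - ln x1) / dt,
    where dt is the elapsed time in millions of years."""
    return (math.log(x2) - math.log(x1)) / dt_myr

# An e-fold increase over one million years is one darwin by definition.
print(darwins(10.0, 10.0 * math.e, 1.0))  # ≈ 1.0
```

Because the rate depends only on the ratio x2/x1, the trait can be measured in any unit, as the passage notes.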
sciq-2940
multiple_choice
The rising and sinking of these can cause precipitation?
[ "circular air currents", "global air currents", "underwater currents", "temporary air currents" ]
B
Relevant Documents: Document 0::: In atmospheric science, an atmospheric model is a mathematical model constructed around the full set of primitive, dynamical equations which govern atmospheric motions. It can supplement these equations with parameterizations for turbulent diffusion, radiation, moist processes (clouds and precipitation), heat exchange, soil, vegetation, surface water, the kinematic effects of terrain, and convection. Most atmospheric models are numerical, i.e. they discretize equations of motion. They can predict microscale phenomena such as tornadoes and boundary layer eddies, sub-microscale turbulent flow over buildings, as well as synoptic and global flows. The horizontal domain of a model is either global, covering the entire Earth, or regional (limited-area), covering only part of the Earth. The different types of models run are thermotropic, barotropic, hydrostatic, and nonhydrostatic. Some of the model types make assumptions about the atmosphere which lengthens the time steps used and increases computational speed. Forecasts are computed using mathematical equations for the physics and dynamics of the atmosphere. These equations are nonlinear and are impossible to solve exactly. Therefore, numerical methods obtain approximate solutions. Different models use different solution methods. Global models often use spectral methods for the horizontal dimensions and finite-difference methods for the vertical dimension, while regional models usually use finite-difference methods in all three dimensions. For specific locations, model output statistics use climate information, output from numerical weather prediction, and current surface weather observations to develop statistical relationships which account for model bias and resolution issues.
Types The main assumption made by the thermotropic model is that while the magnitude of the thermal wind may change, its direction does not change with respect to height, and thus the baroclinicity in the atmosphere can be simulated usi Document 1::: This is a list of meteorology topics. The terms relate to meteorology, the interdisciplinary scientific study of the atmosphere that focuses on weather processes and forecasting. (see also: List of meteorological phenomena) A advection aeroacoustics aerobiology aerography (meteorology) aerology air parcel (in meteorology) air quality index (AQI) airshed (in meteorology) American Geophysical Union (AGU) American Meteorological Society (AMS) anabatic wind anemometer annular hurricane anticyclone (in meteorology) apparent wind Atlantic Oceanographic and Meteorological Laboratory (AOML) Atlantic hurricane season atmometer atmosphere Atmospheric Model Intercomparison Project (AMIP) Atmospheric Radiation Measurement (ARM) (atmospheric boundary layer [ABL]) planetary boundary layer (PBL) atmospheric chemistry atmospheric circulation atmospheric convection atmospheric dispersion modeling atmospheric electricity atmospheric icing atmospheric physics atmospheric pressure atmospheric sciences atmospheric stratification atmospheric thermodynamics atmospheric window (see under Threats) B ball lightning balloon (aircraft) baroclinity barotropity barometer ("to measure atmospheric pressure") berg wind biometeorology blizzard bomb (meteorology) buoyancy Bureau of Meteorology (in Australia) C Canada Weather Extremes Canadian Hurricane Centre (CHC) Cape Verde-type hurricane capping inversion (in meteorology) (see "severe thunderstorms" in paragraph 5) carbon cycle carbon fixation carbon flux carbon monoxide (see under Atmospheric presence) ceiling balloon ("to determine the height of the base of clouds above ground level") ceilometer ("to determine the height of a cloud base") celestial coordinate system celestial equator celestial 
horizon (rational horizon) celestial navigation (astronavigation) celestial pole Celsius Center for Analysis and Prediction of Storms (CAPS) (in Oklahoma in the US) Center for the Study o Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines.
Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 3::: Upper-atmospheric models are simulations of the Earth's atmosphere between 20 and 100 km (65,000 and 328,000 feet) that comprises the stratosphere, mesosphere, and the lower thermosphere. Whereas most climate models simulate a region of the Earth's atmosphere from the surface to the stratopause, there also exist numerical models which simulate the wind, temperature and composition of the Earth's tenuous upper atmosphere, from the mesosphere to the exosphere, including the ionosphere. This region is affected strongly by the 11 year Solar cycle through variations in solar UV/EUV/Xray radiation and solar wind leading to high latitude particle precipitation and aurora. It has been proposed that these phenomena may have an effect on the lower atmosphere, and should therefore be included in simulations of climate change. For this reason there has been a drive in recent years to create whole atmosphere models to investigate whether or not this is the case. Jet stream perturbation model A jet stream perturbation model is employed by Weather Logistics UK, which simulates the diversion of the air streams in the upper atmosphere. North Atlantic air flow modelling is simulated by combining a monthly jet stream climatology input calculated at 20 to 30°W, with different blocking high patterns. The jet stream input is generated by thermal wind balance calculations at 316mbars (6 to 9 km aloft) in the mid-latitude range from 40 to 60°N. Long term blocking patterns are determined by the weather forecaster, who identifies the likely position and strength of North Atlantic Highs from synoptic charts, the North Atlantic Oscillation (NAO) and El Niño-Southern Oscillation (ENSO) patterns. 
The model is based on the knowledge that low pressure systems at the surface are steered by the fast ribbons (jet streams) of air in the upper atmosphere. The jet stream - blocking interaction model simulation examines the sea surface temperature field using data from NOAA tracked along the ocean on a Document 4::: In meteorology, convective available potential energy (commonly abbreviated as CAPE), is the integrated amount of work that the upward (positive) buoyancy force would perform on a given mass of air (called an air parcel) if it rose vertically through the entire atmosphere. Positive CAPE will cause the air parcel to rise, while negative CAPE will cause the air parcel to sink. Nonzero CAPE is an indicator of atmospheric instability in any given atmospheric sounding, a necessary condition for the development of cumulus and cumulonimbus clouds with attendant severe weather hazards. Mechanics CAPE exists within the conditionally unstable layer of the troposphere, the free convective layer (FCL), where an ascending air parcel is warmer than the ambient air. CAPE is measured in joules per kilogram of air (J/kg). Any value greater than 0 J/kg indicates instability and an increasing possibility of thunderstorms and hail. Generic CAPE is calculated by integrating vertically the local buoyancy of a parcel from the level of free convection (LFC) to the equilibrium level (EL): $\mathrm{CAPE} = \int_{z_f}^{z_n} g \, \frac{T_{v,\mathrm{parcel}} - T_{v,\mathrm{env}}}{T_{v,\mathrm{env}}} \, dz$ where $z_f$ is the height of the level of free convection and $z_n$ is the height of the equilibrium level (neutral buoyancy), where $T_{v,\mathrm{parcel}}$ is the virtual temperature of the specific parcel, where $T_{v,\mathrm{env}}$ is the virtual temperature of the environment (note that temperatures must be in the Kelvin scale), and where $g$ is the acceleration due to gravity. This integral is the work done by the buoyant force minus the work done against gravity, hence it is the excess energy that can become kinetic energy.
CAPE for a given region is most often calculated from a thermodynamic or sounding diagram (e.g., a Skew-T log-P diagram) using air temperature and dew point data usually measured by a weather balloon. CAPE is effectively positive buoyancy, expressed B+ or simply B; the opposite of convective inhibition (CIN), which is expressed as B-, and can be thought of as "negative CAPE". As with CIN, CAPE is usually expressed in J/kg bu The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The rising and sinking of these can cause precipitation? A. circular air currents B. global air currents C. underwater currents D. temporary air currents Answer:
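The CAPE integral described above can be approximated from discrete sounding levels with a simple trapezoidal sum. This is a minimal sketch; the function name and the sample profile are invented for illustration:

```python
def cape_trapezoid(z, tv_parcel, tv_env, g=9.81):
    """Approximate CAPE (J/kg) by trapezoidal integration of parcel
    buoyancy g * (Tv_parcel - Tv_env) / Tv_env between the LFC and EL.
    z: heights in metres (ascending); temperatures in kelvin."""
    buoyancy = [g * (tp - te) / te for tp, te in zip(tv_parcel, tv_env)]
    total = 0.0
    for i in range(len(z) - 1):
        total += 0.5 * (buoyancy[i] + buoyancy[i + 1]) * (z[i + 1] - z[i])
    return total

# A parcel 1 K warmer than a 300 K environment through a 1 km layer
# contributes roughly 9.81 * (1/300) * 1000 J/kg of CAPE.
print(cape_trapezoid([0.0, 1000.0], [301.0, 301.0], [300.0, 300.0]))  # ≈ 32.7
```

In practice the virtual-temperature profiles would come from weather-balloon sounding data, as the passage notes; only positive buoyancy between the LFC and EL should be accumulated.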
sciq-9110
multiple_choice
What was the informal female name given to the adult fossil found in ethiopia and thought to be over 3 million years old?
[ "lucy", "alice", "linda", "aunt" ]
A
Relevant Documents: Document 0::: Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women. The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development. Current status of girls and women in STEM education Overall trends in STEM education Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and Document 2::: The Berkeley Geochronology Center (BGC) is a non-profit geochronology research institute in Berkeley, California. It was originally a research group in the laboratory of geochronologist Garniss Curtis at the University of California, Berkeley. The center is now an independent scientific research institute with close Berkeley affiliations and directed by geologist and geochronologist Paul Renne, a professor in residence in the department of earth and planetary science at Berkeley. History In 1985, Curtis, set to retire in 1989, moved the group from his lab at the university to the basement of the independent Institute for Human Origins (IHO), at the suggestion of American anthropologist F. Clark Howell. The geochronologists worked separately from the IHO, although IHO contained their bureaucratic infrastructure, until 1989 when they became officially known as the Institute for Human Origins Geochronology Center. In 1994 the group officially split from the IHO based on different viewpoints of their respective missions. Both Curtis and IHO founder, Donald Johanson, were known to have egos that might "clash", but Howell thought that bringing the two research groups together could benefit both.
The IHO's mission included publicizing the anthropology of ancient human ancestors to the general public, and the geochronology scientists felt the anthropologists emphasized this at the expense of more basic science, while the paleoanthropologist felt the geochronologists were devoting too much research time and funding to general geology questions not related to the institute's primary mission. The anthropologists had more public recognition in the press, while the geochronologists were obtaining more scientific grant moneys and publishing more scientific papers. The split was acrimonious and garnered negative publicity for some of those involved from their peers in professional organizations, particularly as Gordon Getty, the single largest donor and a board member of Document 3::: The Radcliffe Zoological Laboratory was created in 1894 when Radcliffe College rented a room on the fifth floor of the Museum of Comparative Zoology at Harvard University to convert into a women's laboratory. In the 1880s Elizabeth Cary Agassiz, director of the Harvard Annex (which would become chartered as Radcliffe College in 1894), negotiated for the use of space for her students in the Museum of Comparative Zoology. Prior to the acquisition of this space, science laboratories were taught using inadequate facilities, converting spaces such as bathrooms in old houses into physics laboratories, which Harvard professors often refused to teach in. Physical space and arrangements The laboratory space was converted from an office or storage closet, and was sandwiched between other invertebrate storage rooms. This small space was poorly-lit and often cramped, as this was the only space Radcliffe women technically had access to. In 1908, in response to pressure from Radcliffe administrators to construct a women's restroom, Alexander Agassiz launched an inquiry about which spaces women were occupying within the building. 
Agassiz rejected the construction of this restroom because it would obstruct light from hallway windows, despite the fact that the closest women's restrooms to the Radcliffe Zoological Laboratory were within the Natural History Museum galleries, two floors below. Agassiz found that, while Harvard men occupied 14 rooms, Radcliffe women were spilling over from their single designated laboratory space into 3 other rooms. Herbert Spencer Jennings highlighted that segregating instruction by gender was challenging due to the limitation of space, noting that Agassiz felt that the resources within the Museum of Comparative Zoology should not be ceded to Radcliffe, stating that 'it cannot expect us to sacrifice M.C.Z. for its needs in anyway'. Institutional affiliations and degrees Radcliffe did not grant PhDs until 1902. Between 1894 and 1902, multiple stude Document 4::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. 
The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of the human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What was the informal female name given to the adult fossil found in ethiopia and thought to be over 3 million years old? A. lucy B. alice C. linda D. aunt Answer:
scienceQA-5347
multiple_choice
What do these two changes have in common? water boiling on a stove boiling sugar to make caramel
[ "Both are chemical changes.", "Both are only physical changes.", "Both are caused by heating.", "Both are caused by cooling." ]
C
Step 1: Think about each change. Water boiling on the stove is a change of state. So, it is a physical change. The liquid changes into a gas, but a different type of matter is not formed. Boiling sugar to make caramel is a chemical change. The heat causes the sugar to change into a different type of matter. Unlike sugar, the new matter is brown and sticky. Step 2: Look at each answer choice. Both are only physical changes. Water boiling is a physical change. But boiling sugar to make caramel is not. Both are chemical changes. Boiling sugar to make caramel is a chemical change. But water boiling is not. Both are caused by heating. Both changes are caused by heating. Both are caused by cooling. Neither change is caused by cooling.
Relevant Documents: Document 0::: Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but cannot usually be used to separate compounds into chemical elements or simpler compounds. Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change, in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general, a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate. A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density. An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered, which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge. Many physical changes also involve the rearrangement of atoms, most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change. Examples Heating and cooling Many elements and some compounds change from solids to liquids and from liquids to gases when heated, and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation. 
Magnetism Ferro-magnetic materials can become magnetic. The process is reve Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases; (b) decreases; (c) stays the same; (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. 
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: Boiling is the rapid phase transition from liquid to gas or vapor; the reverse of boiling is condensation. Boiling occurs when a liquid is heated to its boiling point, so that the vapour pressure of the liquid is equal to the pressure exerted on the liquid by the surrounding atmosphere. Boiling and evaporation are the two main forms of liquid vapourization. There are two main types of boiling: nucleate boiling where small bubbles of vapour form at discrete points, and critical heat flux boiling where the boiling surface is heated above a certain critical temperature and a film of vapour forms on the surface. Transition boiling is an intermediate, unstable form of boiling with elements of both types. The boiling point of water is 100 °C or 212 °F but is lower with the decreased atmospheric pressure found at higher altitudes. Boiling water is used as a method of making it potable by killing microbes and viruses that may be present. The sensitivity of different micro-organisms to heat varies, but if water is held at for one minute, most micro-organisms and viruses are inactivated. Ten minutes at a temperature of 70 °C (158 °F) is also sufficient to inactivate most bacteria. Boiling water is also used in several cooking methods including boiling, steaming, and poaching. Types Free convection The lowest heat flux seen in boiling is only sufficient to cause [natural convection], where the warmer fluid rises due to its slightly lower density. This condition occurs only when the superheat is very low, meaning that the hot surface near the fluid is nearly the same temperature as the boiling point. 
Nucleate Nucleate boiling is characterised by the growth of bubbles or pops on a heated surface (heterogeneous nucleation), which rises from discrete points on a surface, whose temperature is only slightly above the temperature of the liquid. In general, the number of nucleation sites is increased by an increasing surface temperature. An irregular surface of the boiling Document 3::: Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered in both undergraduate as well postgraduate with specializations. Academic courses Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. 
Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism. Example universities with CSE majors and departments APJ Abdul Kalam Technological University American International University-B Document 4::: Analysis (: analyses) is the process of breaking a complex topic or substance into smaller parts in order to gain a better understanding of it. The technique has been applied in the study of mathematics and logic since before Aristotle (384–322 B.C.), though analysis as a formal concept is a relatively recent development. The word comes from the Ancient Greek (analysis, "a breaking-up" or "an untying;" from ana- "up, throughout" and lysis "a loosening"). From it also comes the word's plural, analyses. As a formal concept, the method has variously been ascribed to Alhazen, René Descartes (Discourse on the Method), and Galileo Galilei. It has also been ascribed to Isaac Newton, in the form of a practical method of physical discovery (which he did not name). The converse of analysis is synthesis: putting the pieces back together again in a new or different whole. Applications Science The field of chemistry uses analysis in three ways: to identify the components of a particular chemical compound (qualitative analysis), to identify the proportions of components in a mixture (quantitative analysis), and to break down chemical processes and examine chemical reactions between elements of matter. For an example of its use, analysis of the concentration of elements is important in managing a nuclear reactor, so nuclear scientists will analyze neutron activation to develop discrete measurements within vast samples. 
A matrix can have a considerable effect on the way a chemical analysis is conducted and the quality of its results. Analysis can be done manually or with a device. Types of Analysis: A) Qualitative Analysis: It is concerned with which components are in a given sample or compound. Example: Precipitation reaction B) Quantitative Analysis: It is to determine the quantity of individual component present in a given sample or compound. Example: To find concentration by uv-spectrophotometer. Isotopes Chemists can use isotope analysis to assist analysts with i The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do these two changes have in common? water boiling on a stove boiling sugar to make caramel A. Both are chemical changes. B. Both are only physical changes. C. Both are caused by heating. D. Both are caused by cooling. Answer:
sciq-5380
multiple_choice
What is defined as the ability to cause changes in matter?
[ "momentum", "energy", "evolution", "nuclear" ]
B
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases; (b) decreases; (c) stays the same; (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. 
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Applied physics is the application of physics to solve scientific or engineering problems. It is usually considered a bridge or a connection between physics and engineering. "Applied" is distinguished from "pure" by a subtle combination of factors, such as the motivation and attitude of researchers and the nature of the relationship to the technology or science that may be affected by the work. Applied physics is rooted in the fundamental truths and basic concepts of the physical sciences but is concerned with the utilization of scientific principles in practical devices and systems and with the application of physics in other areas of science and high technology. Examples of research and development areas Accelerator physics Acoustics Atmospheric physics Biophysics Brain–computer interfacing Chemistry Chemical physics Differentiable programming Artificial intelligence Scientific computing Engineering physics Chemical engineering Electrical engineering Electronics Sensors Transistors Materials science and engineering Metamaterials Nanotechnology Semiconductors Thin films Mechanical engineering Aerospace engineering Astrodynamics Electromagnetic propulsion Fluid mechanics Military engineering Lidar Radar Sonar Stealth technology Nuclear engineering Fission reactors Fusion reactors Optical engineering Photonics Cavity optomechanics Lasers Photonic crystals Geophysics Materials physics Medical physics Health physics Radiation dosimetry Medical imaging Magnetic resonance imaging Radiation therapy Microscopy Scanning probe microscopy Atomic force microscopy Scanning tunneling microscopy Scanning electron microscopy Transmission electron microscopy Nuclear physics Fission Fusion Optical physics Nonlinear optics Quantum optics Plasma physics Quantum technology Quantum computing Quantum 
cryptography Renewable energy Space physics Spectroscopy See also Applied science Applied mathematics Engineering Engineering Physics High Technology Document 2::: Physics First is an educational program in the United States, that teaches a basic physics course in the ninth grade (usually 14-year-olds), rather than the biology course which is more standard in public schools. This course relies on the limited math skills that the students have from pre-algebra and algebra I. With these skills students study a broad subset of the introductory physics canon with an emphasis on topics which can be experienced kinesthetically or without deep mathematical reasoning. Furthermore, teaching physics first is better suited for English Language Learners, who would be overwhelmed by the substantial vocabulary requirements of Biology. Physics First began as an organized movement among educators around 1990, and has been slowly catching on throughout the United States. The most prominent movement championing Physics First is Leon Lederman's ARISE (American Renaissance in Science Education). Many proponents of Physics First argue that turning this order around lays the foundations for better understanding of chemistry, which in turn will lead to more comprehension of biology. Due to the tangible nature of most introductory physics experiments, Physics First also lends itself well to an introduction to inquiry-based science education, where students are encouraged to probe the workings of the world in which they live. The majority of high schools which have implemented "physics first" do so by way of offering two separate classes, at two separate levels: simple physics concepts in 9th grade, followed by more advanced physics courses in 11th or 12th grade. In schools with this curriculum, nearly all 9th grade students take a "Physical Science", or "Introduction to Physics Concepts" course. 
These courses focus on concepts that can be studied with skills from pre-algebra and algebra I. With these ideas in place, students then can be exposed to ideas with more physics related content in chemistry, and other science electives. After th Document 3::: Physics education or physics teaching refers to the education methods currently used to teach physics. The occupation is called physics educator or physics teacher. Physics education research refers to an area of pedagogical research that seeks to improve those methods. Historically, physics has been taught at the high school and college level primarily by the lecture method together with laboratory exercises aimed at verifying concepts taught in the lectures. These concepts are better understood when lectures are accompanied with demonstration, hand-on experiments, and questions that require students to ponder what will happen in an experiment and why. Students who participate in active learning for example with hands-on experiments learn through self-discovery. By trial and error they learn to change their preconceptions about phenomena in physics and discover the underlying concepts. Physics education is part of the broader area of science education. Ancient Greece Aristotle wrote what is considered now as the first textbook of physics. Aristotle's ideas were taught unchanged until the Late Middle Ages, when scientists started making discoveries that didn't fit them. For example, Copernicus' discovery contradicted Aristotle's idea of an Earth-centric universe. Aristotle's ideas about motion weren't displaced until the end of the 17th century, when Newton published his ideas. Today's physics students often think of physics concepts in Aristotelian terms, despite being taught only Newtonian concepts. Hong Kong High schools In Hong Kong, physics is a subject for public examination. Local students in Form 6 take the public exam of Hong Kong Diploma of Secondary Education (HKDSE). 
Compared to other syllabuses such as GCSE and GCE, which cover a wider and broader range of topics, the Hong Kong syllabus goes into greater depth and poses more challenging calculations. Topics are narrowed down to a smaller number compared to the A-level due to the insufficient teachi Document 4::: Energeticism is the physical view that energy is the fundamental element in all physical change. It posits a specific ontology, or a philosophy of being, which holds that all things are ultimately composed of energy, and which is opposed to ontological idealism. Energeticism might be associated with the physicist and philosopher Ernst Mach, though his attitude to it is ambiguous. It was also propounded by the chemist Wilhelm Ostwald. Energeticism is largely rejected today, in part due to its Aristotelian and metaphysical leanings and its rejection of the existence of a micro-world (such as the one that chemists or physicists have discovered). Ludwig Boltzmann and Max Planck posited that matter and energy are distinct from each other and, hence, that energy cannot itself be the fundamental unit of nature upon which all other units are based. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is defined as the ability to cause changes in matter? A. momentum B. energy C. evolution D. nuclear Answer:
sciq-7779
multiple_choice
What connection is required for a newborn baby to begin breathing?
[ "connection to the father", "connection to the placenta", "connection to a ventilator", "connection to a heat source" ]
B
Relevant Documents: Document 0::: The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work. History It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council. Function Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres. STEM ambassadors To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell. Funding STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments. See also The WISE Campaign Engineering and Physical Sciences Research Council National Centre for Excellence in Teaching Mathematics Association for Science Education Glossary of areas of mathematics Glossary of astronomy Glossary of biology Glossary of chemistry Glossary of engineering Glossary of physics Document 1::: Foetal cerebral redistribution or 'brain-sparing' is a diagnosis in foetal medicine. 
It is characterised by preferential flow of blood towards the brain at the expense of the other vital organs, and it occurs as a haemodynamic adaptation in foetuses which have placental insufficiency. The underlying mechanism is thought to be vasodilation of the cerebral arteries. Cerebral redistribution is defined by the presence of a low middle cerebral artery pulsatility index (MCA-PI). Ultrasound of the middle cerebral artery to examine the Doppler waveform is used to establish this. Although cerebral redistribution represents an effort to preserve brain development in the face of hypoxic stress, it is nonetheless associated with adverse neurodevelopmental outcome. The presence of cerebral redistribution will be one factor taken into consideration when deciding whether to artificially deliver a baby with placental insufficiency via induction of labour or caesarian section. Additional images Document 2::: Maternal somatic support after brain death occurs when a brain dead patient is pregnant and their body is kept alive to deliver a fetus. It occurs very rarely internationally. Even among brain dead patients, in a U.S. study of 252 brain dead patients from 1990–96, only 5 (2.8%) cases involved pregnant women between 15 and 45 years of age. Past cases In the 28-year period between 1982 and 2010, there were "30 [reported] cases of maternal brain death (19 case reports and 1 case series)." In 12 of those cases, a viable child was delivered via cesarean section after extended somatic support. However, according to Esmaelilzadeh, et al. there is no widely accepted protocol to manage a brain dead mother "since only a few reported cases are found in the medical literature." Moreover, the mother's wishes are rarely, if ever, known, and family should be consulted in developing a care plan. 
Life support complications Throughout their care, brain dead patients could experience a wide range of complications, including "infection, hemodynamic instability, diabetes insipidus (DI), panhypopituitarism, poikilothermia, metabolic instability, acute respiratory distress syndrome and disseminated intravascular coagulation." Treating these complications is difficult since the effects of medication on the fetus's health are unknown. Fetus's chance of survival According to Esmaelilzadeh, et al., "[a]t present, it seems that there is no clear lower limit to the gestational age which would restrict the physician's efforts to support the brain dead mother and her fetus." However, the older a fetus is when its mother becomes brain dead, the greater its chance for survival. Research into preterm births indicates that "a fetus born before 24 weeks of gestation has a limited chance of survival. At 24, 28 and 32 weeks, a fetus has approximately a 20–30%, 80% and 98% likelihood of survival with a 40%, 10% and less than 2% chance of suffering from a severe handicap, respectively." Brain de Document 3::: Breast crawl is the instinctive movement of a newborn mammal toward the nipple of its mother for the purpose of latching on to initiate breastfeeding. In humans, if the newborn is laid on its mother's abdomen, movements commence at 12 to 44 minutes after birth, with spontaneous suckling being achieved roughly 27 to 71 minutes after birth. Background The Baby Friendly Hospital Initiative, developed by the World Health Organization and UNICEF, recommends that all babies have access to immediate skin-to-skin contact (SSC) following vaginal or Caesarean section birth. Immediate SSC after a Caesarean that used spinal or epidural anesthesia is achievable because the mother remains alert; however, after the use of general anesthesia, the newborn should be placed skin to skin as soon as the mother becomes alert and responsive. 
If the mother is not immediately able to begin SSC, her partner or other helper can assist or place the infant SSC on their chest or breast. It is recommended that SSC be facilitated immediately after birth, as this is the time when the newborn is most likely to follow its natural instincts to find and attach to the breast and then breastfeed. To find the nipple, the newborn uses a variety of sensory stimuli: visual (the sight of the mother's face and areola); auditory (the sound of its mother's voice); and olfactory (the scent of the areola, which resembles that of amniotic fluid). Nine stages of breast crawl Newborn babies go through nine distinct stages after birth within the first hour or so: Birth cry: Intense crying just after birth Relaxation phase: Infant resting and recovering. No activity of mouth, head, arms, legs or body Awakening phase: Infant begins to show signs of activity. Small thrusts of head: up, down, from side-to-side. Small movements of limbs and shoulders Active phase: Infant moves limbs and head, is more determined in movements. Rooting activity, ‘pushing’ with limbs without shifting body Crawling phase: ‘Pushing’ whic Document 4::: In placental mammals, the umbilical cord (also called the navel string, birth cord or funiculus umbilicalis) is a conduit between the developing embryo or fetus and the placenta. During prenatal development, the umbilical cord is physiologically and genetically part of the fetus and (in humans) normally contains two arteries (the umbilical arteries) and one vein (the umbilical vein), buried within Wharton's jelly. The umbilical vein supplies the fetus with oxygenated, nutrient-rich blood from the placenta. Conversely, the fetal heart pumps low-oxygen, nutrient-depleted blood through the umbilical arteries back to the placenta. Structure and development The umbilical cord develops from and contains remnants of the yolk sac and allantois. 
It forms by the fifth week of development, replacing the yolk sac as the source of nutrients for the embryo. The cord is not directly connected to the mother's circulatory system, but instead joins the placenta, which transfers materials to and from the maternal blood without allowing direct mixing. The length of the umbilical cord is approximately equal to the crown-rump length of the fetus throughout pregnancy. The umbilical cord in a full term neonate is usually about 50 centimeters (20 in) long and about 2 centimeters (0.75 in) in diameter. This diameter decreases rapidly within the placenta. The fully patent umbilical artery has two main layers: an outer layer consisting of circularly arranged smooth muscle cells and an inner layer which shows rather irregularly and loosely arranged cells embedded in abundant ground substance staining metachromatic. The smooth muscle cells of the layer are rather poorly differentiated, contain only a few tiny myofilaments and are thereby unlikely to contribute actively to the process of post-natal closure. Umbilical cord can be detected on ultrasound by 6 weeks of gestation and well-visualised by 8 to 9 weeks of gestation. The umbilical cord lining is a good source of mesenchymal and epith The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What connection is required for a newborn baby to begin breathing? A. connection to the father B. connection to the placenta C. connection to a ventilator D. connection to a heat source Answer:
sciq-9987
multiple_choice
The passenger pigeon, the dodo bird, and the woolly mammoth represent individual cases of what fate?
[ "accumulation", "extinction", "isolation", "compression" ]
B
Relevant Documents: Document 0::: Tinbergen's four questions, named after 20th-century biologist Nikolaas Tinbergen, are complementary categories of explanations for animal behaviour. These are also commonly referred to as levels of analysis. The framework suggests that an integrative understanding of behaviour must include ultimate (evolutionary) explanations, in particular (1) behavioural adaptive functions and (2) phylogenetic history, and proximate explanations, namely (3) underlying physiological mechanisms and (4) ontogenetic/developmental history. Four categories of questions and explanations When asked about the purpose of sight in humans and animals, even elementary-school children can answer that animals have vision to help them find food and avoid danger (function/adaptation). Biologists have three additional explanations: sight is caused by a particular series of evolutionary steps (phylogeny), the mechanics of the eye (mechanism/causation), and even the process of an individual's development (ontogeny). This schema constitutes a basic framework of the overlapping behavioural fields of ethology, behavioural ecology, comparative psychology, sociobiology, evolutionary psychology, and anthropology. Julian Huxley identified the first three questions. Niko Tinbergen gave only the fourth question, as Huxley's questions failed to distinguish between survival value and evolutionary history; Tinbergen's fourth question helped resolve this problem. Evolutionary (ultimate) explanations First question: Function (adaptation) Darwin's theory of evolution by natural selection is the only scientific explanation for why an animal's behaviour is usually well adapted for survival and reproduction in its environment. However, claiming that a particular mechanism is well suited to the present environment is different from claiming that this mechanism was selected for in the past due to its history of being adaptive. The literature conceptualizes the relationship between function and evolution in two ways. 
On the one hand, function Document 1::: Plant evolution is the subset of evolutionary phenomena that concern plants. Evolutionary phenomena are characteristics of populations that are described by averages, medians, distributions, and other statistical methods. This distinguishes plant evolution from plant development, a branch of developmental biology which concerns the changes that individuals go through in their lives. The study of plant evolution attempts to explain how the present diversity of plants arose over geologic time. It includes the study of genetic change and the consequent variation that often results in speciation, one of the most important types of radiation into taxonomic groups called clades. A description of radiation is called a phylogeny and is often represented by type of diagram called a phylogenetic tree. Evolutionary trends Differences between plant and animal physiology and reproduction cause minor differences in how they evolve. One major difference is the totipotent nature of plant cells, allowing them to reproduce asexually much more easily than most animals. They are also capable of polyploidy – where more than two chromosome sets are inherited from the parents. This allows relatively fast bursts of evolution to occur, for example by the effect of gene duplication. The long periods of dormancy that seed plants can employ also makes them less vulnerable to extinction, as they can "sit out" the tough periods and wait until more clement times to leap back to life. The effect of these differences is most profoundly seen during extinction events. These events, which wiped out between 6 and 62% of terrestrial animal families, had "negligible" effect on plant families. However, the ecosystem structure is significantly rearranged, with the abundances and distributions of different groups of plants changing profoundly. 
These effects are perhaps due to the higher diversity within families, as extinction – which was common at the species level – was very selective. For example, win Document 2::: Animals are multicellular eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, are able to move, reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million in total. Animals range in size from 8.5 millionths of a metre to long and have complex interactions with each other and their environments, forming intricate food webs. The study of animals is called zoology. Animals may be listed or indexed by many criteria, including taxonomy, status as endangered species, their geographical location, and their portrayal and/or naming in human culture. By common name List of animal names (male, female, young, and group) By aspect List of common household pests List of animal sounds List of animals by number of neurons By domestication List of domesticated animals By eating behaviour List of herbivorous animals List of omnivores List of carnivores By endangered status IUCN Red List endangered species (Animalia) United States Fish and Wildlife Service list of endangered species By extinction List of extinct animals List of extinct birds List of extinct mammals List of extinct cetaceans List of extinct butterflies By region Lists of amphibians by region Lists of birds by region Lists of mammals by region Lists of reptiles by region By individual (real or fictional) Real Lists of snakes List of individual cats List of oldest cats List of giant squids List of individual elephants List of historical horses List of leading Thoroughbred racehorses List of individual apes List of individual bears List of giant pandas List of individual birds List 
of individual bovines List of individual cetaceans List of individual dogs List of oldest dogs List of individual monkeys List of individual pigs List of w Document 3::: Extinction is the termination of a taxon by the death of its last member. A taxon may become functionally extinct before the death of its last member if it loses the capacity to reproduce and recover. Because a species' potential range may be very large, determining this moment is difficult, and is usually done retrospectively. This difficulty leads to phenomena such as Lazarus taxa, where a species presumed extinct abruptly "reappears" (typically in the fossil record) after a period of apparent absence. More than 99% of all species that ever lived on Earth, amounting to over five billion species, are estimated to have died out. It is estimated that there are currently around 8.7 million species of eukaryote globally, and possibly many times more if microorganisms, like bacteria, are included. Notable extinct animal species include non-avian dinosaurs, saber-toothed cats, dodos, mammoths, ground sloths, thylacines, trilobites, and golden toads. Through evolution, species arise through the process of speciation—where new varieties of organisms arise and thrive when they are able to find and exploit an ecological niche—and species become extinct when they are no longer able to survive in changing conditions or against superior competition. The relationship between animals and their ecological niches has been firmly established. A typical species becomes extinct within 10 million years of its first appearance, although some species, called living fossils, survive with little to no morphological change for hundreds of millions of years. Mass extinctions are relatively rare events; however, isolated extinctions of species and clades are quite common, and are a natural part of the evolutionary process. 
Only recently have extinctions been recorded and scientists have become alarmed at the current high rate of extinctions. Most species that become extinct are never scientifically documented. Some scientists estimate that up to half of presently existing plant and animal Document 4::: The history of life on Earth seems to show a clear trend; for example, it seems intuitive that there is a trend towards increasing complexity in living organisms. More recently evolved organisms, such as mammals, appear to be much more complex than organisms, such as bacteria, which have existed for a much longer period of time. However, there are theoretical and empirical problems with this claim. From a theoretical perspective, it appears that there is no reason to expect evolution to result in any largest-scale trends, although small-scale trends, limited in time and space, are expected (Gould, 1997). From an empirical perspective, it is difficult to measure complexity and, when it has been measured, the evidence does not support a largest-scale trend (McShea, 1996). History Many of the founding figures of evolution supported the idea of Evolutionary progress which has fallen from favour, but the work of Francisco J. Ayala and Michael Ruse suggests is still influential. Hypothetical largest-scale trends McShea (1998) discusses eight features of organisms that might indicate largest-scale trends in evolution: entropy, energy intensiveness, evolutionary versatility, developmental depth, structural depth, adaptedness, size, complexity. He calls these "live hypotheses", meaning that trends in these features are currently being considered by evolutionary biologists. McShea observes that the most popular hypothesis, among scientists, is that there is a largest-scale trend towards increasing complexity. 
Evolutionary theorists agree that there are local trends in evolution, such as increasing brain size in hominids, but these directional changes do not persist indefinitely, and trends in opposite directions also occur (Gould, 1997). Evolution causes organisms to adapt to their local environment; when the environment changes, the direction of the trend may change. The question of whether there is evolutionary progress is better formulated as the question of whether The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The passenger pigeon, the dodo bird, and the woolly mammoth represent individual cases of what fate? A. accumulation B. extinction C. isolation D. compression Answer:
sciq-7566
multiple_choice
Dinosaurs filled the niches that mammals fill today during which era?
[ "mesozoic", "Cenozoic", "Proterozoic", "Phanerozoic" ]
A
Relavent Documents: Document 0::: The Romer-Simpson Medal is the highest award issued by the Society of Vertebrate Paleontology for "sustained and outstanding scholarly excellence and service to the discipline of vertebrate paleontology". The award is named in honor of Alfred S. Romer and George G. Simpson. Past awards Source: Society for Vertebrate Paleontology 1987 Everett C. Olson 1988 Bobb Schaeffer 1989 Edwin H. Colbert 1990 Richard Estes 1991 no award 1992 Loris S. Russell 1993 Zhou Mingzhen 1994 John H. Ostrom 1995 Zofia Kielan-Jaworowska 1996 Percy Butler 1997 Colin Patterson 1998 Albert E. Wood 1999 Robert Warren Wilson 2000 John A. Wilson 2001 Malcolm McKenna 2002 Mary R. Dawson 2003 Rainer Zangerl 2004 Robert L. Carroll 2005 Donald E. Russell 2006 William A. Clemens 2007 Wann Langston, Jr. 2008 Jose Bonaparte 2009 Farish Jenkins 2010 Rinchen Barsbold 2011 Alfred W. Crompton 2012 Philip D. Gingerich 2013 Jack Horner 2014 Hans-Peter Schultze 2015 Jim Hopson 2016 Mee-mann Chang 2017 Philip J. Currie 2018 Kay Behrensmeyer 2019 Michael Archer 2020 Jenny Clack 2021 Blaire Van Valkenburgh 2022 David W. Krause See also List of biology awards List of paleontology awards Document 1::: Timeline Paleontology Paleontology timelines Document 2::: Paleornithology, also known as avian paleontology, is the scientific study of bird evolution and fossil birds. It is a hybrid of ornithology and paleontology. Paleornithology began with the discovery of Archaeopteryx. The reptilian relationship of birds and their ancestors, the theropod dinosaurs, are important aspects of paleornithological research. Other areas of interest to paleornithologists are the early sea-birds Ichthyornis, Hesperornis, and others. Notable paleornithologists are Storrs L. Olson, Alexander Wetmore, Alan Feduccia, Cécile Mourer-Chauviré, Philip Ashmole, Pierce Brodkorb, Trevor H. Worthy, Zhou Zhonghe, Yevgeny Kurochkin, Bradley C. Livezey, Gareth J. Dyke, Luis M. Chiappe, Gerald Mayr and David Steadman. 
Document 3::: The nocturnal bottleneck hypothesis is a hypothesis to explain several mammalian traits. In 1942, Gordon Lynn Walls described this concept which states that placental mammals were mainly or even exclusively nocturnal through most of their evolutionary history, starting with their origin 225 million years ago, and only ending with the demise of the non-avian dinosaurs 66 million years ago. While some mammal groups have later evolved to fill diurnal niches, the approximately 160 million years spent as nocturnal animals has left a lasting legacy on basal anatomy and physiology, and most mammals are still nocturnal. Evolution of mammals Mammals evolved from cynodonts, a group of superficially dog-like synapsids in the wake of the Permian–Triassic mass extinction. The emerging archosaurian groups that flourished after the extinction, including crocodiles and dinosaurs and their ancestors, drove the remaining larger cynodonts into extinction, leaving only the smaller forms. The surviving cynodonts could only succeed in niches with minimal competition from the diurnal dinosaurs, evolving into the typical small-bodied insectivorous dwellers of the nocturnal undergrowth. While the early mammals continued to develop into several probably quite common groups of animals during the Mesozoic, they all remained relatively small and nocturnal. Only with the massive extinction at the end of the Cretaceous did the dinosaurs leave the stage open for the establishment of a new fauna of mammals. Despite this, mammals continued to be small-bodied for millions of years. While all the largest animals alive today are mammals, the majority of mammals are still small nocturnal animals. Mammalian nocturnal adaptions Several different features of mammalian physiology appear to be adaptations to a nocturnal lifestyle, mainly related to the sensory organs. These include: Senses Acute sense of hearing, including coiling cochleae, external pinnae and auditory ossicles. 
Very good sense of sm Document 4::: Sinoconodon is an extinct genus of mammaliamorphs that appears in the fossil record of the Lufeng Formation of China in the Sinemurian stage of the Early Jurassic period, about 193 million years ago. While sharing many plesiomorphic traits with other non-mammaliaform cynodonts, it possessed a special, secondarily evolved jaw joint between the dentary and the squamosal bones, which in more derived taxa would replace the primitive tetrapod one between the articular and quadrate bones. The presence of a dentary-squamosal joint is a trait historically used to define mammals. Description This animal had skull of which suggest a presacral body length of and weight about due to the similar parameters to the European hedgehog. Sinoconodon closely resembled early mammaliaforms like Morganucodon, but it is regarded as more basal, differing substantially from Morganucodon in its dentition and growth habits. Like most other non-mammalian tetrapods, such as reptiles and amphibians, it was polyphyodont, replacing many of its teeth throughout its lifetime, and it seems to have grown slowly but continuously until its death. It was thus somewhat less mammal-like than mammaliaforms such as morganucodonts and docodonts. The combination of basal tetrapod and mammalian features makes it a unique transitional fossil. Taxonomy Sinoconodon was named by Patterson and Olson in 1961. Its type is Sinoconodon rigneyi. It was assigned to Triconodontidae by Patterson and Olson in 1961; to Triconodonta by Jenkins and Crompton in 1979; to Sinoconodontidae by Carroll in 1988; to Mammaliamorpha by Wible in 1991; to Mammalia by Luo and Wu in 1994; to Mammalia by Kielan-Jaworowska et al. in 2004; and to Mammaliaformes by Luo et al. in 2001 and Bi et al. in 2014. Phylogeny The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Dinosaurs filled the niches that mammals fill today during which era? A. 
mesozoic B. Cenozoic C. Proterozoic D. Phanerozoic Answer:
sciq-10344
multiple_choice
What machines have scientists built to smash particles that are smaller than atoms into each other head-on?
[ "energy accelerators", "particle accelerators", "Atom Accelerators", "nitrogen accelerators" ]
B
Relavent Documents: Document 0::: Applied physics is the application of physics to solve scientific or engineering problems. It is usually considered a bridge or a connection between physics and engineering. "Applied" is distinguished from "pure" by a subtle combination of factors, such as the motivation and attitude of researchers and the nature of the relationship to the technology or science that may be affected by the work. Applied physics is rooted in the fundamental truths and basic concepts of the physical sciences but is concerned with the utilization of scientific principles in practical devices and systems and with the application of physics in other areas of science and high technology. Examples of research and development areas Accelerator physics Acoustics Atmospheric physics Biophysics Brain–computer interfacing Chemistry Chemical physics Differentiable programming Artificial intelligence Scientific computing Engineering physics Chemical engineering Electrical engineering Electronics Sensors Transistors Materials science and engineering Metamaterials Nanotechnology Semiconductors Thin films Mechanical engineering Aerospace engineering Astrodynamics Electromagnetic propulsion Fluid mechanics Military engineering Lidar Radar Sonar Stealth technology Nuclear engineering Fission reactors Fusion reactors Optical engineering Photonics Cavity optomechanics Lasers Photonic crystals Geophysics Materials physics Medical physics Health physics Radiation dosimetry Medical imaging Magnetic resonance imaging Radiation therapy Microscopy Scanning probe microscopy Atomic force microscopy Scanning tunneling microscopy Scanning electron microscopy Transmission electron microscopy Nuclear physics Fission Fusion Optical physics Nonlinear optics Quantum optics Plasma physics Quantum technology Quantum computing Quantum cryptography Renewable energy Space physics Spectroscopy See also Applied science Applied mathematics Engineering Engineering Physics High Technology 
Document 1::: A collider is a type of particle accelerator that brings two opposing particle beams together such that the particles collide. Colliders may either be ring accelerators or linear accelerators. Colliders are used as a research tool in particle physics by accelerating particles to very high kinetic energy and letting them impact other particles. Analysis of the byproducts of these collisions gives scientists good evidence of the structure of the subatomic world and the laws of nature governing it. These may become apparent only at high energies and for tiny periods of time, and therefore may be hard or impossible to study in other ways. Explanation In particle physics one gains knowledge about elementary particles by accelerating particles to very high kinetic energy and letting them impact on other particles. For sufficiently high energy, a reaction occurs that transforms the particles into other particles. Detecting these products gives insight into the physics involved. To do such experiments there are two possible setups: Fixed target setup: A beam of particles (the projectiles) is accelerated with a particle accelerator, and as collision partner, one puts a stationary target into the path of the beam. Collider: Two beams of particles are accelerated and the beams are directed against each other, so that the particles collide while flying in opposite directions. This process can be used to make strange and anti-matter. The collider setup is harder to construct but has the great advantage that according to special relativity the energy of an inelastic collision between two particles approaching each other with a given velocity is not just 4 times as high as in the case of one particle resting (as it would be in non-relativistic physics); it can be orders of magnitude higher if the collision velocity is near the speed of light. In the case of a collider where the collision point is at rest in the laboratory frame (i.e. p1 = -p2
), the center of mass energy (the ene Document 2::: A list of particle accelerators used for particle physics experiments. Some early particle accelerators that more properly did nuclear physics, but existed prior to the separation of particle physics from that field, are also included. Although a modern accelerator complex usually has several stages of accelerators, only accelerators whose output has been used directly for experiments are listed. Early accelerators These all used single beams with fixed targets. They tended to have very briefly run, inexpensive, and unnamed experiments. Cyclotrons [1] The magnetic pole pieces and return yoke from the 60-inch cyclotron were later moved to UC Davis and incorporated into a 76-inch isochronous cyclotron which is still in use today Other early accelerator types Synchrotrons Fixed-target accelerators More modern accelerators that were also run in fixed target mode; often, they will also have been run as colliders, or accelerated particles for use in subsequently built colliders. High intensity hadron accelerators (Meson and neutron sources) Electron and low intensity hadron accelerators Colliders Electron–positron colliders Hadron colliders Electron-proton colliders Light sources Hypothetical accelerators Besides the real accelerators listed above, there are hypothetical accelerators often used as hypothetical examples or optimistic projects by particle physicists. Eloisatron (Eurasiatic Long Intersecting Storage Accelerator) was a project of INFN headed by Antonio Zichichi at the Ettore Majorana Foundation and Centre for Scientific Culture in Erice, Sicily. The center-of-mass energy was planned to be 200 TeV, and the size was planned to span parts of Europe and Asia. Fermitron was an accelerator sketched by Enrico Fermi on a notepad in the 1940s proposing an accelerator in stable orbit around the Earth. The undulator radiation collider is a design for an accelerator with a center-of-mass energy around the GUT scale. 
It would be light-weeks across a Document 3::: An electrostatic particle accelerator is a particle accelerator in which charged particles are accelerated to a high energy by a static high voltage potential. This contrasts with the other major category of particle accelerator, oscillating field particle accelerators, in which the particles are accelerated by oscillating electric fields. Owing to their simpler design, electrostatic types were the first particle accelerators. The two most common types are the Van de Graaf generator invented by Robert Van de Graaff in 1929, and the Cockcroft-Walton accelerator invented by John Cockcroft and Ernest Walton in 1932. The maximum particle energy produced by electrostatic accelerators is limited by the maximum voltage which can be achieved the machine. This is in turn limited by insulation breakdown to a few megavolts. Oscillating accelerators do not have this limitation, so they can achieve higher particle energies than electrostatic machines. The advantages of electrostatic accelerators over oscillating field machines include lower cost, the ability to produce continuous beams, and higher beam currents that make them useful to industry. As such, they are by far the most widely used particle accelerators, with industrial applications such as plastic shrink wrap production, high power X-ray machines, radiation therapy in medicine, radioisotope production, ion implanters in semiconductor production, and sterilization. Many universities worldwide have electrostatic accelerators for research purposes. High energy oscillating field accelerators usually incorporate an electrostatic machine as their first stage, to accelerate particles to a high enough velocity to inject into the main accelerator. Electrostatic accelerators are a subset of linear accelerators (linacs). 
While all linacs accelerate particles in a straight line, electrostatic accelerators use a fixed accelerating field from a single high voltage source, while radiofrequency linacs use oscillating electric fields. Document 4::: The Santa Cruz Institute for Particle Physics (SCIPP) is an organized research unit within the University of California system focused on theoretical and experimental high-energy physics and astrophysics. Research SCIPP's scientific and technical staff are and have been involved in several cutting edge research projects for more than 25 years, in both theory and experiment. The primary focus is particle physics and particle astrophysics, including the development of technologies needed to advance that research. SCIPP is also pursuing the application of those technologies to other scientific fields such as neuroscience and biomedicine. The Institute is recognized as a leader in the development of custom readout electronics and silicon micro-strip sensors for state-of-the-art particle detection systems. This department has several faculty associated with the Stanford Linear Accelerator Center (SLAC) or the ATLAS project at CERN. There are many experiments being performed at any time within SCIPP but many center on silicon strip particle detectors and their properties before and after radioactive exposure. Also many of the faculty work on Monte Carlo simulations and tracking particles within particle colliders. Their most prominent project in recent history has been the development of the Gamma-ray Large Area Space Telescope (GLAST) which searches the sky for Gamma Ray Bursts.
Members Notable faculty include: Anthony Aguirre, theoretical cosmologist Tom Banks, co-discoverer of M(atrix) theory in string theory George Blumenthal, astronomer, chancellor of UCSC Michael Dine, high-energy theorist, recipient of Sakurai prize, physics department chair Howard Haber, theoretical particle physicist, recipient of Sakurai prize Piero Madau, recipient of Dannie Heineman Prize for Astrophysics Joel Primack, quantum field theorist and cosmologist, director of AstroComputing Center Constance Rockosi, chair of astronomy department Terry Schalk The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What machines have scientists built to smash particles that are smaller than atoms into each other head-on? A. energy accelerators B. particle accelerators C. Atom Accelerators D. nitrogen accelerators Answer:
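The collider advantage described in Document 1 above (center-of-mass energy far exceeding the fixed-target case at high beam energy) can be sketched numerically. This is a minimal illustration in natural units (GeV); the proton rest energy and the LHC-like beam energy are assumed values for the example, not taken from the source:

```python
import math

# Compare center-of-mass energy sqrt(s) for two identical particles:
# head-on collider vs. fixed target (natural units, energies in GeV).
m = 0.938        # proton rest energy (GeV)
E_beam = 6500.0  # energy per beam (GeV) -- illustrative, LHC-like value

# Head-on collider with equal and opposite momenta: sqrt(s) = 2 * E_beam
sqrt_s_collider = 2 * E_beam

# Fixed target: s = 2*m*E_beam + 2*m^2, so sqrt(s) grows only like sqrt(E_beam)
sqrt_s_fixed = math.sqrt(2 * m * E_beam + 2 * m * m)

print(sqrt_s_collider, sqrt_s_fixed)
```

For beam energies far above the rest energy, the fixed-target value grows only like the square root of the beam energy, which is why high-energy frontier machines are built as colliders.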
sciq-2585
multiple_choice
What are the proteins called that speed up biochemical reactions in cells?
[ "carbohydrates", "hormones", "peptides", "enzymes" ]
D
Relavent Documents: Document 0::: Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs, as well as organism structure and function. Biochemistry is closely related to molecular biology, which is the study of the molecular mechanisms of biological phenomena. Much of biochemistry deals with the structures, bonding, functions, and interactions of biological macromolecules, such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers, with the goal of improving crop cultivation, crop storage, and pest control. 
In recent decades, biochemical principles a Document 1::: This is a list of topics in molecular biology. See also index of biochemistry articles. Document 2::: Biochemistry is the study of the chemical processes in living organisms. It deals with the structure and function of cellular components such as proteins, carbohydrates, lipids, nucleic acids and other biomolecules. Articles related to biochemistry include: 0–9 2-amino-5-phosphonovalerate - 3' end - 5' end Document 3::: This is a list of articles that describe particular biomolecules or types of biomolecules. A For substances with an A- or α- prefix such as α-amylase, please see the parent page (in this case Amylase). A23187 (Calcimycin, Calcium Ionophore) Abamectine Abietic acid Acetic acid Acetylcholine Actin Actinomycin D Adenine Adenosmeme Adenosine diphosphate (ADP) Adenosine monophosphate (AMP) Adenosine triphosphate (ATP) Adenylate cyclase Adiponectin Adonitol Adrenaline, epinephrine Adrenocorticotropic hormone (ACTH) Aequorin Aflatoxin Agar Alamethicin Alanine Albumins Aldosterone Aleurone Alpha-amanitin Alpha-MSH (Melaninocyte stimulating hormone) Allantoin Allethrin α-Amanatin, see Alpha-amanitin Amino acid Amylase (also see α-amylase) Anabolic steroid Anandamide (ANA) Androgen Anethole Angiotensinogen Anisomycin Antidiuretic hormone (ADH) Anti-Müllerian hormone (AMH) Arabinose Arginine Argonaute Ascomycin Ascorbic acid (vitamin C) Asparagine Aspartic acid Asymmetric dimethylarginine ATP synthase Atrial-natriuretic peptide (ANP) Auxin Avidin Azadirachtin A – C35H44O16 B Bacteriocin Beauvericin beta-Hydroxy beta-methylbutyric acid beta-Hydroxybutyric acid Bicuculline Bilirubin Biopolymer Biotin (Vitamin H) Brefeldin A Brassinolide Brucine Butyric acid C Document 4::: In molecular biology, biosynthesis is a multi-step, enzyme-catalyzed process where substrates are converted into more complex products in living organisms. 
In biosynthesis, simple compounds are modified, converted into other compounds, or joined to form macromolecules. This process often consists of metabolic pathways. Some of these biosynthetic pathways are located within a single cellular organelle, while others involve enzymes that are located within multiple cellular organelles. Examples of these biosynthetic pathways include the production of lipid membrane components and nucleotides. Biosynthesis is usually synonymous with anabolism. The prerequisite elements for biosynthesis include: precursor compounds, chemical energy (e.g. ATP), and catalytic enzymes which may need coenzymes (e.g. NADH, NADPH). These elements create monomers, the building blocks for macromolecules. Some important biological macromolecules include: proteins, which are composed of amino acid monomers joined via peptide bonds, and DNA molecules, which are composed of nucleotides joined via phosphodiester bonds. Properties of chemical reactions Biosynthesis occurs due to a series of chemical reactions. For these reactions to take place, the following elements are necessary: Precursor compounds: these compounds are the starting molecules or substrates in a reaction. These may also be viewed as the reactants in a given chemical process. Chemical energy: chemical energy can be found in the form of high energy molecules. These molecules are required for energetically unfavorable reactions. Furthermore, the hydrolysis of these compounds drives a reaction forward. High energy molecules, such as ATP, have three phosphates. Often, the terminal phosphate is split off during hydrolysis and transferred to another molecule. Catalysts: these may be for example metal ions or coenzymes and they catalyze a reaction by increasing the rate of the reaction and lowering the activation energy. In the sim The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. 
What are the proteins called that speed up biochemical reactions in cells? A. carbohydrates B. hormones C. peptides D. enzymes Answer:
sciq-3014
multiple_choice
What are two common growth patterns of populations?
[ "exponential and economical", "migratory and logistic", "organic and inorganic", "exponential and logistic" ]
D
Relavent Documents: Document 0::: Population dynamics is the type of mathematics used to model and study the size and age composition of populations as dynamical systems. History Population dynamics has traditionally been the dominant branch of mathematical biology, which has a history of more than 220 years, although over the last century the scope of mathematical biology has greatly expanded. The beginning of population dynamics is widely regarded as the work of Malthus, formulated as the Malthusian growth model. According to Malthus, assuming that the conditions (the environment) remain constant (ceteris paribus), a population will grow (or decline) exponentially. This principle provided the basis for the subsequent predictive theories, such as the demographic studies such as the work of Benjamin Gompertz and Pierre François Verhulst in the early 19th century, who refined and adjusted the Malthusian demographic model. A more general model formulation was proposed by F. J. Richards in 1959, further expanded by Simon Hopkins, in which the models of Gompertz, Verhulst and also Ludwig von Bertalanffy are covered as special cases of the general formulation. The Lotka–Volterra predator-prey equations are another famous example, as well as the alternative Arditi–Ginzburg equations. Logistic function Simplified population models usually start with four key variables (four demographic processes) including death, birth, immigration, and emigration. Mathematical models used to calculate changes in population demographics and evolution hold the assumption of no external influence. Models can be more mathematically complex where "...several competing hypotheses are simultaneously confronted with the data." 
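The Malthusian (exponential) and Verhulst (logistic) growth patterns described above can be sketched with a few Euler steps; the values of r, K, N0, and the step size are illustrative assumptions, not from the source:

```python
# Minimal sketch of the two classic growth patterns: exponential vs. logistic.
# r (growth rate), K (carrying capacity), N0 (initial size) are illustrative.
r, K, N0 = 0.1, 1000.0, 10.0
dt, steps = 0.1, 1000  # integrate to t = 100 with simple Euler steps

n_exp = n_log = N0
for _ in range(steps):
    n_exp += r * n_exp * dt                    # dN/dt = r*N        (unbounded)
    n_log += r * n_log * (1 - n_log / K) * dt  # dN/dt = r*N*(1-N/K) (saturates)

print(round(n_exp), round(n_log))
```

The exponential trajectory grows without bound, while the logistic one levels off just below the carrying capacity K.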
For example, in a closed system where immigration and emigration do not take place, the rate of change in the number of individuals in a population can be described as dN/dT = B − D, where N is the total number of individuals in the specific experimental population being studied, B is the number of births and D is Document 1::: Bounded growth occurs when the growth rate of a mathematical function is constantly increasing at a decreasing rate. Asymptotically, bounded growth approaches a fixed value. This contrasts with exponential growth, which is constantly increasing at an accelerating rate, and therefore approaches infinity in the limit. An example of bounded growth is the logistic function. Document 2::: The growth curve model in statistics is a specific multivariate linear model, also known as GMANOVA (Generalized Multivariate Analysis-Of-Variance). It generalizes MANOVA by allowing post-matrices, as seen in the definition. Definition Growth curve model: Let X be a p×n random matrix corresponding to the observations, A a p×q within design matrix with q ≤ p, B a q×k parameter matrix, C a k×n between individual design matrix with rank(C) + p ≤ n and let Σ be a positive-definite p×p matrix. Then X = ABC + Σ^{1/2}E defines the growth curve model, where A and C are known, B and Σ are unknown, and E is a random matrix distributed as Np,n(0,Ip,n). This differs from standard MANOVA by the addition of C, a "postmatrix". History Many writers have considered the growth curve analysis, among them Wishart (1938), Box (1950) and Rao (1958). Potthoff and Roy in 1964 were the first to analyze longitudinal data applying GMANOVA models. Applications GMANOVA is frequently used for the analysis of surveys, clinical trials, and agricultural data, as well as more recently in the context of Radar adaptive detection. Other uses In mathematical statistics, growth curves such as those used in biology are often modeled as being continuous stochastic processes, e.g.
as being sample paths that almost surely solve stochastic differential equations. Growth curves have also been applied in forecasting market development. When variables are measured with error, a Latent growth modeling SEM can be used. Footnotes Document 3::: In epidemiology, the next-generation matrix is used to derive the basic reproduction number, for a compartmental model of the spread of infectious diseases. In population dynamics it is used to compute the basic reproduction number for structured population models. It is also used in multi-type branching models for analogous computations. The method to compute the basic reproduction ratio using the next-generation matrix is given by Diekmann et al. (1990) and van den Driessche and Watmough (2002). To calculate the basic reproduction number by using a next-generation matrix, the whole population is divided into n compartments in which there are m < n infected compartments. Let x_i, i = 1, 2, …, m, be the numbers of infected individuals in the i-th infected compartment at time t. Now, the epidemic model is dx_i/dt = F_i(x) − V_i(x), where V_i(x) = V_i^−(x) − V_i^+(x). In the above equations, F_i(x) represents the rate of appearance of new infections in compartment i, V_i^+(x) represents the rate of transfer of individuals into compartment i by all other means, and V_i^−(x) represents the rate of transfer of individuals out of compartment i. The above model can also be written as dx/dt = F(x) − V(x), where F(x) = (F_1(x), …, F_m(x))^T and V(x) = (V_1(x), …, V_m(x))^T. Let x_0 be the disease-free equilibrium. The values of the parts of the Jacobian matrices F and V are F = [∂F_i(x_0)/∂x_j] and V = [∂V_i(x_0)/∂x_j] respectively. Here, F and V are m × m matrices, with 1 ≤ i, j ≤ m. Now, the matrix FV^−1 is known as the next-generation matrix. The basic reproduction number of the model is then given by the eigenvalue of FV^−1 with the largest absolute value (the spectral radius of FV^−1). Next generation matrices can be computationally evaluated from observational data, which is often the most productive approach where there are large numbers of compartments.
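The next-generation-matrix recipe described in Document 3 can be sketched numerically. The SEIR-style rates below are illustrative values, not taken from the text; for this two-compartment structure the spectral radius of FV⁻¹ reduces to beta/gamma.

```python
import math

def inv2(m):
    """Inverse of a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def spectral_radius2(m):
    """Largest |eigenvalue| of a 2x2 matrix via its characteristic polynomial."""
    (a, b), (c, d) = m
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc >= 0:  # real eigenvalues
        r = math.sqrt(disc)
        return max(abs((tr + r) / 2), abs((tr - r) / 2))
    return math.sqrt(det)  # complex conjugate pair: |lambda| = sqrt(det)

# Hypothetical SEIR rates (illustrative, not from the text):
beta, sigma, gamma = 0.5, 0.2, 0.25   # transmission, progression, recovery

# Infected compartments x = (E, I), linearized at the disease-free equilibrium.
F = [[0.0, beta], [0.0, 0.0]]         # new infections enter E, driven by I
V = [[sigma, 0.0], [-sigma, gamma]]   # transfers between/out of compartments

K = matmul2(F, inv2(V))               # next-generation matrix F V^{-1}
R0 = spectral_radius2(K)              # basic reproduction number
# For this structure K reduces to [[beta/gamma, beta/gamma], [0, 0]],
# so R0 = beta/gamma = 2.0 here.
```

Swapping in a different compartment structure only changes F and V; the spectral-radius step is unchanged.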
See also Mathematical modelling of infectious disease Document 4::: The Law of Maximum, also known as Law of the Maximum, is a principle developed by Arthur Wallace which states that total growth of a crop or a plant is proportional to about 70 growth factors. Growth will not be greater than the aggregate values of the growth factors. Without the correction of the limiting growth factors, nutrients, waters and other inputs are not fully or judiciously used, resulting in wasted resources. Applications The factors range from 0 for no growth to 1 for maximum growth. Actual growth is calculated by the total multiplication of each growth factor. For example, if ten factors had a value of 0.5, the actual growth would be: 0.5 x 0.5 x 0.5 x 0.5 x 0.5 x 0.5 x 0.5 x 0.5 x 0.5 x 0.5 = 0.001, which is 0.1% of optimum. If each of ten factors had a value of 0.9, the actual growth would be: 0.9 x 0.9 x 0.9 x 0.9 x 0.9 x 0.9 x 0.9 x 0.9 x 0.9 x 0.9 = 0.349, which is 34.9% of optimum. Hence the need to achieve maximal value for each factor is critical in order to obtain maximal growth. Demonstrations of "Law of the Maximum" The following demonstrates the Law of the Maximum. For the various crops listed below, one, two or three factors were limiting while all the other factors were 1. When two or three factors were simultaneously limiting, predicted growth of the two or three factors was similar to the actual growth when the two or three factors were limiting individually and then multiplied together. Growth Factors A. Adequacy of Nutrients B. Non-nutrient elements and nutrients excesses that cause toxicities (stresses) C. Interactions of the nutrients D. Soil Conditioning requirement and physical processes E. Additional biology F. Weather factors G. Management External links Law of the Maximum, in Handbook of soil science by Malcolm E. Sumner The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are 2 common growth patterns of population? A. exponential and economical B. migratory and logistic C. organic and inorganic D. exponential and logistic Answer:
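The two patterns named in the answer, exponential and logistic, differ only by the density-dependent braking factor (1 − N/K); a minimal Euler-integration sketch with illustrative parameter values:

```python
def step_exponential(n, r, dt=0.01):
    # dN/dt = r N  -- growth rate proportional to population size
    return n + r * n * dt

def step_logistic(n, r, k, dt=0.01):
    # dN/dt = r N (1 - N/K)  -- growth slows as N approaches carrying capacity K
    return n + r * n * (1 - n / k) * dt

n_exp = n_log = 10.0
r, k = 0.5, 1000.0        # illustrative growth rate and carrying capacity
for _ in range(2000):     # integrate 20 time units with small Euler steps
    n_exp = step_exponential(n_exp, r)
    n_log = step_logistic(n_log, r, k)

# Exponential growth blows past the carrying capacity; logistic saturates below K.
assert n_exp > k and n_log < k
```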
ai2_arc-1094
multiple_choice
Human interaction with the environment has led to increasing amounts of acid rain. Which population have humans adversely affected the most by contributing to the production of acid rain?
[ "frogs in a pond ecosystem", "fish in an ocean ecosystem", "bears in a tundra ecosystem", "lions in a grassland ecosystem" ]
A
Relavent Documents: Document 0::: The University of Michigan Biological Station (UMBS) is a research and teaching facility operated by the University of Michigan. It is located on the south shore of Douglas Lake in Cheboygan County, Michigan. The station consists of 10,000 acres (40 km2) of land near Pellston, Michigan in the northern Lower Peninsula of Michigan and 3,200 acres (13 km2) on Sugar Island in the St. Mary's River near Sault Ste. Marie, in the Upper Peninsula. It is one of only 28 Biosphere Reserves in the United States. Overview Founded in 1909, it has grown to include approximately 150 buildings, including classrooms, student cabins, dormitories, a dining hall, and research facilities. Undergraduate and graduate courses are available in the spring and summer terms. It has a full-time staff of 15. In the 2000s, UMBS is increasingly focusing on the measurement of climate change. Its field researchers are gauging the impact of global warming and increased levels of atmospheric carbon dioxide on the ecosystem of the upper Great Lakes region, and are using field data to improve the computer models used to forecast further change. Several archaeological digs have been conducted at the station as well. UMBS field researchers sometimes call the station "bug camp" amongst themselves. This is believed to be due to the number of mosquitoes and other insects present. It is also known as "The Bio-Station". The UMBS is also home to Michigan's most endangered species and one of the most endangered species in the world: the Hungerford's Crawling Water Beetle. The species lives in only five locations in the world, two of which are in Emmet County. One of these, a two and a half mile stretch downstream from the Douglas Road crossing of the East Branch of the Maple River supports the only stable population of the Hungerford's Crawling Water Beetle, with roughly 1000 specimens. 
This area, though technically not part of the UMBS, is largely within and along the boundary of the University of Michigan Document 1::: In nature and human societies, many phenomena have causal relationships where one phenomenon A (a cause) impacts another phenomenon B (an effect). Establishing causal relationships is the aim of many scientific studies across fields ranging from biology and physics to social sciences and economics. It is also a subject of accident analysis, and can be considered a prerequisite for effective policy making. To describe causal relationships between phenomena, non-quantitative visual notations are common, such as arrows, e.g. in the nitrogen cycle or many chemistry and mathematics textbooks. Mathematical conventions are also used, such as plotting an independent variable on a horizontal axis and a dependent variable on a vertical axis, or the notation y = f(x) to denote that a quantity "y" is a dependent variable which is a function of an independent variable "x". Causal relationships are also described using quantitative mathematical expressions. The following examples illustrate various types of causal relationships. These are followed by different notations used to represent causal relationships. Examples What follows does not necessarily assume the convention whereby x denotes an independent variable, and f(x) denotes a function of the independent variable x. Instead, x and y denote two quantities with an a priori unknown causal relationship, which can be related by a mathematical expression. Ecosystem example: correlation without causation Imagine the number of days of weather below zero degrees Celsius, y, causes ice to form on a lake, F, and it causes bears to go into hibernation, H. Even though F does not cause H and vice-versa, one can write an equation relating F and H. This equation may be used to successfully calculate the number of hibernating bears H, given the surface area of the lake covered by ice F.
However, melting the ice in a region of the lake by pouring salt onto it will not cause bears to come out of hibernation. Nor will waking the bears by physically disturbing the Document 2::: Biodiversity loss includes the worldwide extinction of different species, as well as the local reduction or loss of species in a certain habitat, resulting in a loss of biological diversity. The latter phenomenon can be temporary or permanent, depending on whether the environmental degradation that leads to the loss is reversible through ecological restoration/ecological resilience or effectively permanent (e.g. through land loss). The current global extinction (frequently called the sixth mass extinction or Anthropocene extinction) has resulted in a biodiversity crisis being driven by human activities which push beyond the planetary boundaries and so far has proven irreversible. The main direct threats to conservation (and thus causes for biodiversity loss) fall in eleven categories: Residential and commercial development; farming activities; energy production and mining; transportation and service corridors; biological resource usages; human intrusions and activities that alter, destroy, disturb habitats and species from exhibiting natural behaviors; natural system modification; invasive and problematic species, pathogens and genes; pollution; catastrophic geological events, climate change, and so on. Numerous scientists and the IPBES Global Assessment Report on Biodiversity and Ecosystem Services assert that human population growth and overconsumption are the primary factors in this decline. However, other scientists have criticized this, saying that loss of habitat is caused mainly by "the growth of commodities for export" and that population has very little to do with overall consumption, due to country wealth disparities. Climate change is another threat to global biodiversity.
For example, coral reefs – which are biodiversity hotspots – will be lost within the century if global warming continues at the current rate. However, habitat destruction e.g. for the expansion of agriculture, is currently the more significant driver of contemporary biodiversity lo Document 3::: Ian Gordon Simmons (born 22 January 1937) is a British geographer. He retired as Professor of Geography from the University of Durham in 2001. He has made significant contributions to environmental history and prehistoric archaeology. Background Simmons grew up in East London and then East Lincolnshire until the age of 12. He studied physical geography (BSc) and holds a PhD from the University of London (early 1960s) on the vegetation history of Dartmoor. He began university lecturing in his early 20s and was Lecturer and then Reader in Geography at the University of Durham from 1962 to 1977, then Professor of Geography at the University of Bristol from 1977 to 1981 before returning to a Chair in Geography at Durham, where he worked until retiring in 2001. In 1972–73, he taught biogeography for a year at York University, Canada and has held other appointments including Visiting Scholar, St. John's College, University of Oxford in the 1990s. Previously, he had been an ACLS postdoctoral fellow at the University of California, Berkeley. Scholarship His research includes the study of the later Mesolithic and early Neolithic in their environmental setting on English uplands, where he has demonstrated the role of these early human communities in initiating some of Britain's characteristic landscape elements. His work also encompasses the long-term effects of human manipulation of the natural environment and its consequences for resource use and environmental change. This line of work resulted in his last three books, which looked at environmental history on three nested scales: the moorlands of England and Wales, Great Britain, and the Globe. 
Each dealt with the last 10,000 years and tried to encompass both conventional science-based data and the insights of the social sciences and humanities. Simmons has authored several books on environmental thought and culture over the ages as well as contemporary resource management and environmental problems. Since retireme Document 4::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools to which the student planned to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests, and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological and molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed from either the E or M test. This test was graded on a scale between 200 and 800. The average for Molecular was 630 while Ecological was 591. On January 19, 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology.
There were more specific questions relating, respectively, to ecological concepts (such as population studies and general ecology) on the E test and to molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Human interaction with the environment has led to increasing amounts of acid rain. Which population have humans adversely affected the most by contributing to the production of acid rain? A. frogs in a pond ecosystem B. fish in an ocean ecosystem C. bears in a tundra ecosystem D. lions in a grassland ecosystem Answer:
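The lake example from the causality passage above (cold weather drives both ice cover and bear hibernation) can be simulated directly; the linear coefficients are arbitrary illustrative choices:

```python
import random

random.seed(0)
# Toy version of the lake example: cold weather (the shared cause) drives both
# ice cover and bear hibernation; neither causes the other.
cold_days = [random.randint(0, 60) for _ in range(200)]  # hypothetical winters
ice = [3.0 * c for c in cold_days]      # ice area grows with cold days
bears = [0.5 * c for c in cold_days]    # hibernating bears grow with cold days

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Ice and bears are perfectly correlated here, yet intervening on one would not
# change the other -- both are functions of the shared cause only.
assert corr(ice, bears) > 0.99
```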
sciq-10283
multiple_choice
How do you determine the atomic weight of an element?
[ "subtract protons from electrons", "divide protons and neutrons", "multiply protons and neutrons", "add up protons and neutrons" ]
D
Relavent Documents: Document 0::: The atomic mass (ma or m) is the mass of an atom. Although the SI unit of mass is the kilogram (symbol: kg), atomic mass is often expressed in the non-SI unit dalton (symbol: Da) – equivalently, unified atomic mass unit (u). 1 Da is defined as 1/12 of the mass of a free carbon-12 atom at rest in its ground state. The protons and neutrons of the nucleus account for nearly all of the total mass of atoms, with the electrons and nuclear binding energy making minor contributions. Thus, the numeric value of the atomic mass when expressed in daltons has nearly the same value as the mass number. Conversion between mass in kilograms and mass in daltons can be done using the atomic mass constant m_u. The formula used for conversion is 1 Da = m_u = M_u/N_A = M(12C)/(12 N_A), where M_u is the molar mass constant, N_A is the Avogadro constant, and M(12C) is the experimentally determined molar mass of carbon-12. The relative isotopic mass (see section below) can be obtained by dividing the atomic mass ma of an isotope by the atomic mass constant m_u yielding a dimensionless value. Thus, the atomic mass of a carbon-12 atom is 12 Da by definition, but the relative isotopic mass of a carbon-12 atom is simply 12. The sum of relative isotopic masses of all atoms in a molecule is the relative molecular mass. The atomic mass of an isotope and the relative isotopic mass refer to a certain specific isotope of an element. Because substances are usually not isotopically pure, it is convenient to use the elemental atomic mass which is the average (mean) atomic mass of an element, weighted by the abundance of the isotopes. The dimensionless (standard) atomic weight is the weighted mean relative isotopic mass of a (typical naturally occurring) mixture of isotopes. The atomic mass of atoms, ions, or atomic nuclei is slightly less than the sum of the masses of their constituent protons, neutrons, and electrons, due to binding energy mass loss (per E = mc²).
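The weighted-mean ("standard") atomic weight described above can be checked with chlorine's two stable isotopes; the masses and abundances below are approximate reference figures:

```python
# Weighted-mean atomic weight from relative isotopic masses and abundances,
# as described in the passage; chlorine values are approximate reference data.
isotopes = [
    (34.96885, 0.7576),   # chlorine-35: relative isotopic mass (Da), abundance
    (36.96590, 0.2424),   # chlorine-37
]
atomic_weight = sum(mass * frac for mass, frac in isotopes)
assert abs(atomic_weight - 35.45) < 0.01   # accepted standard atomic weight ~35.45
```

Note the contrast with the answer's rule of thumb: the mass number A = Z + N is a whole number per isotope, while the standard atomic weight is the abundance-weighted mean across isotopes.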
Relative isotopic mass Relative isotopic mass (a property of a single atom) is not to be confused w Document 1::: In nuclear physics, the semi-empirical mass formula (SEMF) (sometimes also called the Weizsäcker formula, Bethe–Weizsäcker formula, or Bethe–Weizsäcker mass formula to distinguish it from the Bethe–Weizsäcker process) is used to approximate the mass of an atomic nucleus from its number of protons and neutrons. As the name suggests, it is based partly on theory and partly on empirical measurements. The formula represents the liquid-drop model proposed by George Gamow, which can account for most of the terms in the formula and gives rough estimates for the values of the coefficients. It was first formulated in 1935 by German physicist Carl Friedrich von Weizsäcker, and although refinements have been made to the coefficients over the years, the structure of the formula remains the same today. The formula gives a good approximation for atomic masses and thereby other effects. However, it fails to explain the existence of lines of greater binding energy at certain numbers of protons and neutrons. These numbers, known as magic numbers, are the foundation of the nuclear shell model. The liquid-drop model The liquid-drop model was first proposed by George Gamow and further developed by Niels Bohr and John Archibald Wheeler. It treats the nucleus as a drop of incompressible fluid of very high density, held together by the nuclear force (a residual effect of the strong force), there is a similarity to the structure of a spherical liquid drop. While a crude model, the liquid-drop model accounts for the spherical shape of most nuclei and makes a rough prediction of binding energy. The corresponding mass formula is defined purely in terms of the numbers of protons and neutrons it contains. 
The original Weizsäcker formula defines five terms: Volume energy, when an assembly of nucleons of the same size is packed together into the smallest volume, each interior nucleon has a certain number of other nucleons in contact with it. So, this nuclear energy is proportional to the vol Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: (a) increases; (b) decreases; (c) stays the same; (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests.
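The liquid-drop terms of the Weizsäcker formula discussed in Document 1 combine as in the sketch below; the coefficient set (in MeV) is one common fit — several alternatives appear in the literature:

```python
def binding_energy(A, Z):
    """Semi-empirical mass formula binding energy in MeV (one common fit)."""
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18  # illustrative coefficients
    N = A - Z
    B = (aV * A                                # volume term
         - aS * A ** (2 / 3)                   # surface term
         - aC * Z * (Z - 1) / A ** (1 / 3)     # Coulomb repulsion
         - aA * (N - Z) ** 2 / A)              # asymmetry term
    if N % 2 == 0 and Z % 2 == 0:              # pairing term: even-even
        B += aP / A ** 0.5
    elif N % 2 == 1 and Z % 2 == 1:            # odd-odd
        B -= aP / A ** 0.5
    return B

# Iron-56 (Z = 26): measured binding energy per nucleon is about 8.79 MeV,
# so the formula should land in that neighborhood.
b_per_nucleon = binding_energy(56, 26) / 56
assert 8.5 < b_per_nucleon < 9.1
```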
In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 3::: The elementary charge, usually denoted by e, is a fundamental physical constant, defined as the electric charge carried by a single proton or, equivalently, the magnitude of the negative electric charge carried by a single electron, which has charge −1 e. In the SI system of units, the value of the elementary charge is exactly defined as e = 1.602176634×10⁻¹⁹ coulombs, or 160.2176634 zeptocoulombs (zC). Since the 2019 redefinition of SI base units, the seven SI base units are defined by seven fundamental physical constants, of which the elementary charge is one. In the centimetre–gram–second system of units (CGS), the corresponding quantity is 4.8032047×10⁻¹⁰ statcoulombs. Robert A. Millikan and Harvey Fletcher's oil drop experiment first directly measured the magnitude of the elementary charge in 1909, differing from the modern accepted value by just 0.6%. Under assumptions of the then-disputed atomic theory, the elementary charge had also been indirectly inferred to ~3% accuracy from blackbody spectra by Max Planck in 1901 and (through the Faraday constant) at order-of-magnitude accuracy by Johann Loschmidt's measurement of the Avogadro number in 1865. As a unit In some natural unit systems, such as the system of atomic units, e functions as the unit of electric charge. The use of elementary charge as a unit was promoted by George Johnstone Stoney in 1874 for the first system of natural units, called Stoney units. Later, he proposed the name electron for this unit.
At the time, the particle we now call the electron was not yet discovered and the difference between the particle electron and the unit of charge electron was still blurred. Later, the name electron was assigned to the particle and the unit of charge e lost its name. However, the unit of energy electronvolt (eV) is a remnant of the fact that the elementary charge was once called electron. In other natural unit systems, the unit of charge is defined as √(ε₀ħc) with the result that e = √(4πα) √(ε₀ħc) ≈ 0.30282212 √(ε₀ħc), where α is the fine-structure constant, c is the speed of light, ε₀ is Document 4::: The atomic number or nuclear charge number (symbol Z) of a chemical element is the charge number of an atomic nucleus. For ordinary nuclei composed of protons and neutrons, this is equal to the proton number (np) or the number of protons found in the nucleus of every atom of that element. The atomic number can be used to uniquely identify ordinary chemical elements. In an ordinary uncharged atom, the atomic number is also equal to the number of electrons. For an ordinary atom which contains protons, neutrons and electrons, the sum of the atomic number Z and the neutron number N gives the atom's atomic mass number A. Since protons and neutrons have approximately the same mass (and the mass of the electrons is negligible for many purposes) and the mass defect of the nucleon binding is always small compared to the nucleon mass, the atomic mass of any atom, when expressed in daltons (making a quantity called the "relative isotopic mass"), is within 1% of the whole number A. Atoms with the same atomic number but different neutron numbers, and hence different mass numbers, are known as isotopes. A little more than three-quarters of naturally occurring elements exist as a mixture of isotopes (see monoisotopic elements), and the average isotopic mass of an isotopic mixture for an element (called the relative atomic mass) in a defined environment on Earth determines the element's standard atomic weight.
Historically, it was these atomic weights of elements (in comparison to hydrogen) that were the quantities measurable by chemists in the 19th century. The conventional symbol Z comes from the German word Zahl 'number', which, before the modern synthesis of ideas from chemistry and physics, merely denoted an element's numerical place in the periodic table, whose order was then approximately, but not completely, consistent with the order of the elements by atomic weights. Only after 1915, with the suggestion and evidence that this Z number was also the nuclear charge and a physi The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. How do you determine the atomic weight of an element? A. subtract protons from electrons B. divide protons and neutrons C. multiply protons and neutrons D. add up protons and neutrons Answer:
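The link mentioned in the elementary-charge passage between the Faraday constant and the Avogadro number is a one-line check, since both constants are exact in the 2019 SI:

```python
# F = N_A * e: the Faraday constant is the charge of one mole of elementary charges.
e = 1.602176634e-19     # elementary charge, C (exact since the 2019 SI redefinition)
N_A = 6.02214076e23     # Avogadro constant, 1/mol (exact since the 2019 SI redefinition)
F = N_A * e             # Faraday constant, C/mol
assert abs(F - 96485.332) < 0.01
```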
ai2_arc-156
multiple_choice
Repeating experiments improves the likelihood of accurate results because the overall results are
[ "less likely to prove the hypothesis correct.", "more likely to prove the hypothesis correct.", "less likely to be correct due to fewer errors being made.", "more likely to be correct due to fewer errors being made." ]
D
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: (a) increases; (b) decreases; (c) stays the same; (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria. Introduction Traditional exam script marking began in Cambridge 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.) Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental. 
The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel Document 2::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. 
The goal of knowledge space theory is to provide a language by which exams can communicate what the student can do and what the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of Document 3::: The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered. The 1995 version has 30 five-way multiple choice questions. Example question (question 4): Gender differences The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher. Document 4::: Computerized adaptive testing (CAT) is a form of computer-based test that adapts to the examinee's ability level. For this reason, it has also been called tailored testing.
In other words, it is a form of computer-administered test in which the next item or set of items selected to be administered depends on the correctness of the test taker's responses to the most recent items administered. How it works CAT successively selects questions for the purpose of maximizing the precision of the exam based on what is known about the examinee from previous questions. From the examinee's perspective, the difficulty of the exam seems to tailor itself to their level of ability. For example, if an examinee performs well on an item of intermediate difficulty, they will then be presented with a more difficult question. Or, if they performed poorly, they would be presented with a simpler question. Compared to static tests that nearly everyone has experienced, with a fixed set of items administered to all examinees, computer-adaptive tests require fewer test items to arrive at equally accurate scores. The basic computer-adaptive testing method is an iterative algorithm with the following steps: The pool of available items is searched for the optimal item, based on the current estimate of the examinee's ability The chosen item is presented to the examinee, who then answers it correctly or incorrectly The ability estimate is updated, based on all prior answers Steps 1–3 are repeated until a termination criterion is met Nothing is known about the examinee prior to the administration of the first item, so the algorithm is generally started by selecting an item of medium, or medium-easy, difficulty as the first item. As a result of adaptive administration, different examinees receive quite different tests. Although examinees are typically administered different tests, their ability scores are comparable to one another (i.e., as if they had received the same test, as is common The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. 
Repeating experiments improves the likelihood of accurate results because the overall results are A. less likely to prove the hypothesis correct. B. more likely to prove the hypothesis correct. C. less likely to be correct due to fewer errors being made. D. more likely to be correct due to fewer errors being made. Answer:
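The iterative item-selection loop described in the computerized adaptive testing excerpt above can be sketched as a small simulation. This is an illustrative toy, not a production CAT: the item bank is a list of hypothetical Rasch (1PL) difficulties, and the ability update is a deliberately crude shrinking-step rule rather than the maximum-likelihood estimator a real system would use:

```python
import math
import random

def rasch_prob(theta, b):
    """Probability of a correct response under a 1PL (Rasch) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def adaptive_test(true_theta, item_bank, n_items=10):
    """Minimal CAT loop: pick the item whose difficulty is closest to the
    current ability estimate, simulate a response, update the estimate."""
    theta_hat = 0.0                  # no prior information: start at medium
    remaining = list(item_bank)
    history = []
    for _ in range(min(n_items, len(remaining))):
        # 1. choose the most informative item (difficulty nearest theta_hat)
        item = min(remaining, key=lambda b: abs(b - theta_hat))
        remaining.remove(item)
        # 2. simulate the examinee's response
        correct = random.random() < rasch_prob(true_theta, item)
        history.append((item, correct))
        # 3. crude ability update: step toward/away with shrinking step size
        step = 1.0 / (1 + len(history))
        theta_hat += step if correct else -step
    return theta_hat, history
```

Because the next item depends on the running estimate, two examinees with different response patterns receive different item sequences, which is the defining behaviour of CAT.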
sciq-6096
multiple_choice
The largest region of each of the palatine bone is the what?
[ "big plate", "horizontal plate", "magnetic plate", "abnormal plate" ]
B
Relevant Documents: Document 0::: The horizontal plate of palatine bone is a quadrilateral part of the palatine bone, and has two surfaces and four borders. Surfaces The superior surface, concave from side to side, forms the back part of the floor of the nasal cavity. The inferior surface, slightly concave and rough, forms, with the corresponding surface of the opposite bone, the posterior fourth of the hard palate. Near its posterior margin may be seen a more or less marked transverse ridge for the attachment of part of the aponeurosis of the tensor veli palatini. Borders The anterior border is serrated. It articulates with the palatine process of maxilla. The posterior border is concave, free, and serves for the attachment of the soft palate. Its medial end is sharp and pointed, and, when united with that of the opposite bone, forms a projecting process, the posterior nasal spine for the attachment of the musculus uvulae. The lateral border is united with the lower margin of the perpendicular plate, and is grooved by the lower end of the greater palatine canal. The medial border, the thickest, is serrated for articulation with its fellow of the opposite side; its superior edge is raised into a ridge, which, united with the ridge of the opposite bone, forms the nasal crest for articulation with the posterior part of the lower edge of the vomer. Additional images Document 1::: The perpendicular plate of palatine bone is the vertical part of the palatine bone, and is thin, of an oblong form, and presents two surfaces and four borders. Surfaces The nasal surface exhibits at its lower part a broad, shallow depression, which forms part of the inferior meatus of the nose.
Immediately above this is a well-marked horizontal ridge, the conchal crest, for articulation with the inferior nasal concha; still higher is a second broad, shallow depression, which forms part of the middle meatus, and is limited above by a horizontal crest less prominent than the inferior, the ethmoidal crest, for articulation with the middle nasal concha. Above the ethmoidal crest is a narrow, horizontal groove, which forms part of the superior meatus. The maxillary surface is rough and irregular throughout the greater part of its extent, for articulation with the nasal surface of the maxilla; its upper and back part is smooth where it enters into the formation of the pterygopalatine fossa; it is also smooth in front, where it forms the posterior part of the medial wall of the maxillary sinus. On the posterior part of this surface is a deep vertical groove, converted into the pterygopalatine canal, by articulation with the maxilla; this canal transmits the descending palatine vessels, and the anterior palatine nerve. Borders The anterior border is thin and irregular; opposite the conchal crest is a pointed, projecting lamina, the maxillary process, which is directed forward, and closes in the lower and back part of the opening of the maxillary sinus. The posterior border presents a deep groove, the edges of which are serrated for articulation with the medial pterygoid plate of the sphenoid. This border is continuous above with the sphenoidal process; below it expands into the pyramidal process. The superior border supports the orbital process in front and the sphenoidal process behind. These processes are separated by the sphenopalatine notch, which is converted Document 2::: In anatomy, the palatine bones () are two irregular bones of the facial skeleton in many animal species, located above the uvula in the throat. Together with the maxillae, they comprise the hard palate. (Palate is derived from the Latin palatum.) 
Structure The palatine bones are situated at the back of the nasal cavity between the maxilla and the pterygoid process of the sphenoid bone. They contribute to the walls of three cavities: the floor and lateral walls of the nasal cavity, the roof of the mouth, and the floor of the orbits. They help to form the pterygopalatine and pterygoid fossae, and the inferior orbital fissures. Each palatine bone somewhat resembles the letter L, and consists of a horizontal plate, a perpendicular plate, and three projecting processes—the pyramidal process, which is directed backward and lateral from the junction of the two parts, and the orbital and sphenoidal processes, which surmount the vertical part, and are separated by a deep notch, the sphenopalatine notch. The two plates form the posterior part of the hard palate and the floor of the nasal cavity; anteriorly, they join with the maxillae. The two horizontal plates articulate with each other at the posterior part of the median palatine suture and more anteriorly with the maxillae at the transverse palatine suture. The human palatine articulates with six bones: the sphenoid, ethmoid, maxilla, inferior nasal concha, vomer and opposite palatine. There are two important foramina in the palatine bones that transmit nerves and blood vessels to this region: the greater and lesser palatine. The larger greater palatine foramen is located in the posterolateral region of each of the palatine bones, usually at the apex of the maxillary third molar. The greater palatine foramen transmits the greater palatine nerve and blood vessels. A smaller opening nearby, the lesser palatine foramen, transmits the lesser palatine nerve and blood vessels to the soft palate and tonsils. Both foramina ar Document 3::: The orbital process of the palatine bone is placed on a higher level than the sphenoidal, and is directed upward and lateralward from the front of the vertical part, to which it is connected by a constricted neck. 
It presents five surfaces, which enclose an air cell. Of these surfaces, three are articular and two non-articular. The articular surfaces are: the anterior or maxillary, directed forward, lateralward, and downward, of an oblong form, and rough for articulation with the maxilla the posterior or sphenoidal, directed backward, upward, and medialward; it presents the opening of the air cell, which usually communicates with the sphenoidal sinus; the margins of the opening are serrated for articulation with the sphenoidal concha the medial or ethmoidal, directed forward, articulates with the labyrinth of the ethmoid. In some cases the air cell opens on this surface of the bone and then communicates with the posterior ethmoidal cells. More rarely it opens on both surfaces, and then communicates with the posterior ethmoidal cells and the sphenoidal sinus. The non-articular surfaces are: the superior or orbital, directed upward and lateralward; it is triangular in shape, and forms the back part of the floor of the orbit; and the lateral, of an oblong form, directed toward the pterygopalatine fossa; it is separated from the orbital surface by a rounded border, which enters into the formation of the inferior orbital fissure. Additional images Document 4::: The pyramidal process of the palatine bone projects backward and lateralward from the junction of the horizontal and vertical parts, and is received into the angular interval between the lower extremities of the pterygoid plates. On its posterior surface is a smooth, grooved, triangular area, limited on either side by a rough articular furrow. The furrows articulate with the pterygoid plates, while the grooved intermediate area completes the lower part of the pterygoid fossa and gives origin to a few fibers of the Pterygoideus internus. 
The anterior part of the lateral surface is rough, for articulation with the tuberosity of the maxilla; its posterior part consists of a smooth triangular area which appears, in the articulated skull, between the tuberosity of the maxilla and the lower part of the lateral pterygoid plate, and completes the lower part of the infratemporal fossa. On the base of the pyramidal process, close to its union with the horizontal part, are the lesser palatine foramina for the transmission of the posterior and middle palatine nerves. Additional images The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The largest region of each of the palatine bone is the what? A. big plate B. horizontal plate C. magnetic plate D. abnormal plate Answer:
sciq-575
multiple_choice
What type of radio waves exist in the 540 to 1600 kHz frequency range?
[ "microwaves", "am radio waves", "fm radio waves", "sound waves" ]
B
Relevant Documents: Document 0::: Advances in Radio Science is a peer-reviewed open access scientific journal published by the German National Committee of the International Union of Radio Science. It covers radio science and radio engineering. It is abstracted and indexed in Scopus. See also Radio Science External links Electrical and electronic engineering journals Academic journals established in 2003 Multilingual journals Copernicus Publications academic journals Creative Commons Attribution-licensed journals Electromagnetism journals Document 1::: Radio Science is a quarterly peer-reviewed scientific journal published by Wiley-Blackwell on behalf of the American Geophysical Union and co-sponsored by the International Union of Radio Science. It contains original scientific contributions on radio-frequency electromagnetic propagation and its applications (radio science). Its full aims and scope read: Volumes for the years 1966 through 1968 were issued by the Environmental Science Services Administration (ESSA), the precursor of the National Oceanic and Atmospheric Administration (NOAA), in cooperation with the United States National Committee of the International Scientific Radio Union. See also Advances in Radio Science Document 2::: Terahertz radiation – also known as submillimeter radiation, terahertz waves, tremendously high frequency (THF), T-rays, T-waves, T-light, T-lux or THz – consists of electromagnetic waves within the ITU-designated band of frequencies from 0.3 to 3 terahertz (THz), although the upper boundary is somewhat arbitrary and is considered by some sources as 30 THz. One terahertz is 10^12 Hz or 1000 GHz. Wavelengths of radiation in the terahertz band correspondingly range from 1 mm to 0.1 mm = 100 µm. Because terahertz radiation begins at a wavelength of around 1 millimeter and proceeds into shorter wavelengths, it is sometimes known as the submillimeter band, and its radiation as submillimeter waves, especially in astronomy.
This band of electromagnetic radiation lies within the transition region between microwave and far infrared, and can be regarded as either. At some frequencies, terahertz radiation is strongly absorbed by the gases of the atmosphere, and in air is attenuated to zero within a few meters, so it is not practical for terrestrial radio communication at such frequencies. However, there are frequency windows in Earth's atmosphere, where the terahertz radiation could propagate up to 1 km or even longer depending on atmospheric conditions. The most important is the 0.3 THz band that will be used for 6G communications. It can penetrate thin layers of materials but is blocked by thicker objects. THz beams transmitted through materials can be used for material characterization, layer inspection, relief measurement, and as a lower-energy alternative to X-rays for producing high resolution images of the interior of solid objects. Terahertz radiation occupies a middle ground where the ranges of microwaves and infrared light waves overlap, known as the “terahertz gap”; it is called a “gap” because the technology for its generation and manipulation is still in its infancy. The generation and modulation of electromagnetic waves in this frequency range ceases to be pos Document 3::: IEEE Transactions on Microwave Theory and Techniques (T-MTT) is a monthly peer-reviewed scientific journal with a focus on that part of engineering and theory associated with microwave/millimeter-wave technology and components, electronic devices, guided wave structures and theory, electromagnetic theory, and Radio Frequency Hybrid and Monolithic Integrated Circuits, including mixed-signal circuits, from a few MHz to THz. T-MTT is published by the IEEE Microwave Theory and Techniques Society. T-MTT was established in 1953 as the Transactions of the IRE Professional Group on Microwave Theory and Techniques. 
From 1955 T-MTT was published as the IRE Transactions on Microwave Theory and Techniques, and it has carried its current title since 1963. The editor-in-chief is Jianguo Ma (Guangdong University of Technology). According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.599. Document 4::: Radiophysics (also written "radio physics") is a branch of physics focused on the theoretical and experimental study of certain kinds of radiation, its emission, propagation and interaction with matter. The term is used in the following major meanings: study of radio waves (the original area of research) study of radiation used in radiology study of other ranges of the spectrum of electromagnetic radiation in some specific applications Among the main applications of radiophysics are radio communications, radiolocation, radio astronomy and radiology. Branches Classical radiophysics deals with radio wave communications and detection Quantum radiophysics (physics of lasers and masers; Nikolai Basov was the founder of quantum radiophysics in the Soviet Union) Statistical radiophysics The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of radio waves exist in the 540 to 1600 kHz frequency range? A. microwaves B. am radio waves C. fm radio waves D. sound waves Answer:
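As a quick sanity check on the band named in the question above, free-space wavelength follows lambda = c/f, so the 540 to 1600 kHz AM broadcast band corresponds to wavelengths of roughly 555 m down to about 187 m:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_m(freq_hz):
    """Free-space wavelength in metres for a frequency in hertz."""
    return C / freq_hz

# AM broadcast band edges
lam_low = wavelength_m(540e3)    # ~555 m at 540 kHz
lam_high = wavelength_m(1600e3)  # ~187 m at 1600 kHz
```

These hundreds-of-metres wavelengths are why AM signals diffract around terrain and follow ground-wave propagation far better than the much shorter FM or microwave bands.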
sciq-7558
multiple_choice
What causes maple leaves to change their colors?
[ "radiation reactions", "chemical reactions", "factor reactions", "artificial preservatives" ]
B
Relevant Documents: Document 0::: Leaf flushing or leaf out is the production of a flush of new leaves typically produced simultaneously on all branches of a bare plant or tree. Young leaves often have less chlorophyll and the leaf flush may be white or red, the latter due to presence of pigments, particularly anthocyanins. Leaf flushing succeeds leaf fall, and is delayed by winter in the temperate zone or by extreme dryness in the tropics. Leaf fall and leaf flushing in tropical deciduous forests can overlap in some species, called leaf-exchanging species, producing new leaves during the same period when old leaves are shed or almost immediately after. Leaf-flushing may be synchronized among trees of a single species or even across species in an area. In the seasonal tropics, leaf flushing phenology may be influenced by herbivory and water stress. Red leaf flush In tropical regions, leaves often flush red when young and in the phase of expansion to mature size. Red flushing is frequent among woody species, reported from 20 to 40% of the woody species in a site in Costa Rica, in 36% of species in Barro Colorado Island, Panama, about 49% of species in Kibale National Park, Uganda, and in 83 of 250 species in Southern Yunnan, China. The red coloration is primarily due to the presence of anthocyanins. Various hypotheses have been advanced to explain red flushing. The herbivore defense hypothesis suggests that the red coloration may make the leaves less likely to be attacked by insects as they are cryptic to herbivores that are blind to the red part of the spectrum. It has also been hypothesised that the anthocyanins may reduce light stress or fungal attacks on leaves. A recent study in a tropical forest region of China provides support for the herbivore defense hypothesis, indicating that the red coloration of young leaves protects them from attacks of herbivorous insects through chemical defense as the red leaves have high concentrations of tannins and anthocyanins.
Document 1::: Ecophysiology (from Greek οἶκος, oikos, "house(hold)"; φύσις, physis, "nature, origin"; and -λογία, -logia), environmental physiology or physiological ecology is a biological discipline that studies the response of an organism's physiology to environmental conditions. It is closely related to comparative physiology and evolutionary physiology. Ernst Haeckel's coinage bionomy is sometimes employed as a synonym. Plants Plant ecophysiology is concerned largely with two topics: mechanisms (how plants sense and respond to environmental change) and scaling or integration (how the responses to highly variable conditions—for example, gradients from full sunlight to 95% shade within tree canopies—are coordinated with one another), and how their collective effect on plant growth and gas exchange can be understood on this basis. In many cases, animals are able to escape unfavourable and changing environmental factors such as heat, cold, drought or floods, while plants are unable to move away and therefore must endure the adverse conditions or perish (animals go places, plants grow places). Plants are therefore phenotypically plastic and have an impressive array of genes that aid in acclimating to changing conditions. It is hypothesized that this large number of genes can be partly explained by plant species' need to live in a wider range of conditions. Light Light is the food of plants, i.e. the form of energy that plants use to build themselves and reproduce. The organs harvesting light in plants are leaves and the process through which light is converted into biomass is photosynthesis. The response of photosynthesis to light is called light response curve of net photosynthesis (PI curve). The shape is typically described by a non-rectangular hyperbola. Three quantities of the light response curve are particularly useful in characterising a plant's response to light intensities.
The inclined asymptote has a positive slope representing the efficiency of light use, and is called quantum Document 2::: Witch's broom or witches' broom is a deformity in a woody plant, typically a tree, where the natural structure of the plant is changed. A dense mass of shoots grows from a single point, with the resulting structure resembling a broom or a bird's nest. It is sometimes caused by pathogens. Diseases with symptoms of witches' broom, caused by phytoplasmas or basidiomycetes, are economically important in a number of crop plants, including the cocoa tree Theobroma cacao, jujube (Ziziphus jujuba) and the timber tree Melia azedarach. Causes A tree's characteristic shape, or habit, is in part the product of auxins, hormones which control the growth of secondary apices. The growth of an offshoot is limited by the auxin, while that of the parent branch is not. In cases of witch's broom, the normal hierarchy of buds is interrupted, and apices grow indiscriminately. This can be caused by cytokinin, a phytohormone which interferes with growth regulation. The phenomenon can also be caused by other organisms, including fungi, oomycetes, insects, mites, nematodes, phytoplasmas, and viruses. The broom growths may last for many years, typically for the life of the host plant. If twigs of witch's brooms are grafted onto normal rootstocks, freak trees result, showing that the attacking organism has changed the inherited growth pattern of the twigs. Ecological role Witches' brooms provide nesting habitat for birds and mammals, such as the northern flying squirrel, which nests in them. See also Plant development § buds and shoots – atypical shoot development Epicormic shoot – a shoot that develops from buds under the bark Forest pathology Longan witches broom-associated virus Melampsora can cause different kinds of witch's brooms. 
Moniliophthora perniciosa, cause of witch's broom disease in cacao Phyllody, a related plant growth abnormality affecting flowers Document 3::: Colored music notation is a technique used to facilitate enhanced learning in young music students by adding visual color to written musical notation. It is based upon the concept that color can affect the observer in various ways, and combines this with standard learning of basic notation. Basis Viewing color has been widely shown to change an individual's emotional state and stimulate neurons. The Lüscher color test observes from experiments that when individuals are required to contemplate pure red for varying lengths of time, [the experiments] have shown that this color decidedly has a stimulating effect on the nervous system; blood pressure increases, and respiration rate and heart rate both increase. Pure blue, on the other hand, has the reverse effect; observers experience a decline in blood pressure, heart rate, and breathing. Given these findings, it has been suggested that the influence of colored musical notation would be similar. Music education In music education, color is typically used in method books to highlight new material. Stimuli received through several senses excite more neurons in several localized areas of the cortex, thereby reinforcing the learning process and improving retention. This information has been proven by other researchers; Chute (1978) reported that "elementary students who viewed a colored version of an instructional film scored significantly higher on both immediate and delayed tests than did students who viewed a monochrome version". Color studies Effect on achievement A researcher in this field, George L. Rogers is the Director of Music Education at Westfield State College. He is also the author of 25 articles in publications that include the Music Educators Journal, The Instrumentalist, and the Journal of Research in Music Education. In 1991, George L. 
Rogers did a study that researched the effect of color-coded notation on music achievement of elementary instrumental students. Rogers states that the color-co Document 4::: Autumn leaf color is a phenomenon that affects the normally green leaves of many deciduous trees and shrubs by which they take on, during a few weeks in the autumn season, various shades of yellow, orange, red, purple, and brown. The phenomenon is commonly called autumn colours or autumn foliage in British English and fall colors, fall foliage, or simply foliage in American English. In some areas of Canada and the United States, "leaf peeping" tourism is a major contribution to economic activity. This tourist activity occurs between the beginning of color changes and the onset of leaf fall, usually around September and October in the Northern Hemisphere and April to May in the Southern Hemisphere. Chlorophyll and the green/yellow/orange colors A green leaf is green because of the presence of a pigment known as chlorophyll, which is inside an organelle called a chloroplast. When abundant in the leaf's cells, as during the growing season, the chlorophyll's green color dominates and masks out the colors of any other pigments that may be present in the leaf. Thus, the leaves of summer are characteristically green. Chlorophyll has a vital function: it captures solar rays and uses the resulting energy in the manufacture of the plant's food: simple sugars which are produced from water and carbon dioxide. These sugars are the basis of the plant's nourishment: the sole source of the carbohydrates needed for growth and development. In their food-manufacturing process, the chlorophylls break down, thus are continually "used up". During the growing season, however, the plant replenishes the chlorophyll so that the supply remains high and the leaves stay green.
In late summer, with daylight hours shortening and temperatures cooling, the veins that carry fluids into and out of the leaf are gradually closed off as a layer of special cork cells forms at the base of each leaf. As this cork layer develops, water and mineral intake into the leaf is reduced, slowly at first, and the The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What causes maple leaves to change their colors? A. radiation reactions B. chemical reactions C. factor reactions D. artificial preservatives Answer:
sciq-2222
multiple_choice
A flower's colors come from what part of the plant?
[ "Internode", "vacuole", "glycogen", "Petiole" ]
B
Relevant Documents: Document 0::: Picotee describes flowers whose edge is a different colour than the flower's base colour. The word originates from the French picoté, meaning 'marked with points'. Examples Document 1::: White flower colour is related to the absence or reduction of the anthocyanidin content. Unlike other colors, white colour is not induced by pigments. Several white plant tissues are principally equipped with the complete machinery for anthocyanin biosynthesis including the expression of regulatory genes. Nevertheless, they are unable to accumulate red or blue pigments, for example Dahlia 'Seattle' petals showing a white tip. Several studies have revealed a further reduction of the anthocyanidin to colorless epicatechin by the enzyme anthocyanidin reductase (ANR). Cultivation & Modification of Colour Many external factors can influence colour: light, temperature, pH, sugars and metals. There is a method to turn petunia flowers from white to transparent. The petunia flower is immersed into a flask of water, connected to a vacuum pump, after which the flower appeared colourless. The white colour is expressed by the air present in the vacuoles that absorbs the light; without air the flower loses the white colour. There is an increasing interest in flower colour, since some colorations are currently unavailable in plants. Ornamental companies create new flower colour by classical and mutation breeding and biotechnological approaches. For example, white bracts in Poinsettia are obtained by high frequency irradiation. See also Basics of blue flower colouration Document 2::: Floral diagram is a graphic representation of flower structure. It shows the number of floral organs, their arrangement and fusion. Different parts of the flower are represented by their respective symbols. Floral diagrams are useful for flower identification or can help in understanding angiosperm evolution.
They were introduced in the late 19th century and are generally attributed to A. W. Eichler. They are typically used with the floral formula of that flower to study its morphology. History In the 19th century, two contrasting methods of describing the flower were introduced: the textual floral formulae and pictorial floral diagrams. Floral diagrams are credited to A. W. Eichler, his extensive work Blüthendiagramme (1875, 1878) remains a valuable source of information on floral morphology. Eichler inspired later generation of scientists, including John Henry Schaffner. Diagrams were included e.g. in Types of Floral Mechanism by Church (1908). They were used in different textbooks, e.g. Organogenesis of Flowers by Sattler (1973), Botanische Bestimmungsübungen by Stützel (2006) or Plant Systematics by Simpson (2010). Floral Diagrams (2010) by Ronse De Craene followed Eichler’s approach using the contemporary APG II system. Basic characteristics and significance A floral diagram is a schematic cross-section through a young flower. It may be also defined as “projection of the flower perpendicular to its axis”. It usually shows the number of floral parts, their sizes, relative positions and fusion. Different organs are represented by distinguishable symbols, which may be uniform for one organ type, or may reflect concrete morphology. The diagram may also include symbols that don’t represent physical structures, but carry additional information (e.g. symmetry plane orientation). There is no agreement on how floral diagrams should be drawn, it depends on the author whether it is just a rough representation, or whether structural details of the flower are included. Document 3::: In botany, floral morphology is the study of the diversity of forms and structures presented by the flower, which, by definition, is a branch of limited growth that bears the modified leaves responsible for reproduction and protection of the gametes, called floral pieces. 
Fertile leaves or sporophylls carry sporangia, which will produce male and female gametes and therefore are responsible for producing the next generation of plants. The sterile leaves are modified leaves whose function is to protect the fertile parts or to attract pollinators. The branch of the flower that joins the floral parts to the stem is a shaft called the pedicel, which normally dilates at the top to form the receptacle in which the various floral parts are inserted. All spermatophytes ("seed plants") possess flowers as defined here (in a broad sense), but the internal organization of the flower is very different in the two main groups of spermatophytes: living gymnosperms and angiosperms. Gymnosperms may possess flowers that are gathered in strobili, or the flower itself may be a strobilus of fertile leaves. By contrast, a typical angiosperm flower possesses verticils or ordered whorls that, from the outside in, are composed first of sterile parts, commonly called sepals (if their main function is protective) and petals (if their main function is to attract pollinators), and then the fertile parts, with reproductive function, which are composed of verticils or whorls of stamens (which carry the male gametes) and finally carpels (which enclose the female gametes). The arrangement of the floral parts on the axis, the presence or absence of one or more floral parts, the size, the pigmentation and the relative arrangement of the floral parts are responsible for the existence of a great variety of flower types. Such diversity is particularly important in phylogenetic and taxonomic studies of angiosperms. The evolutionary interpretation of the different flower types takes into account aspects of Document 4::: Flower differentiation is a plant process by which the shoot apical meristem changes its anatomy to generate a flower or inflorescence in lieu of other structures.
Anatomical changes begin at the edge of the meristem, generating first the outer whorls of the flower - the calyx and the corolla - and later the inner whorls of the flower, the androecium and gynoecium. Flower differentiation can take from only a few days (in annual plants) to 4–11 months (in fruit crops). The process is preceded by flower induction. Morphological characteristics Flower bud differentiation was seen to have five different stages in the flower M. sinostellata. Undifferentiated stage: The flower bud appears yellow-green, has no scale hairs and is smooth outside. Its differentiation primordium cells are small and arranged closely. Early flower bud differentiation stage: The bud's basal region begins to expand and develops yellow-brown hairs on its outer surface. The bracts inside the growing bud begin to stratify. Cells are still closely arranged and the floral primordium becomes larger. Petal primordium differentiation stage: At this stage, the bud becomes more distinct than the leaf primordia by becoming longer and wider. The bud develops a couple of spathe-like bracts with scale hairs. The start of petal primordium differentiation is demonstrated by the wave-like surface of the tip of the developing floral meristem. Stamen primordium differentiation stage: The bud has expanded and the outer hairs mentioned earlier have become denser. The inner bud's differentiation region forms a rounded hump shape with a smooth tip. The bud meristem's inner cells are separate from each other while the outer cells stay small and compact. Rows of small spots were found on the inside of the petal primordia around the bottom of the meristem. Pistil primordium differentiation stage: The beginning of pistil primordium differentiation is indicated by the multiple round bulges in the upper region of The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A flower's colors come from what part of the plant? A. internode B. vacuole C. glycogen D. petiole Answer:
scienceQA-12276
multiple_choice
Select the living thing.
[ "yucca", "candle", "windmill", "airplane" ]
A
Yucca is a living thing. Yucca grows and responds to its environment. It needs food and water. It is made up of many cells. Yucca is a plant. It uses water, air, and sunlight to make food. A candle is not a living thing. A candle does not have all the traits of a living thing. It gives off light and heat, but it does not need food or water to survive. An airplane is not a living thing. An airplane does not have all the traits of a living thing. It needs energy to fly, but it does not eat food. Airplanes get energy from gasoline or other fuel. They do not grow. A windmill is not a living thing. A windmill does not have all the traits of a living thing. It moves in the wind, but it does not grow. It does not need food or water.
Relevant Documents: Document 0::: Macroflora is a term used for all the plants occurring in a particular area that are large enough to be seen with the naked eye. It is usually synonymous with the flora and can be contrasted with the microflora, a term used for all the bacteria and other microorganisms in an ecosystem. Macroflora is also an informal term used by many palaeobotanists to refer to an assemblage of plant fossils as preserved in the rock. This is in contrast to the flora, which in this context refers to the assemblage of living plants that were growing in a particular area, whose fragmentary remains became entrapped within the sediment from which the rock was formed and thus became the macroflora. Document 1::: Braiding Sweetgrass: Indigenous Wisdom, Scientific Knowledge, and the Teachings of Plants is a 2013 nonfiction book by Potawatomi professor Robin Wall Kimmerer, about the role of Indigenous knowledge as an alternative or complementary approach to Western mainstream scientific methodologies. Braiding Sweetgrass explores reciprocal relationships between humans and the land, with a focus on the role of plants and botany in both Native American and Western traditions. The book received largely positive reviews, and has appeared on several bestseller lists. Kimmerer is known for her scholarship on traditional ecological knowledge, ethnobotany, and moss ecology. Contents Braiding Sweetgrass: Indigenous Wisdom, Scientific Knowledge, and the Teachings of Plants is about botany and the relationship to land in Native American traditions. Kimmerer, who is an enrolled member of the Citizen Potawatomi Nation, writes about her personal experiences working with plants and reuniting with her people's cultural traditions. She also presents the history of the plants and botany from a scientific perspective. The series of essays in five sections begins with "Planting Sweetgrass", and progresses through "Tending," "Picking," "Braiding," and "Burning Sweetgrass."
Environmental Philosophy says that this progression of headings "signals how Kimmerer's book functions not only as natural history but also as ceremony, the latter of which plays a decisive role in how Kimmerer comes to know the living world." Kimmerer describes Braiding Sweetgrass as "[A] braid of stories ... woven from three strands: indigenous ways of knowing, scientific knowledge, and the story of an Anishinabeckwe scientist trying to bring them together in service to what matters most." She also calls the work "an intertwining of science, spirit, and story." American Indian Quarterly writes that Braiding Sweetgrass is a book about traditional ecological knowledge and environmental humanities. Kimmerer combines her Document 2::: Phytotechnology (; ) implements solutions to scientific and engineering problems in the form of plants. It is distinct from ecotechnology and biotechnology as these fields encompass the use and study of ecosystems and living beings, respectively. Current study of this field has mostly been directed into contaminate removal (phytoremediation), storage (phytosequestration) and accumulation (see hyperaccumulators). Plant-based technologies have become alternatives to traditional cleanup procedures because of their low capital costs, high success rates, low maintenance requirements, end-use value, and aesthetic nature. Overview Phytotechnology is the application of plants to engineering and science problems. Phytotechnology uses ecosystem services to provide for a specifically engineered solution to a problem. Ecosystem services, broadly defined fall into four broad categories: provisioning (i.e. production of food and water), regulating (i.e. the control of climate and disease) supporting (i.e. nutrient cycles and crop pollination), and cultural (i.e. spiritual and recreational benefits). Many times only one of these ecosystem services is maximized in the design of the space. 
For instance a constructed wetland may attempt to maximize the cooling properties of the system to treat water from a wastewater treatment facility before introduction to a river. The designed benefit is a reduction of water temperature for the river system while the constructed wetland itself provides habitat and food for wildlife as well as walking trails for recreation. Most phytotechnology has been focused on the abilities of plants to remove pollutants from the environment. Other technologies such as green roofs, green walls and bioswales are generally considered phytotechnology. Taking a broad view: even parks and landscaping could be viewed as phytotechnology. However, there is very little consensus over a definition of phytotechnology even within the field. The Phytotechnology Technical Document 3::: The Desert Garden Conservatory is a large botanical greenhouse and part of the Huntington Library, Art Collections and Botanical Gardens, in San Marino, California. It was constructed in 1985. The Desert Garden Conservatory is adjacent to the Huntington Desert Garden itself. The garden houses one of the most important collections of cacti and other succulent plants in the world, including a large number of rare and endangered species. The Desert Garden Conservatory serves The Huntington and public communities as a conservation facility, research resource and genetic diversity preserve. John N. Trager is the Desert Collection curator. There are an estimated 10,000 succulents worldwide, about 1,500 of them classified as cacti. The Huntington Desert Garden Conservatory now contains more than 2,200 accessions, representing more than 43 plant families, 1,261 different species and subspecies, and 246 genera. The plant collection contains examples from the world's major desert regions, including the southern United States, Argentina, Bolivia, Chile, Brazil, Canary Islands, Madagascar, Malawi, Mexico and South Africa. 
The Desert Collection plays a critical role as a repository of biodiversity, in addition to serving as an outreach and education center. Propagation program to save rare and endangered plants Some studies estimate that as many as two-thirds of the world's flora and fauna may become extinct during the course of the 21st century, the result of global warming and encroaching development. Scientists alarmed by these prospects are working diligently to propagate plants outside their natural habitats, in protected areas. Ex-situ cultivation, as this practice is known, can serve as a stopgap for plants that will otherwise be lost to the world as their habitats disappear. To this end, The Huntington has a program to protect and propagate endangered plant species, designated International Succulent Introductions (ISI). The aim of the ISI program is to pr Document 4::: In botany, a virtual herbarium is a herbarium in a digitized form. That is, it concerns a collection of digital images of preserved plants or plant parts. Virtual herbaria are often established to improve availability of specimens to a wider audience. However, there are digital herbaria that are not suitable for internet access because of the high resolution of scans and resulting large file sizes (several hundred megabytes per file). Additional information about each specimen, such as the location, the collector, and the botanical name, is attached to every specimen. Frequently, further details such as related species and growth requirements are mentioned. Specimen imaging The standard hardware used for herbarium specimen imaging is the "HerbScan" scanner. It is an inverted flat-bed scanner which raises the specimen up to the scanning surface. This technology was developed because it is standard practice to never turn a herbarium specimen upside-down. Alternatively, some herbaria employ a flat-bed book scanner or a copy stand to achieve the same effect.
A small color chart and a ruler must be included on a herbarium sheet when it is imaged. JSTOR Plant Science requires that the ruler bear the herbarium name and logo, and that a ColorChecker chart be used for any specimens to be contributed to the Global Plants Initiative (GPI). Uses Virtual herbaria are established in part to increase the longevity of specimens. Major herbaria participate in international loan programs, where a researcher can request specimens to be shipped in for study. This shipping contributes to the wear and tear of specimens. If, however, digital images are available, images of the specimens can be sent electronically. These images may be a sufficient substitute for the specimens themselves, or alternatively, the researcher can use the images to "preview" the specimens, to decide which ones should be sent out for further study. This process cuts down on the shipping, and thus the wear and The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Select the living thing. A. yucca B. candle C. windmill D. airplane Answer:
sciq-1860
multiple_choice
The body contains how many types of muscle tissue?
[ "three", "two", "four", "seven" ]
A
Relevant Documents: Document 0::: H2.00.04.4.01001: Lymphoid tissue H2.00.05.0.00001: Muscle tissue H2.00.05.1.00001: Smooth muscle tissue H2.00.05.2.00001: Striated muscle tissue H2.00.06.0.00001: Nerve tissue H2.00.06.1.00001: Neuron H2.00.06.2.00001: Synapse H2.00.06.2.00001: Neuroglia h3.01: Bones h3.02: Joints h3.03: Muscles h3.04: Alimentary system h3.05: Respiratory system h3.06: Urinary system h3.07: Genital system h3.08: Document 1::: Outline h1.00: Cytology h2.00: General histology H2.00.01.0.00001: Stem cells H2.00.02.0.00001: Epithelial tissue H2.00.02.0.01001: Epithelial cell H2.00.02.0.02001: Surface epithelium H2.00.02.0.03001: Glandular epithelium H2.00.03.0.00001: Connective and supportive tissues H2.00.03.0.01001: Connective tissue cells H2.00.03.0.02001: Extracellular matrix H2.00.03.0.03001: Fibres of connective tissues H2.00.03.1.00001: Connective tissue proper H2.00.03.1.01001: Ligaments H2.00.03.2.00001: Mucoid connective tissue; Gelatinous connective tissue H2.00.03.3.00001: Reticular tissue H2.00.03.4.00001: Adipose tissue H2.00.03.5.00001: Cartilage tissue H2.00.03.6.00001: Chondroid tissue H2.00.03.7.00001: Bone tissue; Osseous tissue H2.00.04.0.00001: Haemotolymphoid complex H2.00.04.1.00001: Blood cells H2.00.04.1.01001: Erythrocyte; Red blood cell H2.00.04.1.02001: Leucocyte; White blood cell H2.00.04.1.03001: Platelet; Thrombocyte H2.00.04.2.00001: Plasma H2.00.04.3.00001: Blood cell production H2.00.04.4.00001: Postnatal sites of haematopoiesis H2.00.04.4.01001: Lymphoid tissue H2.00.05.0.00001: Muscle tissue H2.00.05.1.00001: Smooth muscle tissue Document 2::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research.
Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New Document 3::: Myology is the study of the muscular system, including the study of the structure, function and diseases of muscle. 
The muscular system consists of skeletal muscle, which contracts to move or position parts of the body (e.g., the bones that articulate at joints), and smooth and cardiac muscle, which propel, expel or control the flow of fluids and contained substances. See also Myotomy Oral myology Document 4::: This article contains a list of organs of the human body. A widely cited figure is 79 organs (this number rises if each bone and muscle is counted as an organ in its own right, which is becoming more common practice); however, there is no universal standard definition of what constitutes an organ, and some tissue groups' status as one is debated. Since there is no single standard definition of what an organ is, the number of organs varies depending on how one defines an organ. For example, this list contains more than 79 organs (about 103). It is still not clear which definition of an organ is used for all the organs in this list; it seems to have been compiled based on which Wikipedia articles were available on organs.
Musculoskeletal system Skeleton Joints Ligaments Muscular system Tendons Digestive system Mouth Teeth Tongue Lips Salivary glands Parotid glands Submandibular glands Sublingual glands Pharynx Esophagus Stomach Small intestine Duodenum Jejunum Ileum Large intestine Cecum Ascending colon Transverse colon Descending colon Sigmoid colon Rectum Liver Gallbladder Mesentery Pancreas Anal canal Appendix Respiratory system Nasal cavity Pharynx Larynx Trachea Bronchi Bronchioles and smaller air passages Lungs Muscles of breathing Urinary system Kidneys Ureter Bladder Urethra Reproductive systems Female reproductive system Internal reproductive organs Ovaries Fallopian tubes Uterus Cervix Vagina External reproductive organs Vulva Clitoris Male reproductive system Internal reproductive organs Testicles Epididymis Vas deferens Prostate External reproductive organs Penis Scrotum Endocrine system Pituitary gland Pineal gland Thyroid gland Parathyroid glands Adrenal glands Pancreas Circulatory system Circulatory system Heart Arteries Veins Capillaries Lymphatic system Lymphatic vessel Lymph node Bone marrow Thymus Spleen Gut-associated lymphoid tissue Tonsils Interstitium Nervous system Central nervous system The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The body contains how many types of muscle tissue? A. three B. two C. four D. seven Answer:
ai2_arc-629
multiple_choice
Which of the following has the greatest mass?
[ "star", "moon", "planet", "galaxy" ]
D
Relevant Documents: Document 0::: The Leonard–Merritt mass estimator is a formula for estimating the mass of a spherical stellar system using the apparent (angular) positions and proper motions of its component stars. The distance to the stellar system must also be known. Like the virial theorem, the Leonard–Merritt estimator yields correct results regardless of the degree of velocity anisotropy. Its statistical properties are superior to those of the virial theorem. However, it requires that two components of the velocity be known for every star, rather than just one as for the virial theorem. The estimator has the general form of an ensemble average of the product of each star's projected radius with its squared velocity components, divided by the gravitational constant. The angle brackets denote averages over the ensemble of observed stars. M(r) is the mass contained within a distance r from the center of the stellar system; R is the projected distance of a star from the apparent center; v_R and v_T are the components of a star's velocity parallel to, and perpendicular to, the apparent radius vector; and G is the gravitational constant. Like all estimators based on moments of the Jeans equations, the Leonard–Merritt estimator requires an assumption about the relative distribution of mass and light. As a result, it is most useful when applied to stellar systems that have one of two properties: all or almost all of the mass resides in a central object, or the mass is distributed in the same way as the observed stars. Case (1) applies to the nucleus of a galaxy containing a supermassive black hole. Case (2) applies to a stellar system composed entirely of luminous stars (i.e. no dark matter or black holes). In a cluster with constant mass-to-light ratio and total mass M, the Leonard–Merritt estimator takes one such form; if instead all the mass is located in a central point of mass M•, it takes a second, closely related form. In its second form, the Leonard–Merritt estimator has been successfully used to measure the mass of the supermassive black hole at the center of the Milky Way galaxy.
See also Globular cluster Proper motion Virial theorem Document 1::: A mass deficit is the amount of mass (in stars) that has been removed from the center of a galaxy, presumably by the action of a binary supermassive black hole. The density of stars increases toward the center in most galaxies. In small galaxies, this increase continues into the very center. In large galaxies, there is usually a "core", a region near the center where the density is constant or slowly rising. The size of the core – the "core radius" – can be a few hundred parsecs in large elliptical galaxies. The greatest observed stellar cores reach 3.2 to 5.7 kiloparsecs in radius. It is believed that cores are produced by binary supermassive black holes (SMBHs). Binary SMBHs form during the merger of two galaxies. If a star passes near the massive binary, it will be ejected by a process called the gravitational slingshot. This ejection continues until most of the stars near the center of the galaxy have been removed. The result is a low-density core. Such cores are ubiquitous in giant elliptical galaxies. The mass deficit is defined as the amount of mass that was removed in creating the core. Mathematically, the mass deficit is defined as Mdef = 4π ∫₀^Rc [ρi(r) − ρ(r)] r² dr, where ρi is the original density, ρ is the observed density, and Rc is the core radius. In practice, the core-Sérsic model can be used to help quantify the deficits. Observed mass deficits are typically in the range of one to a few times the mass of the central SMBH, and observed core radii are comparable to the influence radii of the central SMBH. These properties are consistent with what is predicted in theoretical models of core formation and lend support to the hypothesis that all bright galaxies once contained binary SMBHs at their centers. It is not known whether most galaxies still contain massive binaries, or whether the two black holes have coalesced. Both possibilities are consistent with the presence of mass deficits.
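The mass-deficit definition above (4π times the integral of the density difference ρi(r) − ρ(r), weighted by r², out to the core radius) can be evaluated numerically. The sketch below uses an illustrative r^−1.5 cusp flattened into a constant-density core; the profiles and parameter values are assumptions for demonstration, not fits to any real galaxy:

```python
import math

def mass_deficit(rho_initial, rho_observed, r_core, n=10_000):
    """Mass deficit Mdef = 4*pi * integral_0^Rc [rho_i(r) - rho(r)] r^2 dr,
    evaluated with the midpoint rule (which avoids the r = 0 endpoint)."""
    dr = r_core / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        total += (rho_initial(r) - rho_observed(r)) * r * r * dr
    return 4.0 * math.pi * total

# Illustrative (assumed) profiles: an initial rho ~ r^-1.5 cusp that has been
# flattened into a constant-density core inside r_core = 100 pc.
r_c = 100.0                                    # core radius [pc]
rho_i = lambda r: (r / r_c) ** -1.5            # original cusp [Msun / pc^3]
rho_obs = lambda r: 1.0                        # flat core at the edge density

print(mass_deficit(rho_i, rho_obs, r_c))       # mass removed, in solar masses
```

For these particular profiles the integral can also be done analytically (it equals 4π Rc³/3), which makes a convenient check on the numerics.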
Document 2::: To help compare different orders of magnitude, the following lists describe various mass levels between 10⁻⁵⁹ kg and 10⁵² kg. The least massive thing listed here is a graviton, and the most massive thing is the observable universe. Typically, an object having greater mass will also have greater weight (see mass versus weight), especially if the objects are subject to the same gravitational field strength. Units of mass The table at right is based on the kilogram (kg), the base unit of mass in the International System of Units (SI). The kilogram is the only standard unit to include an SI prefix (kilo-) as part of its name. The gram (10⁻³ kg) is an SI derived unit of mass. However, the names of all SI mass units are based on the gram, rather than on the kilogram; thus 10³ kg is a megagram (10⁶ g), not a *kilokilogram. The tonne (t) is an SI-compatible unit of mass equal to a megagram (Mg), or 10³ kg. The unit is in common use for masses above about 10³ kg and is often used with SI prefixes. For example, a gigagram (Gg) or 10⁹ g is 10³ tonnes, commonly called a kilotonne. Other units Other units of mass are also in use. Historical units include the stone, the pound, the carat, and the grain. For subatomic particles, physicists use the mass equivalent to the energy represented by an electronvolt (eV). At the atomic level, chemists use the mass of one-twelfth of a carbon-12 atom (the dalton). Astronomers use the mass of the sun (M☉). The least massive things: below 10⁻²⁴ kg Unlike other physical quantities, mass–energy does not have an a priori expected minimal quantity, or an observed basic quantum as in the case of electric charge. Planck's law allows for the existence of photons with arbitrarily low energies. Consequently, there can only ever be an experimental upper bound on the mass of a supposedly massless particle; in the case of the photon, such a confirmed upper bound exists.
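The gram-based prefix relations described above (a megagram is a tonne is 10³ kg; a gigagram is 10³ tonnes) are easy to sanity-check in a few lines. The prefix table and unit symbols are standard SI; the helper function itself is just an illustrative sketch:

```python
# Gram-based SI mass units: prefixes attach to the gram, not the kilogram.
PREFIX_TO_GRAMS = {
    "g": 1e0,    # gram
    "kg": 1e3,   # kilogram (the SI base unit of mass)
    "Mg": 1e6,   # megagram
    "Gg": 1e9,   # gigagram
    "t": 1e6,    # tonne, defined as one megagram
}

def convert(value, from_unit, to_unit):
    """Convert a mass between gram-based units by passing through grams."""
    return value * PREFIX_TO_GRAMS[from_unit] / PREFIX_TO_GRAMS[to_unit]

print(convert(1, "Mg", "kg"))   # 1 megagram (tonne) = 10^3 kg
print(convert(1, "Gg", "t"))    # 1 gigagram = 10^3 tonnes (a "kilotonne")
```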
Document 3::: In astrophysics, the mass–luminosity relation is an equation giving the relationship between a star's mass and its luminosity, first noted by Jakob Karl Ernst Halm. The relationship is represented by the equation: L/L⊙ = (M/M⊙)^a, where L⊙ and M⊙ are the luminosity and mass of the Sun and 1 < a < 6. The value a = 3.5 is commonly used for main-sequence stars. This equation and the usual value of a = 3.5 apply only to main-sequence stars with masses 2M⊙ < M < 55M⊙ and do not apply to red giants or white dwarfs. As a star approaches the Eddington luminosity, a = 1. In summary, the relations for stars with different ranges of mass are, to a good approximation, as follows: L/L⊙ ≈ 0.23(M/M⊙)^2.3 for M < 0.43M⊙; L/L⊙ = (M/M⊙)^4 for 0.43M⊙ < M < 2M⊙; L/L⊙ ≈ 1.4(M/M⊙)^3.5 for 2M⊙ < M < 55M⊙; and L/L⊙ ≈ 32000(M/M⊙) for M > 55M⊙. For stars with masses less than 0.43M⊙, convection is the sole energy transport process, so the relation changes significantly. For stars with masses M > 55M⊙ the relationship flattens out and becomes L ∝ M, but in fact those stars do not last long because they are unstable and quickly lose matter through intense stellar winds. It can be shown that this change is due to an increase in radiation pressure in massive stars. These equations are determined empirically by measuring the masses of stars in binary systems whose distance is known via standard parallax measurements or other techniques. After enough stars are plotted, they form a line on a logarithmic plot, and the slope of the line gives the proper value of a. Another form, valid for K-type main-sequence stars, that avoids the discontinuity in the exponent has been given by Cuntz & Wang (with M expressed in M⊙). This relation is based on data by Mann and collaborators, who used moderate-resolution spectra of nearby late-K and M dwarfs with known parallaxes and interferometrically determined radii to refine their effective temperatures and luminosities. Those stars have also been used as a calibration sample for Kepler candidate objects.
Besides avoiding the discontinuity in the exponent at M = 0.43M⊙, the relation also recovers a = 4.0 for M ≃ 0.85M⊙. The mass/lu Document 4::: Astrophysics is a science that employs the methods and principles of physics and chemistry in the study of astronomical objects and phenomena. As one of the founders of the discipline, James Keeler, said, Astrophysics "seeks to ascertain the nature of the heavenly bodies, rather than their positions or motions in space–what they are, rather than where they are." Among the subjects studied are the Sun (solar physics), other stars, galaxies, extrasolar planets, the interstellar medium and the cosmic microwave background. Emissions from these objects are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, astrophysicists apply concepts and methods from many disciplines of physics, including classical mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics. In practice, modern astronomical research often involves a substantial amount of work in the realms of theoretical and observational physics. Some areas of study for astrophysicists include their attempts to determine the properties of dark matter, dark energy, black holes, and other celestial bodies; and the origin and ultimate fate of the universe. Topics also studied by theoretical astrophysicists include Solar System formation and evolution; stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity, special relativity, quantum and physical cosmology, including string cosmology and astroparticle physics. History Astronomy is an ancient science, long separated from the study of terrestrial physics. 
In the Aristotelian worldview, bodies in the sky appeared to be unchanging spheres whose only motion was uniform motion in a circle, while the earthl The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which of the following has the greatest mass? A. star B. moon C. planet D. galaxy Answer:
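The mass–luminosity relation discussed in the retrieved documents above is a simple piecewise power law, so it translates directly into code. The coefficients below (0.23, 1.0, 1.4 and 32000, with break points at 0.43, 2 and 55 solar masses) are the commonly quoted approximate fits and should be treated as assumptions of this sketch:

```python
def luminosity_ratio(m):
    """Approximate main-sequence luminosity L/Lsun for a star of mass m
    (in solar masses), using commonly quoted piecewise power-law fits."""
    if m < 0.43:
        return 0.23 * m ** 2.3     # fully convective low-mass stars
    elif m < 2.0:
        return m ** 4.0            # Sun-like stars
    elif m < 55.0:
        return 1.4 * m ** 3.5      # the classic a = 3.5 regime
    else:
        return 32000.0 * m         # near-Eddington: L flattens to L ∝ M

print(luminosity_ratio(1.0))    # the Sun: exactly 1 in these units
print(luminosity_ratio(10.0))   # a ~10 Msun star: a few thousand Lsun
```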
sciq-5742
multiple_choice
What members of an ecosystem food chain take in food by eating producers or other living things?
[ "consumers", "decomposers", "primary producers", "insectivores" ]
A
Relevant Documents: Document 0::: The trophic level of an organism is the position it occupies in a food web. A food chain is a succession of organisms that eat other organisms and may, in turn, be eaten themselves. The trophic level of an organism is the number of steps it is from the start of the chain. A food web starts at trophic level 1 with primary producers such as plants, can move to herbivores at level 2, carnivores at level 3 or higher, and typically finishes with apex predators at level 4 or 5. The path along the chain can form either a one-way flow or a food "web". Ecological communities with higher biodiversity form more complex trophic paths. The word trophic derives from the Greek τροφή (trophē) referring to food or nourishment. History The concept of trophic level was developed by Raymond Lindeman (1942), based on the terminology of August Thienemann (1926): "producers", "consumers", and "reducers" (modified to "decomposers" by Lindeman). Overview The three basic ways in which organisms get food are as producers, consumers, and decomposers. Producers (autotrophs) are typically plants or algae. Plants and algae do not usually eat other organisms, but pull nutrients from the soil or the ocean and manufacture their own food using photosynthesis. For this reason, they are called primary producers. In this way, it is energy from the sun that usually powers the base of the food chain. An exception occurs in deep-sea hydrothermal ecosystems, where there is no sunlight. Here primary producers manufacture food through a process called chemosynthesis. Consumers (heterotrophs) are species that cannot manufacture their own food and need to consume other organisms. Animals that eat primary producers (like plants) are called herbivores. Animals that eat other animals are called carnivores, and animals that eat both plants and other animals are called omnivores.
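The level-counting rule in the passage above (producers at level 1, each consumer one step above what it eats) can be sketched for a toy food web. The species, the links between them, and the choice to average over a mixed diet are illustrative assumptions, and the recursion assumes the web has no cycles:

```python
# Toy food web: each species maps to the set of things it eats.
# Names and links are illustrative assumptions, not real ecological data.
food_web = {
    "grass": set(),                     # producer: eats nothing
    "grasshopper": {"grass"},           # herbivore
    "frog": {"grasshopper"},            # carnivore
    "snake": {"frog", "grasshopper"},   # carnivore with a mixed diet
}

def trophic_level(species, web):
    """Level 1 for producers; otherwise 1 + the mean level of the diet.
    Assumes the web is acyclic (no mutual predation loops)."""
    prey = web[species]
    if not prey:
        return 1.0
    return 1.0 + sum(trophic_level(p, web) for p in prey) / len(prey)

for s in food_web:
    print(s, trophic_level(s, food_web))
```

Averaging over the diet gives the snake a fractional level of 3.5, which is why real food webs report non-integer trophic levels for omnivorous species.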
Decomposers (detritivores) break down dead plant and animal material and wastes and release it again as energy and nutrients into Document 1::: Consumer–resource interactions are the core motif of ecological food chains or food webs, and are an umbrella term for a variety of more specialized types of biological species interactions including prey-predator (see predation), host-parasite (see parasitism), plant-herbivore and victim-exploiter systems. These kinds of interactions have been studied and modeled by population ecologists for nearly a century. Species at the bottom of the food chain, such as algae and other autotrophs, consume non-biological resources, such as minerals and nutrients of various kinds, and they derive their energy from light (photons) or chemical sources. Species higher up in the food chain survive by consuming other species and can be classified by what they eat and how they obtain or find their food. Classification of consumer types The standard categorization Various terms have arisen to define consumers by what they eat, such as meat-eating carnivores, fish-eating piscivores, insect-eating insectivores, plant-eating herbivores, seed-eating granivores, and fruit-eating frugivores and omnivores are meat eaters and plant eaters. An extensive classification of consumer categories based on a list of feeding behaviors exists. The Getz categorization Another way of categorizing consumers, proposed by South African American ecologist Wayne Getz, is based on a biomass transformation web (BTW) formulation that organizes resources into five components: live and dead animal, live and dead plant, and particulate (i.e. broken down plant and animal) matter. It also distinguishes between consumers that gather their resources by moving across landscapes from those that mine their resources by becoming sessile once they have located a stock of resources large enough for them to feed on during completion of a full life history stage. 
In Getz's scheme, words for miners are of Greek etymology and words for gatherers are of Latin etymology. Thus a bestivore, such as a cat, preys on live animal Document 2::: The soil food web is the community of organisms living all or part of their lives in the soil. It describes a complex living system in the soil and how it interacts with the environment, plants, and animals. Food webs describe the transfer of energy between species in an ecosystem. While a food chain examines one, linear, energy pathway through an ecosystem, a food web is more complex and illustrates all of the potential pathways. Much of this transferred energy comes from the sun. Plants use the sun’s energy to convert inorganic compounds into energy-rich, organic compounds, turning carbon dioxide and minerals into plant material by photosynthesis. Plant flowers exude energy-rich nectar above ground and plant roots exude acids, sugars, and ectoenzymes into the rhizosphere, adjusting the pH and feeding the food web underground. Plants are called autotrophs because they make their own energy; they are also called producers because they produce energy available for other organisms to eat. Heterotrophs are consumers that cannot make their own food. In order to obtain energy they eat plants or other heterotrophs. Above ground food webs In above ground food webs, energy moves from producers (plants) to primary consumers (herbivores) and then to secondary consumers (predators). The phrase, trophic level, refers to the different levels or steps in the energy pathway. In other words, the producers, consumers, and decomposers are the main trophic levels. This chain of energy transferring from one species to another can continue several more times, but eventually ends. At the end of the food chain, decomposers such as bacteria and fungi break down dead plant and animal material into simple nutrients. Methodology The nature of soil makes direct observation of food webs difficult. 
Since soil organisms range in size from less than 0.1 mm (nematodes) to greater than 2 mm (earthworms) there are many different ways to extract them. Soil samples are often taken using a metal Document 3::: Feeding is the process by which organisms, typically animals, obtain food. Terminology often uses either the suffixes -vore, -vory, or -vorous from Latin vorare, meaning "to devour", or -phage, -phagy, or -phagous from Greek φαγεῖν (), meaning "to eat". Evolutionary history The evolution of feeding is varied with some feeding strategies evolving several times in independent lineages. In terrestrial vertebrates, the earliest forms were large amphibious piscivores 400 million years ago. While amphibians continued to feed on fish and later insects, reptiles began exploring two new food types, other tetrapods (carnivory), and later, plants (herbivory). Carnivory was a natural transition from insectivory for medium and large tetrapods, requiring minimal adaptation (in contrast, a complex set of adaptations was necessary for feeding on highly fibrous plant materials). Evolutionary adaptations The specialization of organisms towards specific food sources is one of the major causes of evolution of form and function, such as: mouth parts and teeth, such as in whales, vampire bats, leeches, mosquitos, predatory animals such as felines and fishes, etc. distinct forms of beaks in birds, such as in hawks, woodpeckers, pelicans, hummingbirds, parrots, kingfishers, etc. specialized claws and other appendages, for apprehending or killing (including fingers in primates) changes in body colour for facilitating camouflage, disguise, setting up traps for preys, etc. 
changes in the digestive system, such as the system of stomachs of herbivores, commensalism and symbiosis Classification By mode of ingestion There are many modes of feeding that animals exhibit, including: Filter feeding: obtaining nutrients from particles suspended in water Deposit feeding: obtaining nutrients from particles suspended in soil Fluid feeding: obtaining nutrients by consuming other organisms' fluids Bulk feeding: obtaining nutrients by eating all of an organism. Ram feeding and suction feeding: in Document 4::: A nutrient is a substance used by an organism to survive, grow, and reproduce. The requirement for dietary nutrient intake applies to animals, plants, fungi, and protists. Nutrients can be incorporated into cells for metabolic purposes or excreted by cells to create non-cellular structures, such as hair, scales, feathers, or exoskeletons. Some nutrients can be metabolically converted to smaller molecules in the process of releasing energy, such as for carbohydrates, lipids, proteins, and fermentation products (ethanol or vinegar), leading to end-products of water and carbon dioxide. All organisms require water. Essential nutrients for animals are the energy sources, some of the amino acids that are combined to create proteins, a subset of fatty acids, vitamins and certain minerals. Plants require more diverse minerals absorbed through roots, plus carbon dioxide and oxygen absorbed through leaves. Fungi live on dead or living organic matter and meet nutrient needs from their host. Different types of organisms have different essential nutrients. Ascorbic acid (vitamin C) is essential, meaning it must be consumed in sufficient amounts, to humans and some other animal species, but some animals and plants are able to synthesize it. Nutrients may be organic or inorganic: organic compounds include most compounds containing carbon, while all other chemicals are inorganic. 
Inorganic nutrients include nutrients such as iron, selenium, and zinc, while organic nutrients include, among many others, energy-providing compounds and vitamins. A classification used primarily to describe nutrient needs of animals divides nutrients into macronutrients and micronutrients. Consumed in relatively large amounts (grams or ounces), macronutrients (carbohydrates, fats, proteins, water) are primarily used to generate energy or to incorporate into tissues for growth and repair. Micronutrients are needed in smaller amounts (milligrams or micrograms); they have subtle biochemical and physiologi The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What members of an ecosystem food chain take in food by eating producers or other living things? A. consumers B. decomposers C. primary producers D. insectivores Answer:
sciq-9530
multiple_choice
What is the term for when clumped solids sink to the bottom of the water?
[ "sedimentation", "Foundation", "Clumping", "sediment" ]
A
Relevant Documents: Document 0::: Sediment transport is the movement of solid particles (sediment), typically due to a combination of gravity acting on the sediment, and the movement of the fluid in which the sediment is entrained. Sediment transport occurs in natural systems where the particles are clastic rocks (sand, gravel, boulders, etc.), mud, or clay; the fluid is air, water, or ice; and the force of gravity acts to move the particles along the sloping surface on which they are resting. Sediment transport due to fluid motion occurs in rivers, oceans, lakes, seas, and other bodies of water due to currents and tides. Transport is also caused by glaciers as they flow, and on terrestrial surfaces under the influence of wind. Sediment transport due only to gravity can occur on sloping surfaces in general, including hillslopes, scarps, cliffs, and the continental shelf—continental slope boundary. Sediment transport is important in the fields of sedimentary geology, geomorphology, civil engineering, hydraulic engineering and environmental engineering (see applications, below). Knowledge of sediment transport is most often used to determine whether erosion or deposition will occur, the magnitude of this erosion or deposition, and the time and distance over which it will occur. Mechanisms Aeolian Aeolian or eolian (depending on the parsing of æ) is the term for sediment transport by wind. This process results in the formation of ripples and sand dunes. Typically, the size of the transported sediment is fine sand (<1 mm) and smaller, because air is a fluid with low density and viscosity, and can therefore not exert very much shear on its bed. Bedforms are generated by aeolian sediment transport in the terrestrial near-surface environment. Ripples and dunes form as a natural self-organizing response to sediment transport. 
Aeolian sediment transport is common on beaches and in the arid regions of the world, because it is in these environments that vegetation does not prevent the presence and motion Document 1::: A slurry is a mixture of denser solids suspended in liquid, usually water. The most common use of slurry is as a means of transporting solids or separating minerals, the liquid being a carrier that is pumped on a device such as a centrifugal pump. The size of solid particles may vary from 1 micrometre up to hundreds of millimetres. The particles may settle below a certain transport velocity and the mixture can behave like a Newtonian or non-Newtonian fluid. Depending on the mixture, the slurry may be abrasive and/or corrosive. Examples Examples of slurries include: Cement slurry, a mixture of cement, water, and assorted dry and liquid additives used in the petroleum and other industries Soil/cement slurry, also called Controlled Low-Strength Material (CLSM), flowable fill, controlled density fill, flowable mortar, plastic soil-cement, K-Krete, and other names A mixture of thickening agent, oxidizers, and water used to form a gel explosive A mixture of pyroclastic material, rocky debris, and water produced in a volcanic eruption and known as a lahar A mixture of bentonite and water used to make slurry walls Coal slurry, a mixture of coal waste and water, or crushed coal and water Slip, a mixture of clay and water used for joining, glazing and decoration of ceramics and pottery. Slurry oil, the highest boiling fraction distilled from the effluent of an FCC unit in an oil refinery. It contains a large amount of catalyst, in form of sediments hence the denomination of slurry. 
A mixture of wood pulp and water used to make paper Manure slurry, a mixture of animal waste, organic matter, and sometimes water often known simply as "slurry" in agricultural use, used as fertilizer after aging in a slurry pit Meat slurry, a mixture of finely ground meat and water, centrifugally dewatered and used as a food ingredient. An abrasive substance used in chemical-mechanical polishing Slurry ice, a mixture of ice crystals, freezing point depressant, and water A mixture of raw materials Document 2::: Marine clay is a type of clay found in coastal regions around the world. In the northern, deglaciated regions, it can sometimes be quick clay, which is notorious for being involved in landslides. Marine clay is a particle of soil that is dedicated to a particle size class, this is usually associated with USDA's classification with sand at 0.05mm, silt at 0.05-.002mm and clay being less than 0.002 mm in diameter. Paired with the fact this size of particle was deposited within a marine system involving the erosion and transportation of the clay into the ocean. Soil particles become suspended when in a solution with water, with sand being affected by the force of gravity first with suspended silt and clay still floating in solution. This is also known as turbidity, in which floating soil particles create a murky brown color to a water solution. These clay particles are then transferred to the abyssal plain in which they are deposited in high percentages of clay. Once the clay is deposited on the ocean floor it can change its structure through a process known as flocculation, process by which fine particulates are caused to clump together or floc. These can be either edge to edge flocculation or edge to face flocculation. Relating to individual clay particles interacting with each other. Clays can also be aggregated or shifted in their structure besides being flocculated. 
Particles configurations Clay particles can self-assemble into various configurations, each with totally different properties. This change in structure to the clay particles is due to a swap in cations with the basic structure of a clay particle. This basic structure of the clay particle is known as a silica tetrahedral or aluminum octahedral. They are the basic structure of clay particles composing of one cation, usually silica or aluminum surrounded by hydroxide anions, these particles form in sheets forming what we know as clay particles and have very specific properties to them including m Document 3::: Accretion is the process of coastal sediment returning to the visible portion of a beach or foreshore after a submersion event. A sustainable beach or foreshore often goes through a cycle of submersion during rough weather and later accretion during calmer periods. If a coastline is not in a healthy sustainable state, erosion can be more serious, and accretion does not fully restore the original volume of the visible beach or foreshore, which leads to permanent beach loss. Coastal geography Deposition (geology) Physical oceanography Document 4::: Marine sediment, or ocean sediment, or seafloor sediment, are deposits of insoluble particles that have accumulated on the seafloor. These particles have their origins in soil and rocks and have been transported from the land to the sea, mainly by rivers but also by dust carried by wind and by the flow of glaciers into the sea. Additional deposits come from marine organisms and chemical precipitation in seawater, as well as from underwater volcanoes and meteorite debris. Except within a few kilometres of a mid-ocean ridge, where the volcanic rock is still relatively young, most parts of the seafloor are covered in sediment. This material comes from several different sources and is highly variable in composition. Seafloor sediment can range in thickness from a few millimetres to several tens of kilometres. 
Near the surface seafloor sediment remains unconsolidated, but at depths of hundreds to thousands of metres the sediment becomes lithified (turned to rock). Rates of sediment accumulation are relatively slow throughout most of the ocean, in many cases taking thousands of years for any significant deposits to form. Sediment transported from the land accumulates the fastest, on the order of one metre or more per thousand years for coarser particles. However, sedimentation rates near the mouths of large rivers with high discharge can be orders of magnitude higher. Biogenous oozes accumulate at a rate of about one centimetre per thousand years, while small clay particles are deposited in the deep ocean at around one millimetre per thousand years. Sediments from the land are deposited on the continental margins by surface runoff, river discharge, and other processes. Turbidity currents can transport this sediment down the continental slope to the deep ocean floor. The deep ocean floor undergoes its own process of spreading out from the mid-ocean ridge, and then slowly subducts accumulated sediment on the deep floor into the molten interior of the earth. In turn, molt The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the term for when clumped solids sink to the bottom of the water? A. sedimentation B. Foundation C. Clumping D. sediment Answer:
sciq-8929
multiple_choice
What decreases with the loss of subsequent protons?
[ "seawater strength", "example strength", "Movement strength", "acid strength" ]
D
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. 
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Advanced Placement (AP) Physics B was a physics course administered by the College Board as part of its Advanced Placement program. It was equivalent to a year-long introductory university course covering Newtonian mechanics, electromagnetism, fluid mechanics, thermal physics, waves, optics, and modern physics. The course was algebra-based and heavily computational; in 2015, it was replaced by the more concept-focused AP Physics 1 and AP Physics 2. Exam The exam consisted of a 70 MCQ section, followed by a 6-7 FRQ section. Each section was 90 minutes and was worth 50% of the final score. The MCQ section banned calculators, while the FRQ allowed calculators and a list of common formulas. Overall, the exam was configured to approximately cover a set percentage of each of the five target categories: Purpose According to the College Board web site, the Physics B course provided "a foundation in physics for students in the life sciences, a pre medical career path, and some applied sciences, as well as other fields not directly related to science." Discontinuation Starting in the 2014–2015 school year, AP Physics B was no longer offered, and AP Physics 1 and AP Physics 2 took its place. Like AP Physics B, both are algebra-based, and both are designed to be taught as year-long courses. Grade distribution The grade distributions for the Physics B scores from 2010 until its discontinuation in 2014 are as follows: Document 2::: There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. 
Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework. AP Physics 1 and 2 AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge. AP Physics 1 AP Physics 1 covers Newtonian mechanics, including: Unit 1: Kinematics Unit 2: Dynamics Unit 3: Circular Motion and Gravitation Unit 4: Energy Unit 5: Momentum Unit 6: Simple Harmonic Motion Unit 7: Torque and Rotational Motion Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2. AP Physics 2 AP Physics 2 covers the following topics: Unit 1: Fluids Unit 2: Thermodynamics Unit 3: Electric Force, Field, and Potential Unit 4: Electric Circuits Unit 5: Magnetism and Electromagnetic Induction Unit 6: Geometric and Physical Optics Unit 7: Quantum, Atomic, and Nuclear Physics AP Physics C From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single Document 3::: Advanced Placement (AP) Physics 1 is a year-long introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a one-semester algebra-based university course in mechanics. Along with AP Physics 2, the first AP Physics 1 exam was administered in 2015. 
In its first five years, AP Physics 1 covered forces and motion, conservation laws, waves, and electricity. As of 2021, AP Physics 1 includes mechanics topics only. History The heavily computational AP Physics B course served for four decades as the College Board's algebra-based offering. As part of the College Board's redesign of science courses, AP Physics B was discontinued; therefore, AP Physics 1 and 2 were created with guidance from the National Research Council and the National Science Foundation. The course covers material of a first-semester university undergraduate physics course offered at American universities that use best practices of physics pedagogy. The first AP Physics 1 classes had begun in the 2014–2015 school year, with the first AP exams administered in May 2015. Curriculum AP Physics 1 is an algebra-based, introductory college-level physics course that includes mechanics topics such as motion, force, momentum, energy, harmonic motion, and rotation; The College Board published a curriculum framework that includes seven big ideas on which the AP Physics 1 and 2 courses are based, along with "enduring understandings" students are expected to acquire within each of the big ideas.: Questions for the exam are constructed with direct reference to items in the curriculum framework. Student understanding of each topic is tested with reference to multiple skills—that is, questions require students to use quantitative, semi-quantitative, qualitative, and experimental reasoning in each content area. Exam Science Practices Assessed Multiple Choice and Free Response Sections of the AP® Physics 1 exam are also assessed on scientific prac Document 4::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. 
It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What decreases with the loss of subsequent protons? A. seawater strength B. example strength C. 
Movement strength D. acid strength Answer:
sciq-178
multiple_choice
Cations have what type of charge?
[ "constant", "negative", "neutral", "positive" ]
D
Relevant Documents: Document 0::: Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams. Course content E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are: Electrostatics Conductors, capacitors, and dielectrics Electric circuits Magnetic fields Electromagnetism. Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class. AP test The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution. Registration The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test. Format The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. 
This and AP Physics C: Mechanics are the shortest AP exams, with Document 1::: There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework. AP Physics 1 and 2 AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge. AP Physics 1 AP Physics 1 covers Newtonian mechanics, including: Unit 1: Kinematics Unit 2: Dynamics Unit 3: Circular Motion and Gravitation Unit 4: Energy Unit 5: Momentum Unit 6: Simple Harmonic Motion Unit 7: Torque and Rotational Motion Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2. AP Physics 2 AP Physics 2 covers the following topics: Unit 1: Fluids Unit 2: Thermodynamics Unit 3: Electric Force, Field, and Potential Unit 4: Electric Circuits Unit 5: Magnetism and Electromagnetic Induction Unit 6: Geometric and Physical Optics Unit 7: Quantum, Atomic, and Nuclear Physics AP Physics C From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. 
The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. 
Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 3::: In physics, a charge carrier is a particle or quasiparticle that is free to move, carrying an electric charge, especially the particles that carry electric charges in electrical conductors. Examples are electrons, ions and holes. The term is used most commonly in solid state physics. In a conducting medium, an electric field can exert force on these free particles, causing a net motion of the particles through the medium; this is what constitutes an electric current. The electron and the proton are the elementary charge carriers, each carrying one elementary charge (e), of the same magnitude and opposite sign. In conductors In conducting media, particles serve to carry charge: In many metals, the charge carriers are electrons. One or two of the valence electrons from each atom are able to move about freely within the crystal structure of the metal. The free electrons are referred to as conduction electrons, and the cloud of free electrons is called a Fermi gas. Many metals have electron and hole bands. In some, the majority carriers are holes. In electrolytes, such as salt water, the charge carriers are ions, which are atoms or molecules that have gained or lost electrons so they are electrically charged. Atoms that have gained electrons so they are negatively charged are called anions, atoms that have lost electrons so they are positively charged are called cations. Cations and anions of the dissociated liquid also serve as charge carriers in melted ionic solids (see e.g. the Hall–Héroult process for an example of electrolysis of a melted ionic solid). Proton conductors are electrolytic conductors employing positive hydrogen ions as carriers. 
In a plasma, an electrically charged gas which is found in electric arcs through air, neon signs, and the sun and stars, the electrons and cations of ionized gas act as charge carriers. In a vacuum, free electrons can act as charge carriers. In the electronic component known as the vacuum tube (also called valve), the mobil Document 4::: An ion () is an atom or molecule with a net electrical charge. The charge of an electron is considered to be negative by convention and this charge is equal and opposite to the charge of a proton, which is considered to be positive by convention. The net charge of an ion is not zero because its total number of electrons is unequal to its total number of protons. A cation is a positively charged ion with fewer electrons than protons while an anion is a negatively charged ion with more electrons than protons. Opposite electric charges are pulled towards one another by electrostatic force, so cations and anions attract each other and readily form ionic compounds. Ions consisting of only a single atom are termed atomic or monatomic ions, while two or more atoms form molecular ions or polyatomic ions. In the case of physical ionization in a fluid (gas or liquid), "ion pairs" are created by spontaneous molecule collisions, where each generated pair consists of a free electron and a positive ion. Ions are also created by chemical interactions, such as the dissolution of a salt in liquids, or by other means, such as passing a direct current through a conducting solution, dissolving an anode via ionization. History of discovery The word ion was coined from Greek neuter present participle of ienai (), meaning "to go". A cation is something that moves down ( pronounced kato, meaning "down") and an anion is something that moves up (, meaning "up"). They are so called because ions move toward the electrode of opposite charge. 
This term was introduced (after a suggestion by the English polymath William Whewell) by English physicist and chemist Michael Faraday in 1834 for the then-unknown species that goes from one electrode to the other through an aqueous medium. Faraday did not know the nature of these species, but he knew that since metals dissolved into and entered a solution at one electrode and new metal came forth from a solution at the other electrode; that some kind of The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Cations have what type of charge? A. constant B. negative C. neutral D. positive Answer:
sciq-3402
multiple_choice
What is a group of similar ecosystems with the same general abiotic factors and primary producers called?
[ "biome", "ecological environment", "ecoculture", "population" ]
A
Relavent Documents: Document 0::: Ecological units, comprise concepts such as population, community, and ecosystem as the basic units, which are at the basis of ecological theory and research, as well as a focus point of many conservation strategies. The concept of ecological units continues to suffer from inconsistencies and confusion over its terminology. Analyses of the existing concepts used in describing ecological units have determined that they differ in respects to four major criteria: The questions as to whether they are defined statistically or via a network of interactions, If their boundaries are drawn by topographical or process-related criteria, How high the required internal relationships are, And if they are perceived as "real" entities or abstractions by an observer. A population is considered to be the smallest ecological unit, consisting of a group of individuals that belong to the same species. A community would be the next classification, referring to all of the population present in an area at a specific time, followed by an ecosystem, referring to the community and it's interactions with its physical environment. An ecosystem is the most commonly used ecological unit and can be universally defined by two common traits: The unit is often defined in terms of a natural border (maritime boundary, watersheds, etc.) Abiotic components and organisms within the unit are considered to be interlinked. See also Biogeographic realm Ecoregion Ecotope Holobiont Functional ecology Behavior settings Regional geology Document 1::: Ecological classification or ecological typology is the classification of land or water into geographical units that represent variation in one or more ecological features. Traditional approaches focus on geology, topography, biogeography, soils, vegetation, climate conditions, living species, habitats, water resources, and sometimes also anthropic factors. 
Most approaches pursue the cartographical delineation or regionalisation of distinct areas for mapping and planning. Approaches to classifications Different approaches to ecological classifications have been developed in terrestrial, freshwater and marine disciplines. Traditionally these approaches have focused on biotic components (vegetation classification), abiotic components (environmental approaches) or implied ecological and evolutionary processes (biogeographical approaches). Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy (ecotope). Vegetation classification Vegetation is often used to classify terrestrial ecological units. Vegetation classification can be based on vegetation structure and floristic composition. Classifications based entirely on vegetation structure overlap with land cover mapping categories. Many schemes of vegetation classification are in use by the land, resource and environmental management agencies of different national and state jurisdictions. The International Vegetation Classification (IVC or EcoVeg) has been recently proposed but has not been yet widely adopted. Vegetation classifications have limited use in aquatic systems, since only a handful of freshwater or marine habitats are dominated by plants (e.g. kelp forests or seagrass meadows). Also, some extreme terrestrial environments, like subterranean or cryogenic ecosystems, are not properly described in vegetation c Document 2::: In ecology, habitat refers to the array of resources, physical and biotic factors that are present in an area, such as to support the survival and reproduction of a particular species. A species habitat can be seen as the physical manifestation of its ecological niche. 
Thus "habitat" is a species-specific term, fundamentally different from concepts such as environment or vegetation assemblages, for which the term "habitat-type" is more appropriate. The physical factors may include (for example): soil, moisture, range of temperature, and light intensity. Biotic factors include the availability of food and the presence or absence of predators. Every species has particular habitat requirements, with habitat generalist species able to thrive in a wide array of environmental conditions while habitat specialist species requiring a very limited set of factors to survive. The habitat of a species is not necessarily found in a geographical area, it can be the interior of a stem, a rotten log, a rock or a clump of moss; a parasitic organism has as its habitat the body of its host, part of the host's body (such as the digestive tract), or a single cell within the host's body. Habitat types are environmental categorizations of different environments based on the characteristics of a given geographical area, particularly vegetation and climate. Thus habitat types do not refer to a single species but to multiple species living in the same area. For example, terrestrial habitat types include forest, steppe, grassland, semi-arid or desert. Fresh-water habitat types include marshes, streams, rivers, lakes, and ponds; marine habitat types include salt marshes, the coast, the intertidal zone, estuaries, reefs, bays, the open sea, the sea bed, deep water and submarine vents. Habitat types may change over time. Causes of change may include a violent event (such as the eruption of a volcano, an earthquake, a tsunami, a wildfire or a change in oceanic currents); or change may occur mo Document 3::: Ecosystem diversity deals with the variations in ecosystems within a geographical location and its overall impact on human existence and the environment. 
Ecosystem diversity addresses the combined characteristics of biotic properties (biodiversity) and abiotic properties (geodiversity). It is a variation in the ecosystems found in a region or the variation in ecosystems over the whole planet. Ecological diversity includes the variation in both terrestrial and aquatic ecosystems. Ecological diversity can also take into account the variation in the complexity of a biological community, including the number of different niches, the number of and other ecological processes. An example of ecological diversity on a global scale would be the variation in ecosystems, such as deserts, forests, grasslands, wetlands and oceans. Ecological diversity is the largest scale of biodiversity, and within each ecosystem, there is a great deal of both species and genetic diversity. Impact Diversity in the ecosystem is significant to human existence for a variety of reasons. Ecosystem diversity boosts the availability of oxygen via the process of photosynthesis amongst plant organisms domiciled in the habitat. Diversity in an aquatic environment helps in the purification of water by plant varieties for use by humans. Diversity increases plant varieties which serves as a good source for medicines and herbs for human use. A lack of diversity in the ecosystem produces an opposite result. Examples Some examples of ecosystems that are rich in diversity are: Deserts Forests Large marine ecosystems Marine ecosystems Old-growth forests Rainforests Tundra Coral reefs Marine Ecosystem diversity as a result of evolutionary pressure Ecological diversity around the world can be directly linked to the evolutionary and selective pressures that constrain the diversity outcome of the ecosystems within different niches. Tundras, Rainforests, coral reefs and deciduous forests all are form Document 4::: This glossary of ecology is a list of definitions of terms and concepts in ecology and related fields. 
For more specific definitions from other glossaries related to ecology, see Glossary of biology, Glossary of evolutionary biology, and Glossary of environmental science. A B C D E F G H I J K L M N O P Q R S T U V W X Y Z See also Outline of ecology History of ecology The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is a group of similar ecosystems with the same general abiotic factors and primary producers called? A. biome B. ecological environment C. ecoculture D. population Answer:
sciq-3304
multiple_choice
The three types of mammals are characterized by their method of what?
[ "specialization", "pattern", "reproduction", "differentiation" ]
C
Relavent Documents: Document 0::: In zoology, mammalogy is the study of mammals – a class of vertebrates with characteristics such as homeothermic metabolism, fur, four-chambered hearts, and complex nervous systems. Mammalogy has also been known as "mastology," "theriology," and "therology." The archive of number of mammals on earth is constantly growing, but is currently set at 6,495 different mammal species including recently extinct. There are 5,416 living mammals identified on earth and roughly 1,251 have been newly discovered since 2006. The major branches of mammalogy include natural history, taxonomy and systematics, anatomy and physiology, ethology, ecology, and management and control. The approximate salary of a mammalogist varies from $20,000 to $60,000 a year, depending on their experience. Mammalogists are typically involved in activities such as conducting research, managing personnel, and writing proposals. Mammalogy branches off into other taxonomically-oriented disciplines such as primatology (study of primates), and cetology (study of cetaceans). Like other studies, mammalogy is also a part of zoology which is also a part of biology, the study of all living things. Research purposes Mammalogists have stated that there are multiple reasons for the study and observation of mammals. Knowing how mammals contribute or thrive in their ecosystems gives knowledge on the ecology behind it. Mammals are often used in business industries, agriculture, and kept for pets. Studying mammals habitats and source of energy has led to aiding in survival. The domestication of some small mammals has also helped discover several different diseases, viruses, and cures. Mammalogist A mammalogist studies and observes mammals. In studying mammals, they can observe their habitats, contributions to the ecosystem, their interactions, and the anatomy and physiology. A mammalogist can do a broad variety of things within the realm of mammals. 
A mammalogist on average can make roughly $58,000 a year. This dep Document 1::: Vertebrate zoology is the biological discipline that consists of the study of Vertebrate animals, i.e., animals with a backbone, such as fish, amphibians, reptiles, birds and mammals. Many natural history museums have departments named Vertebrate Zoology. In some cases whole museums bear this name, e.g. the Museum of Vertebrate Zoology at the University of California, Berkeley. Subdivisions This subdivision of zoology has many further subdivisions, including: Ichthyology - the study of fishes. Mammalogy - the study of mammals. Chiropterology - the study of bats. Primatology - the study of primates. Ornithology - the study of birds. Herpetology - the study of reptiles. Batrachology - the study of amphibians. These divisions are sometimes further divided into more specific specialties. Document 2::: Animals are multicellular eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, are able to move, reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million in total. Animals range in size from 8.5 millionths of a metre to long and have complex interactions with each other and their environments, forming intricate food webs. The study of animals is called zoology. Animals may be listed or indexed by many criteria, including taxonomy, status as endangered species, their geographical location, and their portrayal and/or naming in human culture. 
By common name List of animal names (male, female, young, and group) By aspect List of common household pests List of animal sounds List of animals by number of neurons By domestication List of domesticated animals By eating behaviour List of herbivorous animals List of omnivores List of carnivores By endangered status IUCN Red List endangered species (Animalia) United States Fish and Wildlife Service list of endangered species By extinction List of extinct animals List of extinct birds List of extinct mammals List of extinct cetaceans List of extinct butterflies By region Lists of amphibians by region Lists of birds by region Lists of mammals by region Lists of reptiles by region By individual (real or fictional) Real Lists of snakes List of individual cats List of oldest cats List of giant squids List of individual elephants List of historical horses List of leading Thoroughbred racehorses List of individual apes List of individual bears List of giant pandas List of individual birds List of individual bovines List of individual cetaceans List of individual dogs List of oldest dogs List of individual monkeys List of individual pigs List of w Document 3::: Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology. 
Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad Document 4::: Roshd Biological Education is a quarterly science educational magazine covering recent developments in biology and biology education for a biology teacher Persian -speaking audience. Founded in 1985, it is published by The Teaching Aids Publication Bureau, Organization for Educational Planning and Research, Ministry of Education, Iran. Roshd Biological Education has an editorial board composed of Iranian biologists, experts in biology education, science journalists and biology teachers. It is read by both biology teachers and students, as a way of launching innovations and new trends in biology education, and helping biology teachers to teach biology in better and more effective ways. 
Magazine layout As of Autumn 2012, the magazine is laid out as follows: Editorial—often offering a view of point from editor in chief on an educational and/or biological topics. Explore— New research methods and results on biology and/or education. World— Reports and explores on biological education worldwide. In Brief—Summaries of research news and discoveries. Trends—showing how new technology is altering the way we live our lives. Point of View—Offering personal commentaries on contemporary topics. Essay or Interview—often with a pioneer of a biological and/or educational researcher or an influential scientific educational leader. Muslim Biologists—Short histories of Muslim Biologists. Environment—An article on Iranian environment and its problems. News and Reports—Offering short news and reports events on biology education. In Brief—Short articles explaining interesting facts. Questions and Answers—Questions about biology concepts and their answers. Book and periodical Reviews—About new publication on biology and/or education. Reactions—Letter to the editors. Editorial staff Mohammad Karamudini, editor in chief History Roshd Biological Education started in 1985 together with many other magazines in other science and art. The first editor was Dr. Nouri-Dalooi, th The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The three types of mammals are characterized by their method of what? A. specializaiton B. pattern C. reproduction D. differentiation Answer:
sciq-4323
multiple_choice
The attractive force between water molecules is what kind of reaction?
[ "symmetrical reaction", "diffusion reaction", "dipole reaction", "aquatic reaction" ]
C
Relavent Documents: Document 0::: The hydrophobic effect is the observed tendency of nonpolar substances to aggregate in an aqueous solution and exclude water molecules. The word hydrophobic literally means "water-fearing", and it describes the segregation of water and nonpolar substances, which maximizes hydrogen bonding between molecules of water and minimizes the area of contact between water and nonpolar molecules. In terms of thermodynamics, the hydrophobic effect is the free energy change of water surrounding a solute. A positive free energy change of the surrounding solvent indicates hydrophobicity, whereas a negative free energy change implies hydrophilicity. The hydrophobic effect is responsible for the separation of a mixture of oil and water into its two components. It is also responsible for effects related to biology, including: cell membrane and vesicle formation, protein folding, insertion of membrane proteins into the nonpolar lipid environment and protein-small molecule associations. Hence the hydrophobic effect is essential to life. Substances for which this effect is observed are known as hydrophobes. Amphiphiles Amphiphiles are molecules that have both hydrophobic and hydrophilic domains. Detergents are composed of amphiphiles that allow hydrophobic molecules to be solubilized in water by forming micelles and bilayers (as in soap bubbles). They are also important to cell membranes composed of amphiphilic phospholipids that prevent the internal aqueous environment of a cell from mixing with external water. Folding of macromolecules In the case of protein folding, the hydrophobic effect is important to understanding the structure of proteins that have hydrophobic amino acids (such as glycine, alanine, valine, leucine, isoleucine, phenylalanine, tryptophan and methionine) clustered together within the protein. Structures of water-soluble proteins have a hydrophobic core in which side chains are buried from water, which stabilizes the folded state. 
Charged and polar side ch Document 1::: The Szyszkowski Equation has been used by Meissner and Michaels to describe the decrease in surface tension of aqueous solutions of carboxylic acids, alcohols and esters at varying mole fractions. It describes the exponential decrease of the surface tension at low concentrations reasonably but should be used only at concentrations below 1 mole%. Equation with: σm is surface tension of the mixture σw is surface tension of pure water a is component specific constant (see table below) x is mole fraction of the solvated component The equation can be rearranged to be explicit in a: This allows the direct calculation of that component specific parameter a from experimental data. The equation can also be written as: with: γ is surface tension of the mixture γ0 is surface tension of pure water R is ideal gas constant 8.31 J/(mol*K) T is temperature in K ω is cross-sectional area of the surfactant molecules at the surface The surface tension of pure water is dependent on temperature. At room temperature (298 K), it is equal to 71.97 mN/m Parameters Meissner and Michaels published the following a constants: Example The following table and diagram show experimentally determined surface tensions in the mixture of water and propionic acid. This example shows a good agreement between the published value a=2.6*10−3 and the calculated value a=2.59*10−3 at the smallest given mole fraction of 0.00861 but at higher concentrations of propionic acid the value of an increases considerably, showing deviations from the predicted value. See also Bohdan Szyszkowski Document 2::: Chemisorption is a kind of adsorption which involves a chemical reaction between the surface and the adsorbate. New chemical bonds are generated at the adsorbent surface. Examples include macroscopic phenomena that can be very obvious, like corrosion, and subtler effects associated with heterogeneous catalysis, where the catalyst and reactants are in different phases. 
The strong interaction between the adsorbate and the substrate surface creates new types of electronic bonds. In contrast with chemisorption is physisorption, which leaves the chemical species of the adsorbate and surface intact. It is conventionally accepted that the energetic threshold separating the binding energy of "physisorption" from that of "chemisorption" is about 0.5 eV per adsorbed species. Due to specificity, the nature of chemisorption can greatly differ, depending on the chemical identity and the surface structural properties. The bond between the adsorbate and adsorbent in chemisorption is either ionic or covalent. Uses An important example of chemisorption is in heterogeneous catalysis which involves molecules reacting with each other via the formation of chemisorbed intermediates. After the chemisorbed species combine (by forming bonds with each other) the product desorbs from the surface. Self-assembled monolayers Self-assembled monolayers (SAMs) are formed by chemisorbing reactive reagents with metal surfaces. A famous example involves thiols (RS-H) adsorbing onto the surface of gold. This process forms strong Au-SR bonds and releases H2. The densely packed SR groups protect the surface. Gas-surface chemisorption Adsorption kinetics As an instance of adsorption, chemisorption follows the adsorption process. The first stage is for the adsorbate particle to come into contact with the surface. The particle needs to be trapped onto the surface by not possessing enough energy to leave the gas-surface potential well. If it elastically collides with the surface, then it would Document 3::: An ideal solid surface is flat, rigid, perfectly smooth, and chemically homogeneous, and has zero contact angle hysteresis. Zero hysteresis implies the advancing and receding contact angles are equal. In other words, only one thermodynamically stable contact angle exists. 
When a drop of liquid is placed on such a surface, the characteristic contact angle is formed as depicted in Fig. 1. Furthermore, on an ideal surface, the drop will return to its original shape if it is disturbed. The following derivations apply only to ideal solid surfaces; they are only valid for the state in which the interfaces are not moving and the phase boundary line exists in equilibrium. Minimization of energy, three phases Figure 3 shows the line of contact where three phases meet. In equilibrium, the net force per unit length acting along the boundary line between the three phases must be zero. The components of net force in the direction along each of the interfaces are given by: where α, β, and θ are the angles shown and γij is the surface energy between the two indicated phases. These relations can also be expressed by an analog to a triangle known as Neumann’s triangle, shown in Figure 4. Neumann’s triangle is consistent with the geometrical restriction that , and applying the law of sines and law of cosines to it produce relations that describe how the interfacial angles depend on the ratios of surface energies. Because these three surface energies form the sides of a triangle, they are constrained by the triangle inequalities, γij < γjk + γik meaning that no one of the surface tensions can exceed the sum of the other two. If three fluids with surface energies that do not follow these inequalities are brought into contact, no equilibrium configuration consistent with Figure 3 will exist. Simplification to planar geometry, Young's relation If the β phase is replaced by a flat rigid surface, as shown in Figure 5, then β = π, and the second net force equation simplifies to the Y Document 4::: A micellar cubic phase is a lyotropic liquid crystal phase formed when the concentration of micelles dispersed in a solvent (usually water) is sufficiently high that they are forced to pack into a structure having a long-ranged positional (translational) order. 
For example, spherical micelles a cubic packing of a body-centered cubic lattice. Normal topology micellar cubic phases, denoted by the symbol I1, are the first lyotropic liquid crystalline phases that are formed by type I amphiphiles. The amphiphiles' hydrocarbon tails are contained on the inside of the micelle and hence the polar-apolar interface of the aggregates has a positive mean curvature, by definition (it curves away from the polar phase). The first pure surfactant system found to exhibit three different type I (oil-in-water) micellar cubic phases was observed in the dodecaoxyethylene mono-n-dodecyl ether (C12EO12)/water system. Inverse topology micellar cubic phases (such as the Fd3m phase) are observed for some type II amphiphiles at very high amphiphile concentrations. These aggregates, in which water is the minority phase, have a polar-apolar interface with a negative mean curvature. The structures of the normal topology micellar cubic phases that are formed by some types of amphiphiles (e.g. the oligoethyleneoxide monoalkyl ether series of non-ionic surfactants are the subject of debate. Micellar cubic phases are isotropic phases but are distinguished from micellar solutions by their very high viscosity. When thin film samples of micellar cubic phases are viewed under a polarising microscope they appear dark and featureless. Small air bubbles trapped in these preparations tend to appear highly distorted and occasionally have faceted surfaces. A reversed micellar cubic phase has been observed, although it is much less common. It was observed that a reverse micellar cubic phase with Fd3m (Q227) symmetry formed in a ternary system of an amphiphilic diblock copolymer (EO17BO10, where EO represents The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The attractive force between water molecules is what kind of reaction? A. symmetrical reaction B. diffusion reaction C. 
dipole reaction D. aquatic reaction Answer:
sciq-2449
multiple_choice
What rate is generally higher for endotherms than for ectotherms?
[ "metabolic", "mortality", "reproduction", "respiration" ]
A
Relevant Documents: Document 0::: An energy budget is a balance sheet of energy income against expenditure. It is studied in the field of Energetics which deals with the study of energy transfer and transformation from one form to another. Calorie is the basic unit of measurement. An organism in a laboratory experiment is an open thermodynamic system, exchanging energy with its surroundings in three ways - heat, work and the potential energy of biochemical compounds. Organisms use ingested food resources (C=consumption) as building blocks in the synthesis of tissues (P=production) and as fuel in the metabolic process that powers this synthesis and other physiological processes (R=respiratory loss). Some of the resources are lost as waste products (F=faecal loss, U=urinary loss). All these aspects of metabolism can be represented in energy units. The basic model of energy budget may be shown as: P = C - R - U - F or P = C - (R + U + F) or C = P + R + U + F All the aspects of metabolism can be represented in energy units (e.g. joules (J); 1 calorie = 4.2 kJ). Energy used for metabolism will be R = C - (F + U + P) Energy used in the maintenance will be R + F + U = C - P Endothermy and ectothermy Energy budget allocation varies for endotherms and ectotherms. Ectotherms rely on the environment as a heat source while endotherms maintain their body temperature through the regulation of metabolic processes. The heat produced in association with metabolic processes facilitates the active lifestyles of endotherms and their ability to travel far distances over a range of temperatures in the search for food. Ectotherms are limited by the ambient temperature of the environment around them but the lack of substantial metabolic heat production accounts for an energetically inexpensive metabolic rate. The energy demands for ectotherms are generally one tenth of that required for endotherms.
Document 1::: Endothermic organisms known as homeotherms maintain internal temperatures with minimal metabolic regulation within a range of ambient temperatures called the thermal neutral zone (TNZ). Within the TNZ the basal rate of heat production is equal to the rate of heat loss to the environment. Homeothermic organisms adjust to the temperatures within the TNZ through different responses requiring little energy. Environmental temperatures can cause fluctuations in a homeothermic organism's metabolic rate. This response is due to the energy required to maintain a relatively constant body temperature above ambient temperature by controlling heat loss and heat gain. The degree of this response depends not only on the species, but also on the levels of insulative and metabolic adaptation. Environmental temperatures below the TNZ, the lower critical temperature (LCT), require an organism to increase its metabolic rate to meet the environmental demands for heat. The Regulation about the TNZ requires metabolic heat production when the LCT is reached, as heat is lost to the environment. The organism reaches the LCT when the Ta (ambient temp.) decreases. When an organism reaches this stage the metabolic rate increases significantly and thermogenesis increases the Tb (body temp.) If the Ta continues to decrease far below the LCT hypothermia occurs. Alternatively, evaporative heat loss for cooling occurs when temperatures above the TNZ, the upper critical zone (UCT), are realized (Speakman and Keijer 2013). When the Ta reaches too far above the UCT, the rate of heat gain and rate of heat production become higher than the rate of heat dissipation (heat loss through evaporative cooling), resulting in hyperthermia. It can show postural changes where it changes its body shape or moves and exposes different areas to the sun/shade, and through radiation, convection and conduction, heat exchange occurs. 
Vasomotor responses allow control of the flow of blood between the periphery and the c Document 2::: Basal metabolic rate (BMR) is the rate of energy expenditure per unit time by endothermic animals at rest. It is reported in energy units per unit time ranging from watt (joule/second) to ml O2/min or joule per hour per kg body mass J/(h·kg). Proper measurement requires a strict set of criteria to be met. These criteria include being in a physically and psychologically undisturbed state and being in a thermally neutral environment while in the post-absorptive state (i.e., not actively digesting food). In bradymetabolic animals, such as fish and reptiles, the equivalent term standard metabolic rate (SMR) applies. It follows the same criteria as BMR, but requires the documentation of the temperature at which the metabolic rate was measured. This makes BMR a variant of standard metabolic rate measurement that excludes the temperature data, a practice that has led to problems in defining "standard" rates of metabolism for many mammals. Metabolism comprises the processes that the body needs to function. Basal metabolic rate is the amount of energy per unit of time that a person needs to keep the body functioning at rest. Some of those processes are breathing, blood circulation, controlling body temperature, cell growth, brain and nerve function, and contraction of muscles. Basal metabolic rate affects the rate that a person burns calories and ultimately whether that individual maintains, gains, or loses weight. The basal metabolic rate accounts for about 60 to 75% of the daily calorie expenditure by individuals. It is influenced by several factors. In humans, BMR typically declines by 1–2% per decade after age 20, mostly due to loss of fat-free mass, although the variability between individuals is high. Description The body's generation of heat is known as thermogenesis and it can be measured to determine the amount of energy expended. 
BMR generally decreases with age, and with the decrease in lean body mass (as may happen with aging). Increasing muscle mass has the ef Document 3::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. 
A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 4::: Respirometry is a general term that encompasses a number of techniques for obtaining estimates of the rates of metabolism of vertebrates, invertebrates, plants, tissues, cells, or microorganisms via an indirect measure of heat production (calorimetry). Whole-animal metabolic rates The metabolism of an animal is estimated by determining rates of carbon dioxide production (VCO2) and oxygen consumption (VO2) of individual animals, either in a closed or an open-circuit respirometry system. Two measures are typically obtained: standard (SMR) or basal metabolic rate (BMR) and maximal rate (VO2max). SMR is measured while the animal is at rest (but not asleep) under specific laboratory (temperature, hydration) and subject-specific conditions (e.g., size or allometry), age, reproduction status, post-absorptive to avoid thermic effect of food). VO2max is typically determined during aerobic exercise at or near physiological limits. In contrast, field metabolic rate (FMR) refers to the metabolic rate of an unrestrained, active animal in nature. Whole-animal metabolic rates refer to these measures without correction for body mass. If SMR or BMR values are divided by the body mass value for the animal, then the rate is termed mass-specific. It is this mass-specific value that one typically hears in comparisons among species. Closed respirometry Respirometry depends on a "what goes in must come out" principle. Consider a closed system first. 
Imagine that we place a mouse into an air-tight container. The air sealed in the container initially contains the same composition and proportions of gases that were present in the room: 20.95% O2, 0.04% CO2, water vapor (the exact amount depends on air temperature, see dew point), 78% (approximately) N2, 0.93% argon and a variety of trace gases making up the rest (see Earth's atmosphere). As time passes, the mouse in the chamber produces CO2 and water vapor, but extracts O2 from the air in proportion to its metabolic demands. Therefo The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What rate is generally higher for endotherms than for ectotherms? A. metabolic B. mortality C. reproduction D. respiration Answer:
sciq-8108
multiple_choice
What do you call an abnormal mass formed when cells divide uncontrollably?
[ "inflammation", "tumor", "layer", "infection." ]
B
Relevant Documents: Document 0::: Hyperplasia (from ancient Greek ὑπέρ huper 'over' + πλάσις plasis 'formation'), or hypergenesis, is an enlargement of an organ or tissue caused by an increase in the amount of organic tissue that results from cell proliferation. It may lead to the gross enlargement of an organ, and the term is sometimes confused with benign neoplasia or benign tumor. Hyperplasia is a common preneoplastic response to stimulus. Microscopically, cells resemble normal cells but are increased in numbers. Sometimes cells may also be increased in size (hypertrophy). Hyperplasia is different from hypertrophy in that the adaptive cell change in hypertrophy is an increase in the size of cells, whereas hyperplasia involves an increase in the number of cells. Causes Hyperplasia may be due to any number of causes, including proliferation of the basal layer of the epidermis to compensate for skin loss, chronic inflammatory response, hormonal dysfunctions, or compensation for damage or disease elsewhere. Hyperplasia may be harmless and occur on a particular tissue. An example of a normal hyperplastic response would be the growth and multiplication of milk-secreting glandular cells in the breast as a response to pregnancy, thus preparing for future breast feeding. Perhaps the most interesting and potent effect insulin-like growth factor 1 (IGF) has on the human body is its ability to cause hyperplasia, which is an actual splitting of cells. By contrast, hypertrophy is what occurs, for example, to skeletal muscle cells during weight training and is simply an increase in the size of the cells. With IGF use, one is able to cause hyperplasia which actually increases the number of muscle cells present in the tissue. Weight training enables these new cells to mature in size and strength.
It is theorized that hyperplasia may also be induced through specific power output training for athletic performance, thus increasing the number of muscle fibers instead of increasing the size of a single fiber. Mechanism Hype Document 1::: In haematology atypical localization of immature precursors (ALIP) refers to finding of atypically localized precursors (myeloblasts and promyelocytes) on bone marrow biopsy. In healthy humans, precursors are rare and are found localized near the endosteum, and consist of 1-2 cells. In some cases of myelodysplastic syndromes, immature precursors might be located in the intertrabecular region and occasionally aggregate as clusters of 3 ~ 5 cells. The presence of ALIPs is associated with worse prognosis of MDS . Recently, in bone marrow sections of patients with acute myeloid leukemia cells similar to ALIPs were defined as ALIP-like clusters. The presence of ALIP-like clusters in AML patients within remission was reported to be associated with early relapse of the disease. Document 2::: Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, double stranded macromolecule that carries the hereditary information of the cell and found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence. Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism. Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. 
All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry. See also Cell (biology) Cell biology Biomolecule Organelle Tissue (biology) External links https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm Document 3::: Cell proliferation is the process by which a cell grows and divides to produce two daughter cells. Cell proliferation leads to an exponential increase in cell number and is therefore a rapid mechanism of tissue growth. Cell proliferation requires both cell growth and cell division to occur at the same time, such that the average size of cells remains constant in the population. Cell division can occur without cell growth, producing many progressively smaller cells (as in cleavage of the zygote), while cell growth can occur without cell division to produce a single larger cell (as in growth of neurons). Thus, cell proliferation is not synonymous with either cell growth or cell division, despite these terms sometimes being used interchangeably. Stem cells undergo cell proliferation to produce proliferating "transit amplifying" daughter cells that later differentiate to construct tissues during normal development and tissue growth, during tissue regeneration after damage, or in cancer. The total number of cells in a population is determined by the rate of cell proliferation minus the rate of cell death. Cell size depends on both cell growth and cell division, with a disproportionate increase in the rate of cell growth leading to production of larger cells and a disproportionate increase in the rate of cell division leading to production of many smaller cells. 
Cell proliferation typically involves balanced cell growth and cell division rates that maintain a roughly constant cell size in the exponentially proliferating population of cells. Cell proliferation occurs by combining cell growth with regular "G1-S-M-G2" cell cycles to produce many diploid cell progeny. In single-celled organisms, cell proliferation is largely responsive to the availability of nutrients in the environment (or laboratory growth medium). In multicellular organisms, the process of cell proliferation is tightly controlled by gene regulatory networks encoded in the genome and executed mainly Document 4::: Anaplasia (from ana, "backward" + πλάσις plasis, "formation") is a condition of cells with poor cellular differentiation, losing the morphological characteristics of mature cells and their orientation with respect to each other and to endothelial cells. The term also refers to a group of morphological changes in a cell (nuclear pleomorphism, altered nuclear-cytoplasmic ratio, presence of nucleoli, high proliferation index) that point to a possible malignant transformation. Such loss of structural differentiation is especially seen in most, but not all, malignant neoplasms. Sometimes, the term also includes an increased capacity for multiplication. Lack of differentiation is considered a hallmark of aggressive malignancies (for example, it differentiates leiomyosarcomas from leiomyomas). The term anaplasia literally means "to form backward". It implies dedifferentiation, or loss of structural and functional differentiation of normal cells. It is now known, however, that at least some cancers arise from stem cells in tissues; in these tumors failure of differentiation, rather than dedifferentiation of specialized cells, account for undifferentiated tumors. Anaplastic cells display marked pleomorphism (variability). The nuclei are characteristically extremely hyperchromatic (darkly stained) and large. 
The nuclear-cytoplasmic ratio may approach 1:1 instead of the normal 1:4 or 1:6. Giant cells that are considerably larger than their neighbors may be formed and possess either one enormous nucleus or several nuclei (syncytia). Anaplastic nuclei are variable and bizarre in size and shape. The chromatin is coarse and clumped, and nucleoli may be of astounding size. More important, mitoses are often numerous and distinctly atypical; anarchic multiple spindles may be seen and sometimes appear as tripolar or quadripolar forms. Also, anaplastic cells usually fail to develop recognizable patterns of orientation to one another (i.e., they lose normal polarity). They may grow i The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do you call an abnormal mass formed when cells divide uncontrollably? A. inflammation B. tumor C. layer D. infection. Answer:
sciq-867
multiple_choice
Which kinds of cells have nuclei and other membrane bound organelles?
[ "prokaryotes", "monocytes", "eukaryotes", "lipids" ]
C
Relevant Documents: Document 0::: Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure. General characteristics There are two types of cells: prokaryotes and eukaryotes. Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles. Prokaryotes Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement). Eukaryotes Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur.
The cytoskeleton is made of fibers that support the str Document 1::: Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, double stranded macromolecule that carries the hereditary information of the cell and found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence. Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism. Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry. See also Cell (biology) Cell biology Biomolecule Organelle Tissue (biology) External links https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm Document 2::: The nucleoplasm, also known as karyoplasm, is the type of protoplasm that makes up the cell nucleus, the most prominent organelle of the eukaryotic cell. It is enclosed by the nuclear envelope, also known as the nuclear membrane. The nucleoplasm resembles the cytoplasm of a eukaryotic cell in that it is a gel-like substance found within a membrane, although the nucleoplasm only fills out the space in the nucleus and has its own unique functions. 
The nucleoplasm suspends structures within the nucleus that are not membrane-bound and is responsible for maintaining the shape of the nucleus. The structures suspended in the nucleoplasm include chromosomes, various proteins, nuclear bodies, the nucleolus, nucleoporins, nucleotides, and nuclear speckles. The soluble, liquid portion of the nucleoplasm is called the karyolymph nucleosol, or nuclear hyaloplasm. History The existence of the nucleus, including the nucleoplasm, was first documented as early as 1682 by the Dutch microscopist Leeuwenhoek and was later described and drawn by Franz Bauer. However, the cell nucleus was not named and described in detail until Robert Brown's presentation to the Linnean Society in 1831. The nucleoplasm, while described by Bauer and Brown, was not specifically isolated as a separate entity until its naming in 1882 by Polish-German scientist Eduard Strasburger, one of the most famous botanists of the 19th century, and the first person to discover mitosis in plants. Role Many important cell functions take place in the nucleus, more specifically in the nucleoplasm. The main function of the nucleoplasm is to provide the proper environment for essential processes that take place in the nucleus, serving as the suspension substance for all organelles inside the nucleus, and storing the structures that are used in these processes. 34% of proteins encoded in the human genome are ones that localize to the nucleoplasm. These proteins take part in RNA transcription and gene regulation in the n Document 3::: The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'. 
Cells can acquire specified function and carry out various tasks within the cell such as replication, DNA repair, protein synthesis, and motility. Cells are capable of specialization and mobility within the cell. Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms. The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology. Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago. Discovery With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i Document 4::: Cellular compartments in cell biology comprise all of the closed parts within the cytosol of a eukaryotic cell, usually surrounded by a single or double lipid layer membrane. These compartments are often, but not always, defined as membrane-bound organelles. 
The formation of cellular compartments is called compartmentalization. Both organelles, the mitochondria and chloroplasts (in photosynthetic organisms), are compartments that are believed to be of endosymbiotic origin. Other compartments such as peroxisomes, lysosomes, the endoplasmic reticulum, the cell nucleus or the Golgi apparatus are not of endosymbiotic origin. Smaller elements like vesicles, and sometimes even microtubules, can also be counted as compartments. It was thought that compartmentalization is not found in prokaryotic cells, but the discovery of carboxysomes and many other metabolosomes revealed that prokaryotic cells are capable of making compartmentalized structures, albeit these are in most cases not surrounded by a lipid bilayer but are built purely of protein. Types In general there are 4 main cellular compartments; they are: The nuclear compartment comprising the nucleus The intercisternal space which comprises the space between the membranes of the endoplasmic reticulum (which is continuous with the nuclear envelope) Organelles (the mitochondrion in all eukaryotes and the plastid in phototrophic eukaryotes) The cytosol Function Compartments have three main roles. One is to establish physical boundaries for biological processes that enable the cell to carry out different metabolic activities at the same time. This may include keeping certain biomolecules within a region, or keeping other molecules outside. Within the membrane-bound compartments, different intracellular pH, different enzyme systems, and other differences are isolated from other organelles and cytosol. With mitochondria, the cytosol has an oxidizing environment which converts NADH to NAD+. With these cases, the The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which kinds of cells have nuclei and other membrane bound organelles? A. prokaryotes B. monocytes C. eukaryotes D. lipids Answer:
sciq-5276
multiple_choice
The number of moles of solute in 1 kg of solvent is defined as what?
[ "singularity", "pollenation", "molarity", "kilocalorie" ]
C
Relevant Documents: Document 0::: Molar concentration (also called molarity, amount concentration or substance concentration) is a measure of the concentration of a chemical species, in particular, of a solute in a solution, in terms of amount of substance per unit volume of solution. In chemistry, the most commonly used unit for molarity is the number of moles per liter, having the unit symbol mol/L or mol/dm3 in SI units. A solution with a concentration of 1 mol/L is said to be 1 molar, commonly designated as 1 M. Molarity is often depicted with square brackets around the substance of interest; for example, the molarity of the hydrogen ion is depicted as [H+]. Definition Molar concentration or molarity is most commonly expressed in units of moles of solute per litre of solution. For use in broader applications, it is defined as amount of substance of solute per unit volume of solution, or per unit volume available to the species, represented by lowercase c: c = n/V = N/(N_A·V). Here, n is the amount of the solute in moles, N is the number of constituent particles present in volume V (in litres) of the solution, and N_A is the Avogadro constant, since 2019 defined as exactly 6.02214076 × 10^23 mol^−1. The ratio N/V is the number density. In thermodynamics the use of molar concentration is often not convenient because the volume of most solutions slightly depends on temperature due to thermal expansion. This problem is usually resolved by introducing temperature correction factors, or by using a temperature-independent measure of concentration such as molality. The reciprocal quantity 1/c represents the dilution (volume) which can appear in Ostwald's law of dilution. Formality or analytical concentration If a molecular entity dissociates in solution, the concentration refers to the original chemical formula in solution, and the molar concentration is sometimes called formal concentration or formality (FA) or analytical concentration (cA).
For example, if a sodium carbonate solution () has a formal concentration of c() = 1 mol/L, the molar concentra Document 1::: Osmotic concentration, formerly known as osmolarity, is the measure of solute concentration, defined as the number of osmoles (Osm) of solute per litre (L) of solution (osmol/L or Osm/L). The osmolarity of a solution is usually expressed as Osm/L (pronounced "osmolar"), in the same way that the molarity of a solution is expressed as "M" (pronounced "molar"). Whereas molarity measures the number of moles of solute per unit volume of solution, osmolarity measures the number of osmoles of solute particles per unit volume of solution. This value allows the measurement of the osmotic pressure of a solution and the determination of how the solvent will diffuse across a semipermeable membrane (osmosis) separating two solutions of different osmotic concentration. Unit The unit of osmotic concentration is the osmole. This is a non-SI unit of measurement that defines the number of moles of solute that contribute to the osmotic pressure of a solution. A milliosmole (mOsm) is 1/1,000 of an osmole. A microosmole (μOsm) (also spelled micro-osmole) is 1/1,000,000 of an osmole. Types of solutes Osmolarity is distinct from molarity because it measures osmoles of solute particles rather than moles of solute. The distinction arises because some compounds can dissociate in solution, whereas others cannot. Ionic compounds, such as salts, can dissociate in solution into their constituent ions, so there is not a one-to-one relationship between the molarity and the osmolarity of a solution. For example, sodium chloride (NaCl) dissociates into Na+ and Cl− ions. Thus, for every 1 mole of NaCl in solution, there are 2 osmoles of solute particles (i.e., a 1 mol/L NaCl solution is a 2 osmol/L NaCl solution). Both sodium and chloride ions affect the osmotic pressure of the solution. 
Document 2::: In chemistry, molality is a measure of the amount of solute in a solution relative to a given mass of solvent. This contrasts with the definition of molarity which is based on a given volume of solution. A commonly used unit for molality is the moles per kilogram (mol/kg). A solution of concentration 1 mol/kg is also sometimes denoted as 1 molal. The unit mol/kg requires that molar mass be expressed in kg/mol, instead of the usual g/mol or kg/kmol. Definition The molality (b) of a solution is defined as the amount of substance (in moles) of solute, nsolute, divided by the mass (in kg) of the solvent, msolvent: b = nsolute/msolvent. In the case of solutions with more than one solvent, molality can be defined for the mixed solvent considered as a pure pseudo-solvent. Instead of mole solute per kilogram solvent as in the binary case, units are defined as mole solute per kilogram mixed solvent. Origin The term molality is formed in analogy to molarity which is the molar concentration of a solution. The earliest known use of the intensive property molality and of its adjectival unit, the now-deprecated molal, appears to have been published by G. N. Lewis and M. Randall in the 1923 publication of Thermodynamics and the Free Energies of Chemical Substances. Though the two terms are subject to being confused with one another, the molality and molarity of a dilute aqueous solution are nearly the same, as one kilogram of water (solvent) occupies the volume of 1 liter at room temperature and a small amount of solute has little effect on the volume. Unit The SI unit for molality is moles per kilogram of solvent. A solution with a molality of 3 mol/kg is often described as "3 molal" or "3 m". However, following the SI system of units, the National Institute of Standards and Technology, the United States authority on measurement, considers the term "molal" and the unit symbol "m" to be obsolete, and suggests mol/kg or a related unit of the SI. 
Usage considerations Advantages The pri Document 3::: In chemistry, the mole map is a graphical representation of an algorithm that compares molar mass, number of particles per mole, and factors from balanced equations or other formulae. Stoichiometry Document 4::: The kauri-butanol value ("Kb value") is an international, standardized measure of solvent power for a hydrocarbon solvent, and is governed by an ASTM standardized test, ASTM D1133. The result of this test is a scaleless index, usually referred to as the "Kb value". A higher Kb value means the solvent is more aggressive or active in the ability to dissolve certain materials. Mild solvents have low scores in the tens and twenties; powerful solvents like chlorinated solvents and naphthenic aromatic solvents (i.e. "High Sol 10", "High Sol 15") have ratings that are in the low hundreds. In terms of the test itself, the kauri-butanol value (Kb) of a chemical shows the maximum amount of the hydrocarbon that can be added to a solution of kauri resin (a thick, gum-like material) in butanol (butyl alcohol) without causing cloudiness. Since kauri resin is readily soluble in butyl alcohol but not in most hydrocarbon solvents, the resin solution will tolerate only a certain amount of dilution. "Stronger" solvents such as benzene can be added in a greater amount (and thus have a higher Kb value) than "weaker" solvents like mineral spirits. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The number of moles of solute in 1 kg of solvent is defined as what? A. singularity B. pollination C. molarity D. kilocalorie Answer:
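The concentration definitions quoted in the passages above reduce to simple arithmetic: molarity c = n/V, molality b = nsolute/msolvent, and osmolarity is the molarity multiplied by the number of particles each formula unit dissociates into. A minimal sketch of that arithmetic (the function names are ours, purely illustrative, not from any of the quoted sources):

```python
def molarity(moles_solute, litres_solution):
    # Molar concentration c = n / V, in mol/L
    return moles_solute / litres_solution

def molality(moles_solute, kg_solvent):
    # Molality b = n_solute / m_solvent, in mol/kg
    return moles_solute / kg_solvent

def osmolarity(molar_conc, particles_per_unit):
    # Osmoles per litre: each dissolved formula unit contributes
    # as many osmoles as the particles it dissociates into
    return molar_conc * particles_per_unit

# 0.5 mol NaCl dissolved to make 1 L of solution:
c = molarity(0.5, 1.0)    # 0.5 mol/L
osm = osmolarity(c, 2)    # NaCl -> Na+ + Cl-, so 1.0 osmol/L
b = molality(0.5, 1.0)    # ~0.5 mol/kg if the solvent is ~1 kg of water
```

This also makes the near-equality noted for dilute aqueous solutions concrete: with roughly 1 kg of water occupying roughly 1 L, molarity and molality come out numerically close.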
sciq-281
multiple_choice
What do we call the energy-rich product of photosynthesis?
[ "chloride", "insulin", "sugar", "glucose" ]
D
Relevant Documents: Document 0::: The photosynthetic efficiency is the fraction of light energy converted into chemical energy during photosynthesis in green plants and algae. Photosynthesis can be described by the simplified chemical reaction 6 H2O + 6 CO2 + energy → C6H12O6 + 6 O2 where C6H12O6 is glucose (which is subsequently transformed into other sugars, starches, cellulose, lignin, and so forth). The value of the photosynthetic efficiency is dependent on how light energy is defined – it depends on whether we count only the light that is absorbed, and on what kind of light is used (see Photosynthetically active radiation). It takes eight (or perhaps ten or more) photons to use one molecule of CO2. The Gibbs free energy for converting a mole of CO2 to glucose is 114 kcal, whereas eight moles of photons of wavelength 600 nm contain 381 kcal, giving a nominal efficiency of 30%. However, photosynthesis can occur with light up to wavelength 720 nm so long as there is also light at wavelengths below 680 nm to keep Photosystem II operating (see Chlorophyll). Using longer wavelengths means less light energy is needed for the same number of photons and therefore for the same amount of photosynthesis. For actual sunlight, where only 45% of the light is in the photosynthetically active wavelength range, the theoretical maximum efficiency of solar energy conversion is approximately 11%. In actuality, however, plants do not absorb all incoming sunlight (due to reflection, respiration requirements of photosynthesis and the need for optimal solar radiation levels) and do not convert all harvested energy into biomass, which results in a maximum overall photosynthetic efficiency of 3 to 6% of total solar radiation. If photosynthesis is inefficient, excess light energy must be dissipated to avoid damaging the photosynthetic apparatus. Energy can be dissipated as heat (non-photochemical quenching), or emitted as chlorophyll fluorescence. 
Typical efficiencies Plants Quoted values sunlight-to-biomass efficien Document 1::: The evolution of photosynthesis refers to the origin and subsequent evolution of photosynthesis, the process by which light energy is used to assemble sugars from carbon dioxide and a hydrogen and electron source such as water. The process of photosynthesis was discovered by Jan Ingenhousz, a Dutch-born British physician and scientist, first publishing about it in 1779. The first photosynthetic organisms probably evolved early in the evolutionary history of life and most likely used reducing agents such as hydrogen rather than water. There are three major metabolic pathways by which photosynthesis is carried out: C3 photosynthesis, C4 photosynthesis, and CAM photosynthesis. C3 photosynthesis is the oldest and most common form. A C3 plant uses the Calvin cycle for the initial steps that incorporate CO2 into organic material. A C4 plant prefaces the Calvin cycle with reactions that incorporate CO2 into four-carbon compounds. A CAM plant uses crassulacean acid metabolism, an adaptation for photosynthesis in arid conditions. C4 and CAM plants have special adaptations that save water. Origin Available evidence from geobiological studies of Archean (>2500 Ma) sedimentary rocks indicates that life existed 3500 Ma. Fossils of what are thought to be filamentous photosynthetic organisms have been dated at 3.4 billion years old, consistent with recent studies of photosynthesis. Early photosynthetic systems, such as those from green and purple sulfur and green and purple nonsulfur bacteria, are thought to have been anoxygenic, using various molecules as electron donors. Green and purple sulfur bacteria are thought to have used hydrogen and hydrogen sulfide as electron and hydrogen donors. Green nonsulfur bacteria used various amino and other organic acids. Purple nonsulfur bacteria used a variety of nonspecific organic and inorganic molecules. 
It is suggested that photosynthesis likely originated at low-wavelength geothermal light from acidic hydrothermal vents, Zn-tetrapyrroles w Document 2::: The Bionic Leaf is a biomimetic system that gathers solar energy via photovoltaic cells that can be stored or used in a number of different functions. Bionic leaves can be composed of both synthetic (metals, ceramics, polymers, etc.) and organic materials (bacteria), or solely made of synthetic materials. The Bionic Leaf has the potential to be implemented in communities, such as urbanized areas to provide clean air as well as providing needed clean energy. History In 2009 at MIT, Daniel Nocera's lab first developed the "artificial leaf", a device made from silicon and an anode electrocatalyst for the oxidation of water, capable of splitting water into hydrogen and oxygen gases. In 2012, Nocera came to Harvard and The Silver Lab of Harvard Medical School joined Nocera’s team. Together the teams expanded the existing technology to create the Bionic Leaf. It merged the concept of the artificial leaf with genetically engineered bacteria that feed on the hydrogen and convert CO2 in the air into alcohol fuels or chemicals. The first version of the teams Bionic Leaf was created in 2015 but the catalyst used was harmful to the bacteria. In 2016, a new catalyst was designed to solve this issue, named the "Bionic Leaf 2.0". Other versions of artificial leaves have been developed by the California Institute of Technology and the Joint Center for Artificial Photosynthesis, the University of Waterloo, and the University of Cambridge. Mechanics Photosynthesis In natural photosynthesis, photosynthetic organisms produce energy-rich organic molecules from water and carbon dioxide by using solar radiation. Therefore, the process of photosynthesis removes carbon dioxide, a greenhouse gas, from the air. Artificial photosynthesis, as performed by the Bionic Leaf, is approximately 10 times more efficient than natural photosynthesis. 
Using a catalyst, the Bionic Leaf can remove excess carbon dioxide in the air and convert that to useful alcohol fuels, like isopropanol and isobutan Document 3::: In chemistry and particularly biochemistry, an energy-rich species (usually energy-rich molecule) or high-energy species (usually high-energy molecule) is a chemical species which reacts, potentially with other species found in the environment, to release chemical energy. In particular, the term is often used for: adenosine triphosphate (ATP) and similar molecules called high-energy phosphates, which release inorganic phosphate into the environment in an exothermic reaction with water: ATP + H2O → ADP + Pi ΔG°' = −30.5 kJ/mol (−7.3 kcal/mol) fuels such as hydrocarbons, carbohydrates, lipids, proteins, and other organic molecules which react with oxygen in the environment to ultimately form carbon dioxide, water, and sometimes nitrogen, sulfates, and phosphates molecular hydrogen monatomic oxygen, ozone, hydrogen peroxide, singlet oxygen and other metastable or unstable species which spontaneously react without further reactants in particular, the vast majority of free radicals explosives such as nitroglycerin and other substances which react exothermically without requiring a second reactant metals or metal ions which can be oxidized to release energy This is contrasted to species that are either part of the environment (this sometimes includes diatomic triplet oxygen) or do not react with the environment (such as many metal oxides or calcium carbonate); those species are not considered energy-rich or high-energy species. Alternative definitions The term is often used without a definition. Some authors define the term "high-energy" to be equivalent to "chemically unstable", while others reserve the term for high-energy phosphates, such as the Great Soviet Encyclopedia which defines the term "high-energy compounds" to refer exclusively to those. 
The IUPAC glossary of terms used in ecotoxicology defines a primary producer as an "organism capable of using the energy derived from light or a chemical substance in order to manufacture energy-rich organic compou Document 4::: Lignocellulose refers to plant dry matter (biomass), so-called lignocellulosic biomass. It is the most abundantly available raw material on the Earth for the production of biofuels. It is composed of two kinds of carbohydrate polymers, cellulose and hemicellulose, and an aromatic-rich polymer called lignin. Any biomass rich in cellulose, hemicelluloses, and lignin is commonly referred to as lignocellulosic biomass. Each component has a distinct chemical behavior. Being a composite of three very different components makes the processing of lignocellulose challenging. The evolved resistance to degradation or even separation is referred to as recalcitrance. Overcoming this recalcitrance to produce useful, high value products requires a combination of heat, chemicals, enzymes, and microorganisms. These carbohydrate-containing polymers contain different sugar monomers (six and five carbon sugars) and they are covalently bound to lignin. Lignocellulosic biomass can be broadly classified as virgin biomass, waste biomass, and energy crops. Virgin biomass includes plants. Waste biomass is produced as a low value byproduct of various industrial sectors such as agriculture (corn stover, sugarcane bagasse, straw etc.) and forestry (saw mill and paper mill discards). Energy crops are crops with a high yield of lignocellulosic biomass produced as a raw material for the production of second-generation biofuel; examples include switchgrass (Panicum virgatum) and Elephant grass. The biofuels generated from these energy crops are sources of sustainable energy. Chemical composition Lignocellulose consists of three components, each with properties that pose challenges to commercial applications. 
lignin is a heterogeneous, highly crosslinked polymer akin to phenol-formaldehyde resins. It is derived from 3-4 monomers, the ratio of which varies from species to species. The crosslinking is extensive. Being rich in aromatics, lignin is hydrophobic and relatively rigid. Lignin confe The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do we call the energy-rich product of photosynthesis? A. chloride B. insulin C. sugar D. glucose Answer:
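The 30% nominal-efficiency figure in the photosynthetic-efficiency passage above can be reproduced from first principles: a mole of photons at wavelength λ carries N_A·hc/λ of energy (about 48 kcal at 600 nm), eight moles carry about 381 kcal, and 114 kcal of Gibbs free energy is stored per CO2 fixed into glucose. A quick numeric check (illustrative sketch; the function name is ours):

```python
H = 6.626e-34        # Planck constant, J*s
C_LIGHT = 2.998e8    # speed of light, m/s
N_A = 6.022e23       # Avogadro constant, 1/mol
J_PER_KCAL = 4184.0  # joules per thermochemical kilocalorie

def mole_photon_energy_kcal(wavelength_nm):
    # Energy of one mole of photons at the given wavelength, in kcal
    e_photon = H * C_LIGHT / (wavelength_nm * 1e-9)  # joules per photon
    return e_photon * N_A / J_PER_KCAL

light_in = 8 * mole_photon_energy_kcal(600)  # ~381 kcal per mole of CO2 fixed
efficiency = 114.0 / light_in                # ~0.30 nominal efficiency
```

The same function also shows why longer wavelengths raise the nominal figure: at 700 nm each mole of photons carries less energy, so the 114 kcal stored is a larger fraction of the light supplied.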
sciq-1865
multiple_choice
Photoautotrophs and chemoautotrophs are two basic types of what?
[ "plants", "autotrophs", "decomposers", "consumers" ]
B
Relevant Documents: Document 0::: The soil food web is the community of organisms living all or part of their lives in the soil. It describes a complex living system in the soil and how it interacts with the environment, plants, and animals. Food webs describe the transfer of energy between species in an ecosystem. While a food chain examines one, linear, energy pathway through an ecosystem, a food web is more complex and illustrates all of the potential pathways. Much of this transferred energy comes from the sun. Plants use the sun’s energy to convert inorganic compounds into energy-rich, organic compounds, turning carbon dioxide and minerals into plant material by photosynthesis. Plant flowers exude energy-rich nectar above ground and plant roots exude acids, sugars, and ectoenzymes into the rhizosphere, adjusting the pH and feeding the food web underground. Plants are called autotrophs because they make their own energy; they are also called producers because they produce energy available for other organisms to eat. Heterotrophs are consumers that cannot make their own food. In order to obtain energy they eat plants or other heterotrophs. Above ground food webs In above ground food webs, energy moves from producers (plants) to primary consumers (herbivores) and then to secondary consumers (predators). The phrase trophic level refers to the different levels or steps in the energy pathway. In other words, the producers, consumers, and decomposers are the main trophic levels. This chain of energy transferring from one species to another can continue several more times, but eventually ends. At the end of the food chain, decomposers such as bacteria and fungi break down dead plant and animal material into simple nutrients. Methodology The nature of soil makes direct observation of food webs difficult. Since soil organisms range in size from less than 0.1 mm (nematodes) to greater than 2 mm (earthworms) there are many different ways to extract them. 
Soil samples are often taken using a metal Document 1::: Consumer–resource interactions are the core motif of ecological food chains or food webs, and are an umbrella term for a variety of more specialized types of biological species interactions including prey-predator (see predation), host-parasite (see parasitism), plant-herbivore and victim-exploiter systems. These kinds of interactions have been studied and modeled by population ecologists for nearly a century. Species at the bottom of the food chain, such as algae and other autotrophs, consume non-biological resources, such as minerals and nutrients of various kinds, and they derive their energy from light (photons) or chemical sources. Species higher up in the food chain survive by consuming other species and can be classified by what they eat and how they obtain or find their food. Classification of consumer types The standard categorization Various terms have arisen to define consumers by what they eat, such as meat-eating carnivores, fish-eating piscivores, insect-eating insectivores, plant-eating herbivores, seed-eating granivores, and fruit-eating frugivores; omnivores are meat eaters and plant eaters. An extensive classification of consumer categories based on a list of feeding behaviors exists. The Getz categorization Another way of categorizing consumers, proposed by South African American ecologist Wayne Getz, is based on a biomass transformation web (BTW) formulation that organizes resources into five components: live and dead animal, live and dead plant, and particulate (i.e. broken down plant and animal) matter. It also distinguishes between consumers that gather their resources by moving across landscapes from those that mine their resources by becoming sessile once they have located a stock of resources large enough for them to feed on during completion of a full life history stage. In Getz's scheme, words for miners are of Greek etymology and words for gatherers are of Latin etymology. 
Thus a bestivore, such as a cat, preys on live animal Document 2::: Plants are the eukaryotes that form the kingdom Plantae; they are predominantly photosynthetic. This means that they obtain their energy from sunlight, using chloroplasts derived from endosymbiosis with cyanobacteria to produce sugars from carbon dioxide and water, using the green pigment chlorophyll. Exceptions are parasitic plants that have lost the genes for chlorophyll and photosynthesis, and obtain their energy from other plants or fungi. Historically, as in Aristotle's biology, the plant kingdom encompassed all living things that were not animals, and included algae and fungi. Definitions have narrowed since then; current definitions exclude the fungi and some of the algae. By the definition used in this article, plants form the clade Viridiplantae (green plants), which consists of the green algae and the embryophytes or land plants (hornworts, liverworts, mosses, lycophytes, ferns, conifers and other gymnosperms, and flowering plants). A definition based on genomes includes the Viridiplantae, along with the red algae and the glaucophytes, in the clade Archaeplastida. There are about 380,000 known species of plants, of which the majority, some 260,000, produce seeds. They range in size from single cells to the tallest trees. Green plants provide a substantial proportion of the world's molecular oxygen; the sugars they create supply the energy for most of Earth's ecosystems; other organisms, including animals, either consume plants directly or rely on organisms which do so. Grain, fruit, and vegetables are basic human foods and have been domesticated for millennia. People use plants for many purposes, such as building materials, ornaments, writing materials, and, in great variety, for medicines. The scientific study of plants is known as botany, a branch of biology. Definition Taxonomic history All living things were traditionally placed into one of two groups, plants and animals. 
This classification dates from Aristotle (384–322 BC), who distinguished d Document 3::: Macroflora is a term used for all the plants occurring in a particular area that are large enough to be seen with the naked eye. It is usually synonymous with the Flora and can be contrasted with the microflora, a term used for all the bacteria and other microorganisms in an ecosystem. Macroflora is also an informal term used by many palaeobotanists to refer to an assemblage of plant fossils as preserved in the rock. This is in contrast to the flora, which in this context refers to the assemblage of living plants that were growing in a particular area, whose fragmentary remains became entrapped within the sediment from which the rock was formed and thus became the macroflora. Document 4::: The trophic level of an organism is the position it occupies in a food web. A food chain is a succession of organisms that eat other organisms and may, in turn, be eaten themselves. The trophic level of an organism is the number of steps it is from the start of the chain. A food web starts at trophic level 1 with primary producers such as plants, can move to herbivores at level 2, carnivores at level 3 or higher, and typically finish with apex predators at level 4 or 5. The path along the chain can form either a one-way flow or a food "web". Ecological communities with higher biodiversity form more complex trophic paths. The word trophic derives from the Greek τροφή (trophē) referring to food or nourishment. History The concept of trophic level was developed by Raymond Lindeman (1942), based on the terminology of August Thienemann (1926): "producers", "consumers", and "reducers" (modified to "decomposers" by Lindeman). Overview The three basic ways in which organisms get food are as producers, consumers, and decomposers. Producers (autotrophs) are typically plants or algae. 
Plants and algae do not usually eat other organisms, but pull nutrients from the soil or the ocean and manufacture their own food using photosynthesis. For this reason, they are called primary producers. In this way, it is energy from the sun that usually powers the base of the food chain. An exception occurs in deep-sea hydrothermal ecosystems, where there is no sunlight. Here primary producers manufacture food through a process called chemosynthesis. Consumers (heterotrophs) are species that cannot manufacture their own food and need to consume other organisms. Animals that eat primary producers (like plants) are called herbivores. Animals that eat other animals are called carnivores, and animals that eat both plants and other animals are called omnivores. Decomposers (detritivores) break down dead plant and animal material and wastes and release it again as energy and nutrients into The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Photoautotrophs and chemoautotrophs are two basic types of what? A. plants B. autotrophs C. decomposers D. consumers Answer:
sciq-6357
multiple_choice
What senses the movement of liquid in ear canals?
[ "Ear Drum", "muscle cells", "hair cells", "Brain Cells" ]
C
Relevant Documents: Document 0::: Earwax, also known by the medical term cerumen, is a waxy substance secreted in the ear canal of humans and other mammals. Earwax can be many colors, including brown, orange, red, yellowish, and gray. Earwax protects the skin of the human ear canal, assists in cleaning and lubrication, and provides protection against bacteria, fungi, particulate matter, and water. Major components of earwax include cerumen, produced by a type of modified sweat gland, and sebum, an oily substance. Both components are made by glands located in the outer ear canal. The chemical composition of earwax includes long chain fatty acids, both saturated and unsaturated, alcohols, squalene, and cholesterol. Earwax also contains dead skin cells and hair. Excess or compacted cerumen is the buildup of ear wax causing a blockage in the ear canal and it can press against the eardrum or block the outside ear canal or hearing aids, potentially causing hearing loss. Physiology Cerumen is produced in the cartilaginous outer third portion of the ear canal. It is a mixture of secretions from sebaceous glands and less-viscous ones from modified apocrine sweat glands. The primary components of both wet and dry earwax are shed layers of skin, with, on average, 60% of the earwax consisting of keratin, 12–20% saturated and unsaturated long-chain fatty acids, alcohols, squalene and 6–9% cholesterol. Wet or dry There are two genetically-determined types of earwax: the wet type, which is dominant, and the dry type, which is recessive. This distinction is caused by a single base change in the "ATP-binding cassette C11 gene". Dry-type individuals are homozygous for adenine (AA) whereas wet-type requires at least one guanine (AG or GG). Dry earwax is gray or tan and brittle, and is about 20% lipid. It has a smaller concentration of lipid and pigment granules than wet earwax. Wet earwax is light brown or dark brown and has a viscous and sticky consistency, and is about 50% lipid. 
Wet-type earwax is associated Document 1::: The stria vascularis of the cochlear duct is a capillary loop in the upper portion of the spiral ligament (the outer wall of the cochlear duct). It produces endolymph for the scala media in the cochlea. Structure The stria vascularis is part of the lateral wall of the cochlear duct. It is a somewhat stratified epithelium containing primarily three cell types: marginal cells, which are involved in K+ transport, and line the endolymphatic space of the scala media. intermediate cells, which are pigment-containing cells scattered among capillaries. basal cells, which separate the stria vascularis from the underlying spiral ligament. They are connected to basal cells with gap junctions. The stria vascularis also contains pericytes, melanocytes, and endothelial cells. It also contains intraepithelial capillaries - it is the only epithelial tissue that is not avascular (completely lacking blood vessels and lymphatic vessels). Function The stria vascularis produces endolymph for the scala media, one of the three fluid-filled compartments of the cochlea. This maintains the ion balance of the endolymph that surround inner hair cells and outer hair cells of the organ of Corti. It secretes lots of K+, and may also secrete H+. Document 2::: The torus semicircularis is a region of the vertebrate midbrain that contributes to auditory perception, studied most often in fish and amphibians. Neurons from the medulla project to the nucleus centralis and the nucleus ventrolateralis in the torus semicircularis, providing afferent auditory and hydrodynamic information. Research suggests that these nuclei interact with each other, suggesting that this area of the brain is bimodally sensitive. In the Gymnotiform fish, which are weakly electric fish, the torus semicircularis was observed to exhibit laminar organization. It receives afferent input, specifically electrosensory, mechanical, and auditory stimuli. 
In frogs, researchers have studied how neurons in the torus semicircularis prefer certain characteristics of sound differentially. Single neurons fire selectively based on the auditory parameters of a stimulus. Functionally, this can allow members of a species to distinguish whether a call is of the same (conspecific) or a different species. This has been observed to play a role in mate selection. In the Tungara frog, which produces a species-specific mating call, scientists studied responses in the laminar nucleus of the torus semicircularis to various parts of the call. They came to the conclusion that this part of the brain acts as a feature detector (a neuron/neurons that respond to a certain feature of a stimulus) for the parts of the auditory stimulus that are conspecific. From an evolutionary standpoint, research has been conducted in turtles to connect the distribution of calcium-binding proteins in the torus semicircularis among birds and mammals to a common reptile predecessor. Document 3::: Ceruminous glands are specialized sudoriferous glands (sweat glands) located subcutaneously in the external auditory canal, in the outer third. Ceruminous glands are simple, coiled, tubular glands made up of an inner secretory layer of cells and an outer myoepithelial layer of cells. They are classed as apocrine glands. The glands drain into larger ducts, which then drain into the guard hairs that reside in the external auditory canal. Here they produce cerumen, or earwax, by mixing their secretion with sebum and dead epidermal cells. Cerumen keeps the eardrum pliable, lubricates and cleans the external auditory canal, waterproofs the canal, kills bacteria, and serves as a barrier to trap foreign particles (dust, fungal spores, etc.) by coating the guard hairs of the ear, making them sticky. These glands are capable of developing both benign and malignant tumors. 
The benign tumors include ceruminous adenoma, ceruminous pleomorphic adenoma, and ceruminous syringocystadenoma papilliferum. The malignant tumors include ceruminous adenocarcinoma, adenoid cystic carcinoma, and mucoepidermoid carcinoma. See also List of specialized glands within the human integumentary system List of distinct cell types in the adult human body Document 4::: The ampullary cupula, or cupula, is a structure in the vestibular system, providing the sense of spatial orientation. The cupula is located within the ampullae of each of the three semicircular canals. Part of the crista ampullaris, the cupula has embedded within it hair cells that have several stereocilia associated with each kinocilium. The cupula itself is the gelatinous component of the crista ampullaris that extends from the crista to the roof of the ampullae. When the head rotates, the endolymph filling the semicircular ducts initially lags behind due to inertia. As a result, the cupula is deflected opposite the direction of head movement. As the endolymph pushes the cupula, the stereocilia are bent as well, stimulating the hair cells within the crista ampullaris. After a short time of continual rotation, however, the endolymph's acceleration normalizes with the rate of rotation of the semicircular ducts. As a result, the cupula returns to its resting position and the hair cells cease to be stimulated. This continues until the head stops rotating, which simultaneously halts semicircular duct rotation. Due to inertia, however, the endolymph continues on. As the endolymph continues to move, the cupula is once again deflected, resulting in the compensatory movements of the body when spun. Only in the first situation, as fluid rushes by the cupula, do the stimulated hair cells transmit the corresponding signal to the brain through the vestibulocochlear nerve (CN VIII). In the second one, there is no stimulation as the kinocilium can only be bent in one direction. 
In their natural orientation within the head, the cupulae are located on the medial aspect of the semicircular canals. In this orientation, the kinocilia rest on the posterior aspect of the cupula. Effects of alcohol The Buoyancy Hypothesis posits that alcohol causes vertigo by affecting the neutral buoyancy of the cupula within the surrounding fluid called the endolymph. Linear accelerations (such as that The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What senses the movement of liquid in ear canals? A. Ear Drum B. muscle cells C. hair cells D. Brain Cells Answer:
sciq-2850
multiple_choice
How many underlying principles does the science of biology have?
[ "three", "four", "five", "eleven" ]
B
Relevant Documents: Document 0::: Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices". This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown by its exam score distributions. Topic outline The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area: The course is based on and tests six skills, called scientific practices which include: In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions. Exam Students are allowed to use a four-function, scientific, or graphing calculator. The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score. 
Score distribution Commonly used textbooks Biology, AP Edition by Sylvia Mader (2012, hardcover ) Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, ) Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson ) See also Glossary of biology A.P Bio (TV Show) Document 1::: This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines. Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry), others on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mind – neuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, it has provided information on certain diseases which has overall aided in the understanding of human health.
Basic life science branches Biology – scientific study of life Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans Astrobiology – the study of the formation and presence of life in the universe Bacteriology – study of bacteria Biotechnology – study of combination of both the living organism and technology Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge Biolinguistics – the study of the biology and evolution of language. Biological anthropology – the study of humans, non-hum Document 2::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. 
Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a Document 3::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. 
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 4::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. 
Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. How many underlying principles does the science of biology have? A. three B. four C. five D. eleven Answer:
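The SAT Subject Test record above states a simple raw-score rule: one point per correct answer, a quarter-point deduction per incorrect answer, and nothing for blanks, out of 80 questions. A minimal sketch of that rule (the function name and example tallies are illustrative, not from the source):

```python
def sat_subject_raw_score(correct: int, incorrect: int, blank: int) -> float:
    """Raw score for the 80-question SAT Subject Test in Biology E/M:
    +1 per correct answer, -1/4 per incorrect answer, 0 for blanks."""
    assert correct + incorrect + blank == 80, "the test had 80 questions"
    return correct - 0.25 * incorrect

# e.g. 60 correct, 12 incorrect, 8 left blank: 60 - 0.25*12 = 57.0
print(sat_subject_raw_score(60, 12, 8))  # 57.0
```

The raw score was then scaled to the reported 200–800 range by a conversion table that varied by test edition, so no fixed formula for the scaled score is given here.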
sciq-8945
multiple_choice
Drip irrigation requires less water and reduces what?
[ "salinization", "bacteria", "moisture", "sediment" ]
A
Relevant Documents: Document 0::: Wet Processing Engineering is one of the major streams in Textile Engineering or Textile manufacturing which refers to the engineering of textile chemical processes and associated applied science. The other three streams in textile engineering are yarn engineering, fabric engineering, and apparel engineering. The processes of this stream are involved or carried out in an aqueous stage. Hence, it is called a wet process which usually covers pre-treatment, dyeing, printing, and finishing. The wet process is usually done in the manufactured assembly of interlacing fibers, filaments and yarns, having a substantial surface (planar) area in relation to its thickness, and adequate mechanical strength to give it a cohesive structure. In other words, the wet process is done on manufactured fiber, yarn and fabric. All of these stages require an aqueous medium which is created by water. A massive amount of water is required in these processes per day. It is estimated that, on average, almost 50–100 liters of water is used to process only 1 kilogram of textile goods, depending on the process engineering and applications. Water can be of various qualities and attributes. Not all water can be used in the textile processes; it must have some certain properties, quality, color and attributes of being used. This is the reason why water is a prime concern in wet processing engineering. Water Water consumption and discharge of wastewater are the two major concerns. The textile industry uses a large amount of water in its varied processes especially in wet operations such as pre-treatment, dyeing, and printing. Water is required as a solvent of various dyes and chemicals and it is used in washing or rinsing baths in different steps. Water consumption depends upon the application methods, processes, dyestuffs, equipment/machines and technology which may vary mill to mill and material composition.
Longer processing sequences, processing of extra dark colors and reprocessing lead Document 1::: Irrigation informatics is a newly emerging academic field that is a cross-disciplinary science using informatics to study the information flows and data management related to irrigation. The field is one of many new informatics sub-specialities that uses the science of information, the practice of information processing, and the engineering of information systems to advance a biophysical science or engineering field. Background Agricultural productivity increases are eagerly sought by governments and industry, spurred by the realisation that world food production must double in the 21st century to feed growing populations and that as irrigation makes up 36% of global food production, but that new land for irrigation growth is very limited, irrigation efficiency must increase. Since irrigation science is a mature and stable field, irrigation researchers are looking to cross-disciplinary science to bring about production gains and informatics is one such science along with others such as social science. Much of the driver for work in the area of irrigation informatics is the perceived success of other informatics fields such as health informatics. Current research Irrigation informatics is very much a part of the wider research into irrigation wherever information technology or data systems are used, however the term informatics is not always used to describe research involving computer systems and data management so that information science or information technology may alternatively be used. This leads to a great number of irrigation informatics articles not using the term irrigation informatics. There are currently no formal publications (journals) that focus on irrigation informatics with the publication most likely to present articles on the topic being Computers and electronics in Agriculture or one of the many irrigation science journals such as Irrigation Science. 
Recent work in the general area of irrigation informatics has mentioned the exact phrase "Ir Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. 
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 3::: Water-use efficiency (WUE) refers to the ratio of water used in plant metabolism to water lost by the plant through transpiration. Two types of water-use efficiency are referred to most frequently: photosynthetic water-use efficiency (also called instantaneous water-use efficiency), which is defined as the ratio of the rate of carbon assimilation (photosynthesis) to the rate of transpiration, and water-use efficiency of productivity (also called integrated water-use efficiency), which is typically defined as the ratio of biomass produced to the rate of transpiration. Increases in water-use efficiency are commonly cited as a response mechanism of plants to moderate to severe soil water deficits and have been the focus of many programs that seek to increase crop tolerance to drought. However, there is some question as to the benefit of increased water-use efficiency of plants in agricultural systems, as the processes of increased yield production and decreased water loss due to transpiration (that is, the main driver of increases in water-use efficiency) are fundamentally opposed. If there existed a situation where water deficit induced lower transpirational rates without simultaneously decreasing photosynthetic rates and biomass production, then water-use efficiency would be both greatly improved and the desired trait in crop production. Document 4::: Fair river sharing is a kind of a fair division problem in which the waters of a river has to be divided among countries located along the river. It differs from other fair division problems in that the resource to be divided—the water—flows in one direction—from upstream countries to downstream countries. 
To attain any desired division, it may be required to limit the consumption of upstream countries, but this may require to give these countries some monetary compensation. In addition to sharing river water, which is an economic good, it is often required to share river pollution (or the cost of cleaning it), which is an economic bad. River sharing in practice There are 148 rivers in the world flowing through two countries, 30 through three, nine through four and 13 through five or more. Some notable examples are: The Jordan river, whose sources run from upstream Lebanon and Syria to downstream Israel and Jordan. The attempts of Syria to divert the Jordan river, starting in 1965, are cited as one of the reasons for the Six-Day War. Later, in 1994, the Israel–Jordan peace treaty determined a sharing of the waters between Israel and Jordan, by which Jordan receives water per year. The Nile, running from upstream Ethiopia through Sudan to downstream Egypt. There is a long history of disputes over the Nile agreements of 1929 and 1959. The Ganges, running from upstream India to downstream Bangladesh. There was controversy over the operation of the Farakka Barrage. Between Mexico and the United States, there was controversy over the desalination facility in the Morelos Dam. The Mekong runs from China's Yunnan Province to Myanmar, Laos, Thailand, Cambodia, and Vietnam. In 1995, Laos, Thailand, Cambodia, and Vietnam established the Mekong River Commission to assist in the management and coordinated use of the Mekong's resources. In 1996 China and Myanmar became "dialogue partners" of the MRC and the six countries now work together within a cooperative framework. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Drip iirigation requires less water and reduces what? A. salinization B. bacteria C. moisture D. sediment Answer:
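The water-use efficiency passage in the record above defines two ratios: instantaneous WUE (carbon assimilation rate over transpiration rate) and integrated WUE (biomass produced over water transpired). Both are simple quotients; a minimal sketch, with function names and the example magnitudes chosen purely for illustration:

```python
def photosynthetic_wue(assimilation: float, transpiration: float) -> float:
    """Instantaneous (photosynthetic) WUE: carbon assimilation rate
    divided by transpiration rate, e.g. umol CO2 per mmol H2O."""
    return assimilation / transpiration

def integrated_wue(biomass: float, water_transpired: float) -> float:
    """Integrated WUE (of productivity): biomass produced
    divided by water transpired, e.g. g dry matter per kg H2O."""
    return biomass / water_transpired

print(photosynthetic_wue(10.0, 4.0))  # 2.5
print(integrated_wue(3.0, 1.5))       # 2.0
```

As the passage notes, a higher ratio can come either from more assimilation or from less transpiration, which is why increased WUE alone does not guarantee higher yield.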
sciq-9474
multiple_choice
Choosing to not smoke and using sunscreen are examples of lifestyle choices that help prevent forms of what disease?
[ "viral disease", "cancer", "bacterial disease", "pneumonia" ]
B
Relevant Documents: Document 0::: Preventive healthcare, or prophylaxis, is the application of healthcare measures to prevent diseases. Disease and disability are affected by environmental factors, genetic predisposition, disease agents, and lifestyle choices, and are dynamic processes that begin before individuals realize they are affected. Disease prevention relies on anticipatory actions that can be categorized as primal, primary, secondary, and tertiary prevention. Each year, millions of people die of preventable causes. A 2004 study showed that about half of all deaths in the United States in 2000 were due to preventable behaviors and exposures. Leading causes included cardiovascular disease, chronic respiratory disease, unintentional injuries, diabetes, and certain infectious diseases. This same study estimates that 400,000 people die each year in the United States due to poor diet and a sedentary lifestyle. According to estimates made by the World Health Organization (WHO), about 55 million people died worldwide in 2011, and two-thirds of these died from non-communicable diseases, including cancer, diabetes, and chronic cardiovascular and lung diseases. This is an increase from the year 2000, during which 60% of deaths were attributed to these diseases. Preventive healthcare is especially important given the worldwide rise in the prevalence of chronic diseases and deaths from these diseases. There are many methods for prevention of disease. One of them is prevention of teenage smoking through information giving. It is recommended that adults and children aim to visit their doctor for regular check-ups, even if they feel healthy, to perform disease screening, identify risk factors for disease, discuss tips for a healthy and balanced lifestyle, stay up to date with immunizations and boosters, and maintain a good relationship with a healthcare provider.
In pediatrics, some common examples of primary prevention are encouraging parents to turn down the temperature of their home water heater in o Document 1::: The scientific community in the United States and Europe are primarily concerned with the possible effect of electronic cigarette use on public health. There is concern among public health experts that e-cigarettes could renormalize smoking, weaken measures to control tobacco, and serve as a gateway for smoking among youth. The public health community is divided over whether to support e-cigarettes, because their safety and efficacy for quitting smoking is unclear. Many in the public health community acknowledge the potential for their quitting smoking and decreasing harm benefits, but there remains a concern over their long-term safety and potential for a new era of users to get addicted to nicotine and then tobacco. There is concern among tobacco control academics and advocates that prevalent universal vaping "will bring its own distinct but as yet unknown health risks in the same way tobacco smoking did, as a result of chronic exposure", among other things. Medical organizations differ in their views about the health implications of vaping and avoid releasing statements about the relative toxicity of electronic cigarettes because of the many different device types, liquid formulations, and new devices that come onto the market. Some healthcare groups and policy makers have hesitated to recommend e-cigarettes with nicotine for quitting smoking, despite some evidence of effectiveness (when compared to Nicotine Replacement Therapy or e-cigarettes without nicotine) and safety. 
Reasons for hesitancy include challenges ensuring that quality control measures on the devices and liquids are met, unknown second hand vapour inhalation effects, uncertainty about EC use leading to the initiation of smoking or effects on people new to smoking who develop nicotine dependence, unknown long-term effects of electronic cigarette use on human health, uncertainty about the effects of ECs on smoking regulations and smoke free legislation measures, and uncertainty about involvement of Document 2::: The prevention paradox describes the seemingly contradictory situation where the majority of cases of a disease come from a population at low or moderate risk of that disease, and only a minority of cases come from the high risk population (of the same disease). This is because the number of people at high risk is small. The prevention paradox was first formally described in 1981 by the epidemiologist Geoffrey Rose. Especially during the COVID-19 pandemic of 2020, the term "prevention paradox" was also used to describe the apparent paradox of people questioning steps to prevent the spread of the pandemic because the prophesied spread did not occur. This however is instead an example of a self-defeating prophecy or a preparedness paradox. Hypothetical case study For example, Rose describes the case of Down syndrome where maternal age is a risk factor. Yet, most cases of Down syndrome will be born to younger, low risk mothers (this is true at least in populations where most women have children at a younger age). This situation is paradoxical because it is common and logical to equate high-risk populations with making up the majority of the burden of disease. Another example could be seen in terms of reducing overall alcohol problems in a population. Although less serious, most alcohol problems are not found among dependent drinkers. 
Greater societal gain will be obtained by achieving a small reduction in alcohol misuse within a far larger group of "risky" drinkers with less serious problems than by trying to reduce problems among a smaller number of dependent drinkers. See also False positive paradox Preparedness paradox Notes and references External links "Sick individuals and sick populations", G. Rose, Int J Epidem 1985; vol. 14, no. 1: pp. 32-38. "Commentary: The prevention paradox in lay epidemiology—Rose revisited", Kate Hunt and Carol Emslie, Int J Epidem 2001; vol. 30, no. 3: pp. 442-446. The Prevention Paradox Applies to Alcohol Use and Problem Document 3::: Atherosclerosis is a pattern of the disease arteriosclerosis, characterized by development of abnormalities called lesions in walls of arteries. These lesions may lead to narrowing of the arteries' walls due to buildup of atheromatous plaques. At onset there are usually no symptoms, but if they develop, symptoms generally begin around middle age. In severe cases, it can result in coronary artery disease, stroke, peripheral artery disease, or kidney disorders, depending on which body parts(s) the affected arteries are located in the body. The exact cause of atherosclerosis is unknown and is proposed to be multifactorial. Risk factors include abnormal cholesterol levels, elevated levels of inflammatory biomarkers, high blood pressure, diabetes, smoking (both active and passive smoking), obesity, genetic factors, family history, lifestyle habits, and an unhealthy diet. Plaque is made up of fat, cholesterol, calcium, and other substances found in the blood. The narrowing of arteries limits the flow of oxygen-rich blood to parts of the body. Diagnosis is based upon a physical exam, electrocardiogram, and exercise stress test, among others. Prevention is generally by eating a healthy diet, exercising, not smoking, and maintaining a normal weight. 
Treatment of established disease may include medications to lower cholesterol such as statins, blood pressure medication, or medications that decrease clotting, such as aspirin. A number of procedures may also be carried out such as percutaneous coronary intervention, coronary artery bypass graft, or carotid endarterectomy. Atherosclerosis generally starts when a person is young and worsens with age. Almost all people are affected to some degree by the age of 65. It is the number one cause of death and disability in developed countries. Though it was first described in 1575, there is evidence that the condition occurred in people more than 5,000 years ago. Signs and symptoms Atherosclerosis is asymptomatic for decades because Document 4::: The health action process approach (HAPA) is a psychological theory of health behavior change, developed by Ralf Schwarzer, Professor of Psychology at the Freie University Berlin of Berlin, Germany and SWPS University of Social Sciences and Humanities, Wroclaw, Poland, first published in 1992. Health behavior change refers to a replacement of health-compromising behaviors (such as sedentary behavior) by health-enhancing behaviors (such as physical exercise). To describe, predict, and explain such processes, theories or models are being developed. Health behavioural change theories are designed to examine a set of psychological constructs that jointly aim at explaining what motivates people to change and how they take preventive action. HAPA is an open framework of various motivational and volitional constructs that are assumed to explain and predict individual changes in health behaviors such as quitting smoking or drinking, and improving physical activity levels, dental hygiene, seat belt use, breast self-examination, dietary behaviors, and avoiding drunk driving. 
HAPA suggests that the adoption, initiation, and maintenance of health behaviors should be conceived of as a structured process including a motivation phase and a volition phase. The former describes the intention formation while the latter refers to planning, and action (initiative, maintenance, recovery). The model emphasizes the particular role of perceived self-efficacy at different stages of health behavior change. Background Models that describe health behavior change can be distinguished in terms of the assumption whether they are continuum-based or stage-based. A continuum (mediator) model claims that change is a continuous process that leads from lack of motivation via action readiness either to successful change or final disengagement. Research on such mediator models are reflected by path diagrams that include distal and proximal predictors of the target behavior. On the other hand, the stag The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Choosing to not smoke and using sunscreen are examples of lifestyle choices that help prevent forms of what disease? A. viral disease B. cancer C. bacterial disease D. pnemonia Answer:
sciq-10487
multiple_choice
What are ingrowths on arthropod exoskeletons to which muscles attach?
[ "rods", "pores", "joints", "apodemes" ]
D
Relevant Documents: Document 0::: Myomeres are blocks of skeletal muscle tissue arranged in sequence, commonly found in aquatic chordates. Myomeres are separated from adjacent myomeres by connective fascia (myosepta) and most easily seen in larval fishes or in the olm. Myomere counts are sometimes used for identifying specimens, since their number corresponds to the number of vertebrae in the adults. Location varies, with some species containing these only near the tails, while some have them located near the scapular or pelvic girdles. Depending on the species, myomeres could be arranged in an epaxial or hypaxial manner. Hypaxial refers to ventral muscles and related structures while epaxial refers to more dorsal muscles. The horizontal septum divides these two regions in vertebrates from cyclostomes to gnathostomes. In terrestrial chordates, the myomeres become fused as well as indistinct, due to the disappearance of myosepta. Shape The shape of myomeres varies by species. Myomeres are commonly zig-zag, "V" (lancelets), "W" (fishes), or straight (tetrapods)–shaped muscle fibers. Generally, cyclostome myomeres are arranged in vertical strips while those of jawed fishes are folded in a complex manner due to swimming capability evolution. Specifically, myomeres of elasmobranchs and eels are "W"-shaped. Contrastingly, myomeres of tetrapods run vertically and do not display complex folding. Another species with simply-lain myomeres is the mudpuppy. Myomeres overlap each other in succession, meaning myomere activation also allows neighboring myomeres to activate. Myomeres are made up of myoglobin-rich dark muscle as well as white muscle. Dark muscle, generally, functions as slow-twitch muscle fibers while white muscle is composed of fast-twitch fibers. Function Specifically, three types of myomeres in fish-like chordates include amphioxine (lancelet), cyclostomine (jawless fish), and gnathostomine (jawed fish).
A common function shared by all of these is that they function to flex the body lateral Document 1::: An exoskeleton (from Greek éxō "outer" and skeletós "skeleton") is an external skeleton that both supports the body shape and protects the internal organs of an animal, in contrast to an internal endoskeleton (e.g. that of a human) which is enclosed under other soft tissues. Some large, hard protective exoskeletons are known as "shells". Examples of exoskeletons in animals include the arthropod exoskeleton shared by arthropods (insects, chelicerates, myriapods and crustaceans) and tardigrades, as well as the outer shell of certain sponges and the mollusc shell shared by snails, clams, tusk shells, chitons and nautilus. Some vertebrate animals, such as the turtle, have both an endoskeleton and a protective exoskeleton. Role Exoskeletons contain rigid and resistant components that fulfil a set of functional roles in many animals including protection, excretion, sensing, support, feeding, and acting as a barrier against desiccation in terrestrial organisms. Exoskeletons have roles in defence from pests and predators and in providing an attachment framework for musculature. Arthropod exoskeletons contain chitin; the addition of calcium carbonate makes them harder and stronger, at the price of increased weight. Ingrowths of the arthropod exoskeleton known as apodemes serve as attachment sites for muscles. These structures are composed of chitin and are approximately six times stronger and twice the stiffness of vertebrate tendons. Similar to tendons, apodemes can stretch to store elastic energy for jumping, notably in locusts. Calcium carbonates constitute the shells of molluscs, brachiopods, and some tube-building polychaete worms. Silica forms the exoskeleton in the microscopic diatoms and radiolaria. One mollusc species, the scaly-foot gastropod, even uses the iron sulfides greigite and pyrite. 
Some organisms, such as some foraminifera, agglutinate exoskeletons by sticking grains of sand and shell to their exterior. Contrary to a common misconception, echinoder Document 2::: Arthropodology (from Greek - arthron, "joint", and , gen.: - pous, podos, "foot", which together mean "jointed feet") is a biological discipline concerned with the study of arthropods, a phylum of animals that include the insects, arachnids, crustaceans and others that are characterized by the possession of jointed limbs. This field is very important in medicine, studied together with parasitology. Medical arthropodology is the study of the parasitic effect of arthropods, not only as parasites but also as vectors. The first annual Conference on Medical Arthropodology was held in Madurai (Tamil Nadu) in 2007. Subfields Subfields of arthropodology are Arachnology - the study of spiders and other arachnids Entomology - the study of insects (until the 19th century this term was used for the study of all arthropods) Carcinology - the study of crustaceans Myriapodology - the study of centipedes, millipedes, and other myriapods Journals Journal of Arthropodology Document 3::: Arthropods are covered with a tough, resilient integument or exoskeleton of chitin. Generally the exoskeleton will have thickened areas in which the chitin is reinforced or stiffened by materials such as minerals or hardened proteins. This happens in parts of the body where there is a need for rigidity or elasticity. Typically the mineral crystals, mainly calcium carbonate, are deposited among the chitin and protein molecules in a process called biomineralization. The crystals and fibres interpenetrate and reinforce each other, the minerals supplying the hardness and resistance to compression, while the chitin supplies the tensile strength. Biomineralization occurs mainly in crustaceans. 
In insects and arachnids, the main reinforcing materials are various proteins hardened by linking the fibres in processes called sclerotisation and the hardened proteins are called sclerotin. The dorsal tergum, ventral sternum, and the lateral pleura form the hardened plates or sclerites of a typical body segment. In either case, in contrast to the carapace of a tortoise or the cranium of a vertebrate, the exoskeleton has little ability to grow or change its form once it has matured. Except in special cases, whenever the animal needs to grow, it moults, shedding the old skin after growing a new skin from beneath. Microscopic structure A typical arthropod exoskeleton is a multi-layered structure with four functional regions: epicuticle, procuticle, epidermis and basement membrane. Of these, the epicuticle is a multi-layered external barrier that, especially in terrestrial arthropods, acts as a barrier against desiccation. The strength of the exoskeleton is provided by the underlying procuticle, which is in turn secreted by the epidermis. Arthropod cuticle is a biological composite material, consisting of two main portions: fibrous chains of alpha-chitin within a matrix of silk-like and globular proteins, of which the best-known is the rubbery protein called resilin. The rel Document 4::: This glossary describes the terms used in formal descriptions of spiders; where applicable these terms are used in describing other arachnids. Links within the glossary are shown . 
Terms A Abdomen or opisthosoma: One of the two main body parts (tagmata), located towards the posterior end; see also Abdomen § Other animals Accessory claw: Modified at the tip of the in web-building spiders; used with to grip strands of the web Anal tubercle: A small protuberance (tubercule) above the through which the anus opens Apodeme: see Apophysis (plural apophyses): An outgrowth or process changing the general shape of a body part, particularly the appendages; often used in describing the male : see Atrium (plural atria): An internal chamber at the entrance to the in female haplogyne spiders B Bidentate: Having two Book lungs: Respiratory organs on the ventral side (underside) of the , in front of the , opening through narrow slits; see also Book lungs Branchial operculum: see Bulbus: see C Calamistrum (plural calamistra): Modified setae (bristles) on the of the fourth leg of spiders with a , arranged in one or more rows or in an oval shape, used to comb silk produced by the cribellum; see also Calamistrum Caput (plural capita): see Carapace: A hardened plate (sclerite) covering the upper (dorsal) portion of the ; see also Carapace Carpoblem: The principal on the male ; also just called the tibial apophysis Cephalic region or caput: The front part of the , separated from the thoracic region by the Cephalothorax or prosoma: One of the two main body parts (tagmata), located towards the anterior end, composed of the head ( or caput) and the thorax (thoracic region), the two regions being separated by the ; covered by the and bearing the , legs, and mouthparts Cervical groove: A shallow U-shaped groove, separating the and thoracic regions of the Chelate: A description of a where the closes against a tooth-like process Chelic The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are ingrowths on arthropod exoskeletons to which muscles attach? A. rods B. pores C. joints D. apodemes Answer:
sciq-10441
multiple_choice
What is the process called when secretory cells export products?
[ "endocytosis", "exocytosis", "morphogenesis", "isolation" ]
B
Relevant Documents: Document 0::: Trans-endocytosis is the biological process where material created in one cell undergoes endocytosis (enters) into another cell. If the material is large enough, this can be observed using an electron microscope. Trans-endocytosis from neurons to glia has been observed using time-lapse microscopy. Trans-endocytosis also applies to molecules. For example, this process is involved when a part of the protein Notch is cleaved off and undergoes endocytosis into its neighboring cell. Without Notch trans-endocytosis, there would be too many neurons in a developing embryo. Trans-endocytosis is also involved in cell movement when the protein ephrin is bound by its receptor from a neighboring cell. Document 1::: Organelle biogenesis is the biogenesis, or creation, of cellular organelles in cells. Organelle biogenesis includes the process by which cellular organelles are split between daughter cells during mitosis; this process is called organelle inheritance. Discovery Following the discovery of cellular organelles in the nineteenth century, little was known about their function and synthesis until the development of electron microscopy and subcellular fractionation in the twentieth century. This allowed experiments on the function, structure, and biogenesis of these organelles to commence. Mechanisms of protein sorting and retrieval have been found to give organelles their characteristic composition. It is known that cellular organelles can come from preexisting organelles; however, it is a subject of controversy whether organelles can be created without a preexisting one. Process Several processes are known to have developed for organelle biogenesis. These can range from de novo synthesis to the copying of a template organelle; the formation of an organelle 'from scratch' and using a preexisting organelle as a template to manufacture an organelle, respectively.
The distinct structures of each organelle are thought to be caused by the different mechanisms of the processes which create them and the proteins that they are made up of. Organelles may also be 'split' between two cells during the process of cellular division (known as organelle inheritance), where the organelle of the parent cell doubles in size and then splits with each half being delivered to their respective daughter cells. The process of organelle biogenesis is known to be regulated by specialized transcription networks that modulate the expression of the genes that code for specific organellar proteins. In order for organelle biogenesis to be carried out properly, the specific genes coding for the organellar proteins must be transcribed properly and the translation of the resulting mRNA must be succes Document 2::: Porosomes are cup-shaped supramolecular structures in the cell membranes of eukaryotic cells where secretory vesicles transiently dock in the process of vesicle fusion and secretion. The transient fusion of secretory vesicle membrane at a porosome, base via SNARE proteins, results in the formation of a fusion pore or continuity for the release of intravesicular contents from the cell. After secretion is complete, the fusion pore temporarily formed at the base of the porosome is sealed. Porosomes are few nanometers in size and contain many different types of protein, especially chloride and calcium channels, actin, and SNARE proteins that mediate the docking and fusion of the vesicles with the cell membrane. Once the vesicles have docked with the SNARE proteins, they swell, which increases their internal pressure. They then transiently fuse at the base of the porosome, and these pressurized contents are ejected from the cell. Examination of cells following secretion using electron microscopy, demonstrate increased presence of partially empty vesicles following secretion. 
This suggested that during the secretory process, only a portion of the vesicular contents are able to exit the cell. This could only be possible if the vesicle were to temporarily establish continuity with the cell plasma membrane, expel a portion of its contents, then detach, reseal, and withdraw into the cytosol (endocytose). In this way, the secretory vesicle could be reused for subsequent rounds of exo-endocytosis, until completely empty of its contents. Porosomes vary in size depending on the cell type. Porosome in the exocrine pancreas and in endocrine and neuroendocrine cells range from 100 nm to 180 nm in diameter while in neurons they range from 10 nm to 15 nm (about 1/10 the size of pancreatic porosomes). When a secretory vesicle containing v-SNARE docks at the porosome base containing t-SNARE, membrane continuity (ring complex) is formed between the two. The size of the t/v-SNARE complex Document 3::: This lecture, named in memory of Keith R. Porter, is presented to an eminent cell biologist each year at the ASCB Annual Meeting. The ASCB Program Committee and the ASCB President recommend the Porter Lecturer to the Porter Endowment each year. Lecturers Source: ASCB See also List of biology awards Document 4::: The stem cell secretome (also referred to as the stromal cell secretome) is a collective term for the paracrine soluble factors produced by stem cells and utilized for their inter-cell communication. In addition to inter-cell communication, the paracrine factors are also responsible for tissue development, homeostasis and (re-)generation. The stem cell secretome consists of extracellular vesicles, specifically exosomes, microvesicles, membrane particles, peptides and small proteins (cytokines). The paracrine activity of stem cells, i.e. the stem cell secretome, has been found to be the predominant mechanism by which stem cell-based therapies mediate their effects in degenerative, auto-immune and/or inflammatory diseases. 
Though stem cells are not the only cells that possess a secretome which influences their cellular environment, their secretome currently appears to be the most relevant for therapeutic use. Extracellular Vesicles Extracellular vesicles are small particles that are normally discharged from cells and are bounded by a lipid bilayer. Although cells can replicate, extracellular vesicles cannot. Packed within an extracellular vesicle are components of the stem cell secretome, including organelles, mRNA, miRNA, and proteins. Exosomes are discharged from the extracellular vesicles and are found in biological fluids, such as the cerebrospinal fluid, which can be used for treatment. Most importantly, exosomes can be found between the cells of eukaryotic organisms, in what is known as the tissue matrix. Research Stem cell therapies, here referred to as therapies employing non-hematopoietic, mesenchymal stem cells, have a wide range of potential therapeutic benefits for different diseases, most of which are currently investigated in clinical trials. Stem cell therapies can serve as regenerative medicine for patients who have been diagnosed with diseases that affect the mid part of the brain, strokes and heart disease, joint disease
sciq-8755
multiple_choice
What is the term for the early growth and development of a plant embryo inside a seed?
[ "germination", "secretion", "rumination", "fertilization" ]
A
Relevant Documents: Document 0::: Plant embryonic development, also called plant embryogenesis, is a process that occurs after the fertilization of an ovule to produce a fully developed plant embryo. This is a pertinent stage in the plant life cycle that is followed by dormancy and germination. The zygote produced after fertilization must undergo various cellular divisions and differentiations to become a mature embryo. An end-stage embryo has five major components including the shoot apical meristem, hypocotyl, root meristem, root cap, and cotyledons. Unlike the embryonic development in animals, and specifically in humans, plant embryonic development results in an immature form of the plant, lacking most structures like leaves, stems, and reproductive structures. However, both plants and animals, including humans, pass through a phylotypic stage that evolved independently and that causes a developmental constraint limiting morphological diversification. Morphogenic events Embryogenesis occurs naturally as a result of single or double fertilization of the ovule, giving rise to two distinct structures: the plant embryo and the endosperm, which go on to develop into a seed. The zygote goes through various cellular differentiations and divisions in order to produce a mature embryo. These morphogenic events form the basic cellular pattern for the development of the shoot-root body and the primary tissue layers; they also program the regions of meristematic tissue formation. The following morphogenic events are only particular to eudicots, and not monocots. Plant Following fertilization, the zygote and endosperm are present within the ovule, as seen in stage I of the illustration on this page. Then the zygote undergoes an asymmetric transverse cell division that gives rise to two cells - a small apical cell resting above a large basal cell. These two cells are very different, and give rise to different structures, establishing polarity in the embryo.
apical cell: The small apical cell is on the top and contains
For some seeds, their future germination response is affected by environmental conditions during seed formation; most ofte Document 2::: In plant science, the spermosphere is the zone in the soil surrounding a germinating seed. This is a small volume with radius perhaps 1 cm but varying with seed type, the variety of soil microorganisms, the level of soil moisture, and other factors. Within the spermosphere a range of complex interactions take place among the germinating seed, the soil, and the microbiome. Because germination is a brief process, the spermosphere is transient, but the impact of the microbial activity within the spermosphere can have strong and long-lasting effects on the developing plant. Seeds exude various molecules that influence their surrounding microbial communities, either inhibiting or stimulating their growth. The composition of the exudates varies according to the plant type and such properties of the soil as its pH and moisture content. With these biochemical effects, the spermosphere develops both downward—to form the rhizosphere (upon the emergence of the plant's radicle)—and upward to form the laimosphere, which is the soil surrounding the growing plant stem. Document 3::: Important structures in plant development are buds, shoots, roots, leaves, and flowers; plants produce these tissues and structures throughout their life from meristems located at the tips of organs, or between mature tissues. Thus, a living plant always has embryonic tissues. By contrast, an animal embryo will very early produce all of the body parts that it will ever have in its life. When the animal is born (or hatches from its egg), it has all its body parts and from that point will only grow larger and more mature. However, both plants and animals pass through a phylotypic stage that evolved independently and that causes a developmental constraint limiting morphological diversification. According to plant physiologist A. 
Carl Leopold, the properties of organization seen in a plant are emergent properties which are more than the sum of the individual parts. "The assembly of these tissues and functions into an integrated multicellular organism yields not only the characteristics of the separate parts and processes but also quite a new set of characteristics which would not have been predictable on the basis of examination of the separate parts." Growth A vascular plant begins from a single celled zygote, formed by fertilisation of an egg cell by a sperm cell. From that point, it begins to divide to form a plant embryo through the process of embryogenesis. As this happens, the resulting cells will organize so that one end becomes the first root while the other end forms the tip of the shoot. In seed plants, the embryo will develop one or more "seed leaves" (cotyledons). By the end of embryogenesis, the young plant will have all the parts necessary to begin in its life. Once the embryo germinates from its seed or parent plant, it begins to produce additional organs (leaves, stems, and roots) through the process of organogenesis. New roots grow from root meristems located at the tip of the root, and new stems and leaves grow from shoot meristems located at the Document 4::: A seedling is a young sporophyte developing out of a plant embryo from a seed. Seedling development starts with germination of the seed. A typical young seedling consists of three main parts: the radicle (embryonic root), the hypocotyl (embryonic shoot), and the cotyledons (seed leaves). The two classes of flowering plants (angiosperms) are distinguished by their numbers of seed leaves: monocotyledons (monocots) have one blade-shaped cotyledon, whereas dicotyledons (dicots) possess two round cotyledons. Gymnosperms are more varied. For example, pine seedlings have up to eight cotyledons. The seedlings of some flowering plants have no cotyledons at all. These are said to be acotyledons. 
The plumule is the part of a seed embryo that develops into the shoot bearing the first true leaves of a plant. In most seeds, for example the sunflower, the plumule is a small conical structure without any leaf structure. Growth of the plumule does not occur until the cotyledons have grown above ground. This is epigeal germination. However, in seeds such as the broad bean, a leaf structure is visible on the plumule in the seed. These seeds develop by the plumule growing up through the soil with the cotyledons remaining below the surface. This is known as hypogeal germination. Photomorphogenesis and etiolation Dicot seedlings grown in the light develop short hypocotyls and open cotyledons exposing the epicotyl. This is also referred to as photomorphogenesis. In contrast, seedlings grown in the dark develop long hypocotyls and their cotyledons remain closed around the epicotyl in an apical hook. This is referred to as skotomorphogenesis or etiolation. Etiolated seedlings are yellowish in color as chlorophyll synthesis and chloroplast development depend on light. They will open their cotyledons and turn green when treated with light. In a natural situation, seedling development starts with skotomorphogenesis while the seedling is growing through the soil and attempting to reach the The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the term for the early growth and development of a plant embryo inside a seed? A. germination B. secretion C. rumination D. fertilization Answer:
sciq-3045
multiple_choice
Where do most ecosystems get their energy from?
[ "heat", "water", "sun", "earth" ]
C
Relevant Documents: Document 0::: Earth system science (ESS) is the application of systems science to the Earth. In particular, it considers interactions and 'feedbacks', through material and energy fluxes, between the Earth's sub-systems' cycles, processes and "spheres"—atmosphere, hydrosphere, cryosphere, geosphere, pedosphere, lithosphere, biosphere, and even the magnetosphere—as well as the impact of human societies on these components. At its broadest scale, Earth system science brings together researchers across both the natural and social sciences, from fields including ecology, economics, geography, geology, glaciology, meteorology, oceanography, climatology, paleontology, sociology, and space science. Like the broader subject of systems science, Earth system science assumes a holistic view of the dynamic interaction between the Earth's spheres and their many constituent subsystems' fluxes and processes, the resulting spatial organization and time evolution of these systems, and their variability, stability and instability. Subsets of Earth System science include systems geology and systems ecology, and many aspects of Earth System science are fundamental to the subjects of physical geography and climate science. Definition The Science Education Resource Center, Carleton College, offers the following description: "Earth System science embraces chemistry, physics, biology, mathematics and applied sciences in transcending disciplinary boundaries to treat the Earth as an integrated system. It seeks a deeper understanding of the physical, chemical, biological and human interactions that determine the past, current and future states of the Earth. Earth System science provides a physical basis for understanding the world in which we live and upon which humankind seeks to achieve sustainability".
Earth System science has articulated four overarching, definitive and critically important features of the Earth System, which include: Variability: Many of the Earth System's natural 'modes' and variab Document 1::: Plant ecology is a subdiscipline of ecology that studies the distribution and abundance of plants, the effects of environmental factors upon the abundance of plants, and the interactions among plants and between plants and other organisms. Examples of these are the distribution of temperate deciduous forests in North America, the effects of drought or flooding upon plant survival, and competition among desert plants for water, or effects of herds of grazing animals upon the composition of grasslands. A global overview of the Earth's major vegetation types is provided by O.W. Archibold. He recognizes 11 major vegetation types: tropical forests, tropical savannas, arid regions (deserts), Mediterranean ecosystems, temperate forest ecosystems, temperate grasslands, coniferous forests, tundra (both polar and high mountain), terrestrial wetlands, freshwater ecosystems and coastal/marine systems. This breadth of topics shows the complexity of plant ecology, since it includes plants from floating single-celled algae up to large canopy forming trees. One feature that defines plants is photosynthesis. Photosynthesis is the process of a chemical reactions to create glucose and oxygen, which is vital for plant life. One of the most important aspects of plant ecology is the role plants have played in creating the oxygenated atmosphere of earth, an event that occurred some 2 billion years ago. It can be dated by the deposition of banded iron formations, distinctive sedimentary rocks with large amounts of iron oxide. At the same time, plants began removing carbon dioxide from the atmosphere, thereby initiating the process of controlling Earth's climate. 
A long term trend of the Earth has been toward increasing oxygen and decreasing carbon dioxide, and many other events in the Earth's history, like the first movement of life onto land, are likely tied to this sequence of events. One of the early classic books on plant ecology was written by J.E. Weaver and F.E. Clements. It Document 2::: Biobased economy, bioeconomy or biotechonomy is economic activity involving the use of biotechnology and biomass in the production of goods, services, or energy. The terms are widely used by regional development agencies, national and international organizations, and biotechnology companies. They are closely linked to the evolution of the biotechnology industry and the capacity to study, understand, and manipulate genetic material that has been possible due to scientific research and technological development. This includes the application of scientific and technological developments to agriculture, health, chemical, and energy industries. The terms bioeconomy (BE) and bio-based economy (BBE) are sometimes used interchangeably. However, it is worth to distinguish them: the biobased economy takes into consideration the production of non-food goods, whilst bioeconomy covers both bio-based economy and the production and use of food and feed. More than 60 countries and regions have bioeconomy or bioscience-related strategies, of which 20 have published dedicated bioeconomy strategies in Africa, Asia, Europe, Oceania, and the Americas. Definitions Bioeconomy has large variety of definitions. The bioeconomy comprises those parts of the economy that use renewable biological resources from land and sea – such as crops, forests, fish, animals and micro-organisms – to produce food, health, materials, products, textiles and energy. The definitions and usage does however vary between different areas of the world. 
An important aspect of the bioeconomy is understanding mechanisms and processes at the genetic, molecular, and genomic levels, and applying this understanding to creating or improving industrial processes, developing new products and services, and producing new energy. Bioeconomy aims to reduce our dependence on fossil natural resources, to prevent biodiversity loss and to create new economic growth and jobs that are in line with the principles of sustainable develo Document 3::: Energy quality is a measure of the ease with which a form of energy can be converted to useful work or to another form of energy: i.e. its content of thermodynamic free energy. A high quality form of energy has a high content of thermodynamic free energy, and therefore a high proportion of it can be converted to work; whereas with low quality forms of energy, only a small proportion can be converted to work, and the remainder is dissipated as heat. The concept of energy quality is also used in ecology, where it is used to track the flow of energy between different trophic levels in a food chain and in thermoeconomics, where it is used as a measure of economic output per unit of energy. Methods of evaluating energy quality often involve developing a ranking of energy qualities in hierarchical order. Examples: Industrialization, Biology The consideration of energy quality was a fundamental driver of industrialization from the 18th through 20th centuries. Consider for example the industrialization of New England in the 18th century. This refers to the construction of textile mills containing power looms for weaving cloth. The simplest, most economical and straightforward source of energy was provided by water wheels, extracting energy from a millpond behind a dam on a local creek. 
If another nearby landowner also decided to build a mill on the same creek, the construction of their dam would lower the overall hydraulic head to power the existing waterwheel, thus hurting power generation and efficiency. This eventually became an issue endemic to the entire region, reducing the overall profitability of older mills as newer ones were built. The search for higher quality energy was a major impetus throughout the 19th and 20th centuries. For example, burning coal to make steam to generate mechanical energy would not have been imaginable in the 18th century; by the end of the 19th century, the use of water wheels was long outmoded. Similarly, the quality of energy from elec Document 4::: Earth systems engineering and management (ESEM) is a discipline used to analyze, design, engineer and manage complex environmental systems. It entails a wide range of subject areas including anthropology, engineering, environmental science, ethics and philosophy. At its core, ESEM looks to "rationally design and manage coupled human–natural systems in a highly integrated and ethical fashion". ESEM is a newly emerging area of study that has taken root at the University of Virginia, Cornell and other universities throughout the United States, and at the Centre for Earth Systems Engineering Research (CESER) at Newcastle University in the United Kingdom. Founders of the discipline are Braden Allenby and Michael Gorman. Introduction to ESEM For centuries, humans have utilized the earth and its natural resources to advance civilization and develop technology. "As a principle result of Industrial Revolutions and associated changes in human demographics, technology systems, cultures, and economic systems have been the evolution of an Earth in which the dynamics of major natural systems are increasingly dominated by human activity". In many ways, ESEM views the earth as a human artifact. 
"In order to maintain continued stability of both natural and human systems, we need to develop the ability to rationally design and manage coupled human-natural systems in a highly integrated and ethical fashion- an Earth Systems Engineering and Management (ESEM) capability". ESEM has been developed by a few individuals. One of particular note is Braden Allenby. Allenby holds that the foundation upon which ESEM is built is the notion that "the Earth, as it now exists, is a product of human design". In fact there are no longer any natural systems left in the world, "there are no places left on Earth that don't fall under humanity's shadow". "So the question is not, as some might wish, whether we should begin ESEM, because we have been doing it for a long time, albeit unintentionally. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Where do most ecosystems get their energy from? A. heat B. water C. sun D. earth Answer:
sciq-5590
multiple_choice
An infection of the brain is called?
[ "syphilis", "tuberculosis", "encephalitis", "influenza" ]
C
Relevant Documents: Document 0::: Medical education is education related to the practice of being a medical practitioner, including the initial training to become a physician (i.e., medical school and internship) and additional training thereafter (e.g., residency, fellowship, and continuing medical education). Medical education and training varies considerably across the world. Various teaching methodologies have been used in medical education, which is an active area of educational research. Medical education is also the subject-didactic academic field of educating medical doctors at all levels, including entry-level, post-graduate, and continuing medical education. Specific requirements such as entrustable professional activities must be met before moving on in stages of medical education. Common techniques and evidence base Medical education applies theories of pedagogy specifically in the context of medical education. Medical education has been a leader in the field of evidence-based education, through the development of evidence syntheses such as the Best Evidence Medical Education collection, formed in 1999, which aimed to "move from opinion-based education to evidence-based education". Common evidence-based techniques include the Objective structured clinical examination (commonly known as the 'OSCE') to assess clinical skills, and reliable checklist-based assessments to determine the development of soft skills such as professionalism. However, there is a persistence of ineffective instructional methods in medical education, such as the matching of teaching to learning styles and Edgar Dale's "Cone of Learning". Entry-level education Entry-level medical education programs are tertiary-level courses undertaken at a medical school. Depending on jurisdiction and university, these may be either undergraduate-entry (most of Europe, Asia, South America and Oceania), or graduate-entry programs (mainly Australia, Philippines and North America). 
Some jurisdictions and universities provide both u Document 1::: Progress tests are longitudinal, feedback oriented educational assessment tools for the evaluation of development and sustainability of cognitive knowledge during a learning process. A progress test is a written knowledge exam (usually involving multiple choice questions) that is usually administered to all students in the "A" program at the same time and at regular intervals (usually twice to four times yearly) throughout the entire academic program. The test samples the complete knowledge domain expected of new graduates upon completion of their courses, regardless of the year level of the student). The differences between students’ knowledge levels show in the test scores; the further a student has progressed in the curriculum the higher the scores. As a result, these resultant scores provide a longitudinal, repeated measures, curriculum-independent assessment of the objectives (in knowledge) of the entire programme. History Since its inception in the late 1970s at both Maastricht University and the University of Missouri–Kansas City independently, the progress test of applied knowledge has been increasingly used in medical and health sciences programs across the globe. They are well established and increasingly used in medical education in both undergraduate and postgraduate medical education. They are used formatively and summatively. Use in academic programs The progress test is currently used by national progress test consortia in the United Kingdom, Italy, The Netherlands, in Germany (including Austria), and in individual schools in Africa, Saudi Arabia, South East Asia, the Caribbean, Australia, New Zealand, Sweden, Finland, UK, and the USA. The National Board of Medical Examiners in the USA also provides progress testing in various countries The feasibility of an international approach to progress testing has been recently acknowledged and was first demonstrated by Albano et al. 
in 1996, who compared test scores across German, Dutch and Italian medi Document 2::: TIME-ITEM is an ontology of Topics that describes the content of undergraduate medical education. TIME is an acronym for "Topics for Indexing Medical Education"; ITEM is an acronym for "Index de thèmes pour l’éducation médicale." Version 1.0 of the taxonomy has been released and the web application that allows users to work with it is still under development. Its developers are seeking more collaborators to expand and validate the taxonomy and to guide future development of the web application. History The development of TIME-ITEM began at the University of Ottawa in 2006. It was initially developed to act as a content index for a curriculum map being constructed there. After its initial presentation at the 2006 conference of the Canadian Association for Medical Education, early collaborators included the University of British Columbia, McMaster University and Queen's University. Features The TIME-ITEM ontology is unique in that it is designed specifically for undergraduate medical education. As such, it includes fewer strictly biomedical entries than other common medical vocabularies (such as MeSH or SNOMED CT) but more entries relating to the medico-social concepts of communication, collaboration, professionalism, etc. Topics within TIME-ITEM are arranged poly-hierarchically, meaning any Topic can have more than one parent. Relationships are established based on the logic that learning about a Topic contributes to the learning of all its parent Topics. In addition to housing the ontology of Topics, the TIME-ITEM web application can house multiple Outcome frameworks. All Outcomes, whether private Outcomes entered by single institutions or publicly available medical education Outcomes (such as CanMeds 2005) are hierarchically linked to one or more Topics in the ontology. In this way, the contribution of each Topic to multiple Outcomes is made explicit. 
The structure of the XML documents exported from TIME-ITEM (which contain the hierarchy of Outco Document 3::: Infectious diseases or ID, also known as infectiology, is a medical specialty dealing with the diagnosis and treatment of infections. An infectious diseases specialist's practice consists of managing nosocomial (healthcare-acquired) infections or community-acquired infections. An ID specialist investigates the cause of a disease to determine what kind of Bacteria, viruses, parasites, or fungi the disease is caused by. Once the pathogen is known, an ID specialist can then run various tests to determine the best antimicrobial drug to kill the pathogen and treat the disease. While infectious diseases have always been around, the infectious disease specialty did not exist until the late 1900s after scientists and physicians in the 19th century paved the way with research on the sources of infectious disease and the development of vaccines. Scope Infectious diseases specialists typically serve as consultants to other physicians in cases of complex infections, and often manage patients with HIV/AIDS and other forms of immunodeficiency. Although many common infections are treated by physicians without formal expertise in infectious diseases, specialists may be consulted for cases where an infection is difficult to diagnose or manage. They may also be asked to help determine the cause of a fever of unknown origin. Specialists in infectious diseases can practice both in hospitals (inpatient) and clinics (outpatient). In hospitals, specialists in infectious diseases help ensure the timely diagnosis and treatment of acute infections by recommending the appropriate diagnostic tests to identify the source of the infection and by recommending appropriate management such as prescribing antibiotics to treat bacterial infections. For certain types of infections, involvement of specialists in infectious diseases may improve patient outcomes. 
In clinics, specialists in infectious diseases can provide long-term care to patients with chronic infections such as HIV/AIDS. History Inf Document 4::: Medical Science Educator is a peer-reviewed journal that focuses on teaching the sciences that are fundamental to modern medicine and health. Coverage includes basic science education, clinical teaching and the incorporation of modern educational technologies. MSE offers all who teach in healthcare the most current information to succeed in their task by publishing scholarly activities, opinions, and resources in medical science education. MSE provides the readership a better understanding of teaching and learning techniques in order to advance medical science education. It is the official publication of the International Association of Medical Science Educators (IAMSE). The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. An infection of the brain is called? A. syphilis B. tuberculosis C. encephalitis D. influenza Answer:
sciq-10041
multiple_choice
What are the tiny sacs in the lungs where gas exchange takes place?
[ "alveoli", "chambers", "vacuoles", "ganglion" ]
A
Relevant Documents: Document 0::: Lung receptors sense irritation or inflammation in the bronchi and alveoli. Document 1::: The control of ventilation refers to the physiological mechanisms involved in the control of breathing, which is the movement of air into and out of the lungs. Ventilation facilitates respiration. Respiration refers to the utilization of oxygen and balancing of carbon dioxide by the body as a whole, or by individual cells in cellular respiration. The most important function of breathing is the supplying of oxygen to the body and balancing of the carbon dioxide levels. Under most conditions, the partial pressure of carbon dioxide (PCO2), or concentration of carbon dioxide, controls the respiratory rate. The peripheral chemoreceptors that detect changes in the levels of oxygen and carbon dioxide are located in the arterial aortic bodies and the carotid bodies. Central chemoreceptors are primarily sensitive to changes in the pH of the blood (resulting from changes in the levels of carbon dioxide), and they are located on the medulla oblongata near to the medullar respiratory groups of the respiratory center. Information from the peripheral chemoreceptors is conveyed along nerves to the respiratory groups of the respiratory center. There are four respiratory groups, two in the medulla and two in the pons. The two groups in the pons are known as the pontine respiratory group. Dorsal respiratory group – in the medulla Ventral respiratory group – in the medulla Pneumotaxic center – various nuclei of the pons Apneustic center – nucleus of the pons From the respiratory center, the muscles of respiration, in particular the diaphragm, are activated to cause air to move in and out of the lungs. Control of respiratory rhythm Ventilatory pattern Breathing is normally an unconscious, involuntary, automatic process. The pattern of motor stimuli during breathing can be divided into an inhalation stage and an exhalation stage. 
Inhalation shows a sudden, ramped increase in motor discharge to the respiratory muscles (and the pharyngeal constrictor muscles). Before the end of inh Document 2::: Speech science refers to the study of production, transmission and perception of speech. Speech science involves anatomy, in particular the anatomy of the oro-facial region and neuroanatomy, physiology, and acoustics. Speech production The production of speech is a highly complex motor task that involves approximately 100 orofacial, laryngeal, pharyngeal, and respiratory muscles. Precise and expeditious timing of these muscles is essential for the production of temporally complex speech sounds, which are characterized by transitions as short as 10 ms between frequency bands and an average speaking rate of approximately 15 sounds per second. Speech production requires airflow from the lungs (respiration) to be phonated through the vocal folds of the larynx (phonation) and resonated in the vocal cavities shaped by the jaw, soft palate, lips, tongue and other articulators (articulation). Respiration Respiration is the physical process of gas exchange between an organism and its environment involving four steps (ventilation, distribution, perfusion and diffusion) and two processes (inspiration and expiration). Respiration can be described as the mechanical process of air flowing into and out of the lungs on the principle of Boyle's law, stating that, as the volume of a container increases, the air pressure will decrease. This relatively negative pressure will cause air to enter the container until the pressure is equalized. During inspiration of air, the diaphragm contracts and the lungs expand drawn by pleurae through surface tension and negative pressure. When the lungs expand, air pressure becomes negative compared to atmospheric pressure and air will flow from the area of higher pressure to fill the lungs. 
Forced inspiration for speech uses accessory muscles to elevate the rib cage and enlarge the thoracic cavity in the vertical and lateral dimensions. During forced expiration for speech, muscles of the trunk and abdomen reduce the size of the thoracic cavity by Document 3::: Mucociliary clearance (MCC), mucociliary transport, or the mucociliary escalator, describes the self-clearing mechanism of the airways in the respiratory system. It is one of the two protective processes for the lungs in removing inhaled particles including pathogens before they can reach the delicate tissue of the lungs. The other clearance mechanism is provided by the cough reflex. Mucociliary clearance has a major role in pulmonary hygiene. MCC effectiveness relies on the correct properties of the airway surface liquid produced, both of the periciliary sol layer and the overlying mucus gel layer, and of the number and quality of the cilia present in the lining of the airways. An important factor is the rate of mucin secretion. The ion channels CFTR and ENaC work together to maintain the necessary hydration of the airway surface liquid. Any disturbance in the closely regulated functioning of the cilia can cause a disease. Disturbances in the structural formation of the cilia can cause a number of ciliopathies, notably primary ciliary dyskinesia. Cigarette smoke exposure can cause shortening of the cilia. Function In the upper part of the respiratory tract the nasal hair in the nostrils traps large particles, and the sneeze reflex may also be triggered to expel them. The nasal mucosa also traps particles preventing their entry further into the tract. In the rest of the respiratory tract, particles of different sizes become deposited along different parts of the airways. Larger particles are trapped higher up in the larger bronchi. As the airways become narrower only smaller particles can pass. 
The branchings of the airways cause turbulence in the airflow at all of their junctions where particles can then be deposited and they never reach the alveoli. Only very small pathogens are able to gain entry to the alveoli. Mucociliary clearance functions to remove these particulates and also to trap and remove pathogens from the airways, in order to protect the delicate Document 4::: In anatomy, a potential space is a space between two adjacent structures that are normally pressed together (directly apposed). Many anatomic spaces are potential spaces, which means that they are potential rather than realized (with their realization being dynamic according to physiologic or pathophysiologic events). In other words, they are like an empty plastic bag that has not been opened (two walls collapsed against each other; no interior volume until opened) or a balloon that has not been inflated. The pleural space, between the visceral and parietal pleura of the lung, is a potential space. Though it only contains a small amount of fluid normally, it can sometimes accumulate fluid or air that widens the space. The pericardial space is another potential space that may fill with fluid (effusion) in certain disease states (e.g. pericarditis; a large pericardial effusion may result in cardiac tamponade). Examples costodiaphragmatic recess pericardial cavity epidural space (within the skull) subdural space peritoneal cavity buccal space See also Fascial spaces of the head and neck The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are the tiny sacs in the lungs where gas exchange takes place? A. alveoli B. chambers C. vacuoles D. ganglion Answer:
scienceQA-6124
multiple_choice
What do these two changes have in common? water boiling on a stove tearing a piece of paper
[ "Both are caused by cooling.", "Both are only physical changes.", "Both are chemical changes.", "Both are caused by heating." ]
B
Step 1: Think about each change. Water boiling on the stove is a change of state. So, it is a physical change. The liquid changes into a gas, but a different type of matter is not formed. Tearing a piece of paper is a physical change. The paper tears into pieces. But each piece is still made of paper. Step 2: Look at each answer choice. Both are only physical changes. Both changes are physical changes. No new matter is created. Both are chemical changes. Both changes are physical changes. They are not chemical changes. Both are caused by heating. Water boiling is caused by heating. But tearing a piece of paper is not. Both are caused by cooling. Neither change is caused by cooling.
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. 
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds. Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate. A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density. An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge. Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change. Examples Heating and cooling Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. 
Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation. Magnetism Ferro-magnetic materials can become magnetic. The process is reve Document 2::: Thermofluids is a branch of science and engineering encompassing four intersecting fields: Heat transfer Thermodynamics Fluid mechanics Combustion The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids". Heat transfer Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer. Sections include : Energy transfer by heat, work and mass Laws of thermodynamics Entropy Refrigeration Techniques Properties and nature of pure substances Applications Engineering : Predicting and analysing the performance of machines Thermodynamics Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems. Fluid mechanics Fluid Mechanics the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. 
Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance. Sections include: Flu Document 3::: Heat transfer is a discipline of thermal engineering that concerns the generation, use, conversion, and exchange of thermal energy (heat) between physical systems. Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes. Engineers also consider the transfer of mass of differing chemical species (mass transfer in the form of advection), either cold or hot, to achieve heat transfer. While these mechanisms have distinct characteristics, they often occur simultaneously in the same system. Heat conduction, also called diffusion, is the direct microscopic exchanges of kinetic energy of particles (such as molecules) or quasiparticles (such as lattice waves) through the boundary between two systems. When an object is at a different temperature from another body or its surroundings, heat flows so that the body and the surroundings reach the same temperature, at which point they are in thermal equilibrium. Such spontaneous heat transfer always occurs from a region of high temperature to another region of lower temperature, as described in the second law of thermodynamics. Heat convection occurs when the bulk flow of a fluid (gas or liquid) carries its heat through the fluid. All convective processes also move heat partly by diffusion, as well. The flow of fluid may be forced by external processes, or sometimes (in gravitational fields) by buoyancy forces caused when thermal energy expands the fluid (for example in a fire plume), thus influencing its own transfer. The latter process is often called "natural convection". 
The former process is often called "forced convection." In this case, the fluid is forced to flow by use of a pump, fan, or other mechanical means. Thermal radiation occurs through a vacuum or any transparent medium (solid or fluid or gas). It is the transfer of energy by means of photons or electromagnetic waves governed by the same laws. Overview Heat Document 4::: A characteristic property is a chemical or physical property that helps identify and classify substances. The characteristic properties of a substance are always the same whether the sample being observed is large or small. Thus, conversely, if the property of a substance changes as the sample size changes, that property is not a characteristic property. Examples of physical properties that are not characteristic properties are mass and volume. Examples of characteristic properties include melting points, boiling points, density, viscosity, solubility, crystal shape, and color. Substances with characteristic properties can be separated. For example, in fractional distillation, liquids are separated using the boiling point. The water Boiling point is 212 degrees Fahrenheit. Identifying a substance Every characteristic property is unique to one given substance. Scientists use characteristic properties to identify unknown substances. However, characteristic properties are most useful for distinguishing between two or more substances, not identifying a single substance. For example, isopropanol and water can be distinguished by the characteristic property of odor. Characteristic properties are used because the sample size and the shape of the substance does not matter. For example, 1 gram of lead is the same color as 100 tons of lead. See also Intensive and extensive properties The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do these two changes have in common? water boiling on a stove tearing a piece of paper A. 
Both are caused by cooling. B. Both are only physical changes. C. Both are chemical changes. D. Both are caused by heating. Answer:
sciq-10346
multiple_choice
The type of what dictates how far it can penetrate into matter, such as lead or human flesh?
[ "radiation", "specific gravity", "insulation", "evaporation" ]
A
Relevant Documents: Document 0::: Penetration depth is a measure of how deep light or any electromagnetic radiation can penetrate into a material. It is defined as the depth at which the intensity of the radiation inside the material falls to 1/e (about 37%) of its original value at (or more properly, just beneath) the surface. When electromagnetic radiation is incident on the surface of a material, it may be (partly) reflected from that surface and there will be a field containing energy transmitted into the material. This electromagnetic field interacts with the atoms and electrons inside the material. Depending on the nature of the material, the electromagnetic field might travel very far into the material, or may die out very quickly. For a given material, penetration depth will generally be a function of wavelength. Beer–Lambert law According to the Beer–Lambert law, the intensity of an electromagnetic wave inside a material falls off exponentially from the surface as $I(z) = I_0 e^{-\alpha z}$, where $\alpha$ is the attenuation coefficient. If $\delta_p$ denotes the penetration depth, we have $\delta_p = 1/\alpha$. Penetration depth is one term that describes the decay of electromagnetic waves inside of a material. The above definition refers to the depth at which the intensity or power of the field decays to 1/e of its surface value. In many contexts one is concentrating on the field quantities themselves: the electric and magnetic fields in the case of electromagnetic waves. Since the power of a wave in a particular medium is proportional to the square of a field quantity, one may speak of a penetration depth at which the magnitude of the electric (or magnetic) field has decayed to 1/e of its surface value, and at which point the power of the wave has thereby decreased to $1/e^2$, or about 13%, of its surface value. Note that this is identical to the skin depth, the latter term usually applying to metals in reference to the decay of electrical currents (which follow the decay in the electric or magnetic field due to a plane wave incident on a bulk conductor). 
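The Beer–Lambert decay described in the excerpt above can be sketched numerically. The following Python fragment is a minimal illustration, not part of the source article; the attenuation coefficient value is a hypothetical example.

```python
import math

# Beer-Lambert decay: intensity falls off exponentially with depth z,
# I(z) = I0 * exp(-alpha * z), and the penetration depth delta = 1/alpha
# is the depth where I has dropped to I0/e (about 37% of the surface value).

def intensity(z, i0, alpha):
    """Intensity at depth z, given surface intensity i0 and attenuation alpha."""
    return i0 * math.exp(-alpha * z)

def penetration_depth(alpha):
    """Depth at which the intensity has fallen to 1/e of its surface value."""
    return 1.0 / alpha

alpha = 2.0                          # attenuation coefficient, 1/m (illustrative)
delta = penetration_depth(alpha)     # 0.5 m for this alpha
frac = intensity(delta, 1.0, alpha)  # fraction remaining at depth delta, i.e. 1/e
```

Evaluating the intensity at exactly one penetration depth recovers the 1/e (about 37%) figure quoted in the text, which is a quick sanity check on the two formulas.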
The attenuation constant is also identical Document 1::: Health physics, also referred to as the science of radiation protection, is the profession devoted to protecting people and their environment from potential radiation hazards, while making it possible to enjoy the beneficial uses of radiation. Health physicists normally require a four-year bachelor’s degree and qualifying experience that demonstrates a professional knowledge of the theory and application of radiation protection principles and closely related sciences. Health physicists principally work at facilities where radionuclides or other sources of ionizing radiation (such as X-ray generators) are used or produced; these include research, industry, education, medical facilities, nuclear power, military, environmental protection, enforcement of government regulations, and decontamination and decommissioning—the combination of education and experience for health physicists depends on the specific field in which the health physicist is engaged. Sub-specialties There are many sub-specialties in the field of health physics, including Ionising radiation instrumentation and measurement Internal dosimetry and external dosimetry Radioactive waste management Radioactive contamination, decontamination and decommissioning Radiological engineering (shielding, holdup, etc.) 
Environmental assessment, radiation monitoring and radon evaluation Operational radiation protection/health physics Particle accelerator physics Radiological emergency response/planning - (e.g., Nuclear Emergency Support Team) Industrial uses of radioactive material Medical health physics Public information and communication involving radioactive materials Biological effects/radiation biology Radiation standards Radiation risk analysis Nuclear power Radioactive materials and homeland security Radiation protection Nanotechnology Operational health physics The subfield of operational health physics, also called applied health physics in older sources, focuses on field work and the p Document 2::: Radiation sensitivity is the susceptibility of a material to physical or chemical changes induced by radiation. Examples of radiation sensitive materials are silver chloride, photoresists and biomaterials. Pine trees are more radiation susceptible than birch due to the complexity of the pine DNA in comparison to the birch. Examples of radiation insensitive materials are metals and ionic crystals such as quartz and sapphire. The radiation effect depends on the type of the irradiating particles, their energy, and the number of incident particles per unit volume. Radiation effects can be transient or permanent. The persistence of the radiation effect depends on the stability of the induced physical and chemical change. Physical radiation effects depending on diffusion properties can be thermally annealed whereby the original structure of the material is recovered. Chemical radiation effects usually cannot be recovered. 
Document 3::: absorbed dose Electromagnetic radiation equivalent dose hormesis Ionizing radiation Louis Harold Gray (British physicist) rad (unit) radar radar astronomy radar cross section radar detector radar gun radar jamming (radar reflector) corner reflector radar warning receiver (Radarange) microwave oven radiance (radiant: see) meteor shower radiation Radiation absorption Radiation acne Radiation angle radiant barrier (radiation belt: see) Van Allen radiation belt Radiation belt electron Radiation belt model Radiation Belt Storm Probes radiation budget Radiation burn Radiation cancer (radiation contamination) radioactive contamination Radiation contingency Radiation damage Radiation damping Radiation-dominated era Radiation dose reconstruction Radiation dosimeter Radiation effect radiant energy Radiation enteropathy (radiation exposure) radioactive contamination Radiation flux (radiation gauge: see) gauge fixing radiation hardening (radiant heat) thermal radiation radiant heating radiant intensity radiation hormesis radiation impedance radiation implosion Radiation-induced lung injury Radiation Laboratory radiation length radiation mode radiation oncologist radiation pattern radiation poisoning (radiation sickness) radiation pressure radiation protection (radiation shield) (radiation shielding) radiation resistance Radiation Safety Officer radiation scattering radiation therapist radiation therapy (radiotherapy) (radiation treatment) radiation therapy (radiation units: see) :Category:Units of radiation dose (radiation weight factor: see) equivalent dose radiation zone radiative cooling radiative forcing radiator radio (radio amateur: see) amateur radio (radio antenna) antenna (radio) radio astronomy radio beacon (radio broadcasting: see) broadcasting radio clock (radio communications) radio radio control radio controlled airplane radio controlled car radio-controlled helicopter radio control Document 4::: Permeance, in general, is the degree to which a 
material admits a flow of matter or energy. Permeance is usually represented by a curly capital P: . Electromagnetism In electromagnetism, permeance is the inverse of reluctance. In a magnetic circuit, permeance is a measure of the quantity of magnetic flux for a number of current-turns. A magnetic circuit almost acts as though the flux is conducted, therefore permeance is larger for large cross-sections of a material and smaller for smaller cross section lengths. This concept is analogous to electrical conductance in the electric circuit. Magnetic permeance is defined as the reciprocal of magnetic reluctance (in analogy with the reciprocity between electric conductance and resistance): which can also be re-written: using Hopkinson's law (magnetic circuit analogue of Ohm's law for electric circuits) and the definition of magnetomotive force (magnetic analogue of electromotive force): where: , magnetic flux, , current, in amperes, , winding number of, or count of turns in the electric coil. Alternatively in terms of magnetic permeability (analogous to electric conductivity): where: , permeability of material, , cross-sectional area, , magnetic path length. The SI unit of magnetic permeance is the henry (H), that is webers per ampere-turn. Materials science In materials science, permeance is the degree to which a material transmits another substance. See also Dielectric complex reluctance Reluctance External articles and references Electromagnetism Properties of Magnetic Materials (units of magnetic permeance) Material science Bombaru, D., Jutras, R., and Patenaude, A., "Air Permeance of Building Materials". Summary report prepared by, AIR-INS Inc. for Canada Mortgage and Housing Corporation, Ottawa, 1988. Electric and magnetic fields in matter The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The type of what dictates how far it can penetrate into matter, such as lead or human flesh? A. 
radiation B. specific gravity C. insulation D. evaporation Answer:
sciq-353
multiple_choice
The attraction between all objects in the universe is known as ______.
[ "magnetism", "gravity", "variation", "electricity" ]
B
Relevant Documents: Document 0::: This is a list of topics that are included in high school physics curricula or textbooks. Mathematical Background SI Units Scalar (physics) Euclidean vector Motion graphs and derivatives Pythagorean theorem Trigonometry Motion and forces Motion Force Linear motion Linear motion Displacement Speed Velocity Acceleration Center of mass Mass Momentum Newton's laws of motion Work (physics) Free body diagram Rotational motion Angular momentum (Introduction) Angular velocity Centrifugal force Centripetal force Circular motion Tangential velocity Torque Conservation of energy and momentum Energy Conservation of energy Elastic collision Inelastic collision Inertia Moment of inertia Momentum Kinetic energy Potential energy Rotational energy Electricity and magnetism Ampère's circuital law Capacitor Coulomb's law Diode Direct current Electric charge Electric current Alternating current Electric field Electric potential energy Electron Faraday's law of induction Ion Inductor Joule heating Lenz's law Magnetic field Ohm's law Resistor Transistor Transformer Voltage Heat Entropy First law of thermodynamics Heat Heat transfer Second law of thermodynamics Temperature Thermal energy Thermodynamic cycle Volume (thermodynamics) Work (thermodynamics) Waves Wave Longitudinal wave Transverse waves Transverse wave Standing Waves Wavelength Frequency Light Light ray Speed of light Sound Speed of sound Radio waves Harmonic oscillator Hooke's law Reflection Refraction Snell's law Refractive index Total internal reflection Diffraction Interference (wave propagation) Polarization (waves) Vibrating string Doppler effect Gravity Gravitational potential Newton's law of universal gravitation Newtonian constant of gravitation See also Outline of physics Physics education Document 1::: Applied physics is the application of physics to solve scientific or engineering problems. It is usually considered a bridge or a connection between physics and engineering. 
"Applied" is distinguished from "pure" by a subtle combination of factors, such as the motivation and attitude of researchers and the nature of the relationship to the technology or science that may be affected by the work. Applied physics is rooted in the fundamental truths and basic concepts of the physical sciences but is concerned with the utilization of scientific principles in practical devices and systems and with the application of physics in other areas of science and high technology. Examples of research and development areas Accelerator physics Acoustics Atmospheric physics Biophysics Brain–computer interfacing Chemistry Chemical physics Differentiable programming Artificial intelligence Scientific computing Engineering physics Chemical engineering Electrical engineering Electronics Sensors Transistors Materials science and engineering Metamaterials Nanotechnology Semiconductors Thin films Mechanical engineering Aerospace engineering Astrodynamics Electromagnetic propulsion Fluid mechanics Military engineering Lidar Radar Sonar Stealth technology Nuclear engineering Fission reactors Fusion reactors Optical engineering Photonics Cavity optomechanics Lasers Photonic crystals Geophysics Materials physics Medical physics Health physics Radiation dosimetry Medical imaging Magnetic resonance imaging Radiation therapy Microscopy Scanning probe microscopy Atomic force microscopy Scanning tunneling microscopy Scanning electron microscopy Transmission electron microscopy Nuclear physics Fission Fusion Optical physics Nonlinear optics Quantum optics Plasma physics Quantum technology Quantum computing Quantum cryptography Renewable energy Space physics Spectroscopy See also Applied science Applied mathematics Engineering Engineering Physics High Technology Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, 
rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 3::: Physics education or physics teaching refers to the education methods currently used to teach physics. The occupation is called physics educator or physics teacher. 
Physics education research refers to an area of pedagogical research that seeks to improve those methods. Historically, physics has been taught at the high school and college level primarily by the lecture method together with laboratory exercises aimed at verifying concepts taught in the lectures. These concepts are better understood when lectures are accompanied with demonstration, hand-on experiments, and questions that require students to ponder what will happen in an experiment and why. Students who participate in active learning for example with hands-on experiments learn through self-discovery. By trial and error they learn to change their preconceptions about phenomena in physics and discover the underlying concepts. Physics education is part of the broader area of science education. Ancient Greece Aristotle wrote what is considered now as the first textbook of physics. Aristotle's ideas were taught unchanged until the Late Middle Ages, when scientists started making discoveries that didn't fit them. For example, Copernicus' discovery contradicted Aristotle's idea of an Earth-centric universe. Aristotle's ideas about motion weren't displaced until the end of the 17th century, when Newton published his ideas. Today's physics students often think of physics concepts in Aristotelian terms, despite being taught only Newtonian concepts. Hong Kong High schools In Hong Kong, physics is a subject for public examination. Local students in Form 6 take the public exam of Hong Kong Diploma of Secondary Education (HKDSE). Compare to the other syllabus include GCSE, GCE etc. which learn wider and boarder of different topics, the Hong Kong syllabus is learning more deeply and more challenges with calculations. 
Topics are narrow down to a smaller amount compared to the A-level due to the insufficient teachi Document 4::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. 
Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The attraction between all objects in the universe is known as ______. A. magnetism B. gravity C. variation D. electricity Answer:
sciq-445
multiple_choice
Where do the eggs develop?
[ "the ovaries", "the follicles", "the glands", "the uterus" ]
A
Relevant Documents: Document 0::: The germ cell nest (germ-line cyst) forms in the ovaries during their development. The nest consists of multiple interconnected oogonia formed by incomplete cell division. The interconnected oogonia are surrounded by somatic cells called granulosa cells. Later on in development, the germ cell nests break down through invasion of granulosa cells. The result is individual oogonia surrounded by a single layer of granulosa cells. There is also a comparative germ cell nest structure in the developing spermatogonia, with interconnected intracellular cytoplasmic bridges. Formation of germ cell nests Prior to meiosis, primordial germ cells (PGCs) migrate to the gonads and mitotically divide along the genital ridge in clusters or nests of cells referred to as germline cysts or germ cell nests. The understanding of germ cell nest formation is limited. However, invertebrate models, especially Drosophila, have provided insight into the mechanisms surrounding formation. In females, it is suggested that cysts form from dividing progenitor cells. During this cyst formation, 4 rounds of division with incomplete cytokinesis occur, resulting in cystocytes that are joined by intercellular bridges, also known as ring canals. The intercellular bridges are crucial in maintaining effective communication. They ensure meiosis begins immediately after the mitotic cyst formation cycle is complete. Rodent PGCs migrate to the gonads and mitotically divide at embryonic day (E) 10.5. It is at this stage they switch from complete to incomplete cytokinesis during the mitotic cycle from E10.5-E14.5. Germ cell nests emerge following consecutive divisions of progenitor cells resulting from cleavage furrows arresting and forming intercellular bridges. In females, mitosis will end at E14.5 and meiosis will commence. However, it is possible that germ cells may travel to the gonads and cluster together forming nests after their arrival or form through cellular aggregation. 
Function Most of our understan Document 1::: Oogenesis, ovogenesis, or oögenesis is the differentiation of the ovum (egg cell) into a cell competent to further develop when fertilized. It is developed from the primary oocyte by maturation. Oogenesis is initiated in the embryonic stage. Oogenesis in non-human mammals In mammals, the first part of oogenesis starts in the germinal epithelium, which gives rise to the development of ovarian follicles, the functional unit of the ovary. Oogenesis consists of several sub-processes: oocytogenesis, ootidogenesis, and finally maturation to form an ovum (oogenesis proper). Folliculogenesis is a separate sub-process that accompanies and supports all three oogenetic sub-processes. Oogonium —(Oocytogenesis)—> Primary Oocyte —(Meiosis I)—> First Polar body (Discarded afterward) + Secondary oocyte —(Meiosis II)—> Second Polar Body (Discarded afterward) + Ovum Oocyte meiosis, important to all animal life cycles yet unlike all other instances of animal cell division, occurs completely without the aid of spindle-coordinating centrosomes. The creation of oogonia The creation of oogonia traditionally doesn't belong to oogenesis proper, but, instead, to the common process of gametogenesis, which, in the female human, begins with the processes of folliculogenesis, oocytogenesis, and ootidogenesis. Oogonia enter meiosis during embryonic development, becoming oocytes. Meiosis begins with DNA replication and meiotic crossing over. It then stops in early prophase. Maintenance of meiotic arrest Mammalian oocytes are maintained in meiotic prophase arrest for a very long time—months in mice, years in humans. Initially the arrest is due to lack of sufficient cell cycle proteins to allow meiotic progression. However, as the oocyte grows, these proteins are synthesized, and meiotic arrest becomes dependent on cyclic AMP. The cyclic AMP is generated by the oocyte by adenylyl cyclase in the oocyte membrane. 
The adenylyl cyclase is kept active by a constitutively active G-protein-coupled Document 2::: The larger ovarian follicles consist of an external fibrovascular coat, connected with the surrounding stroma of the ovary by a network of blood vessels, and an internal coat, which consists of several layers of nucleated cells, called the membrana granulosa. It contains numerous granulosa cells. At one part of the mature follicle the cells of the membrana granulosa are collected into a mass which projects into the cavity of the follicle. This is termed the discus proligerus. Document 3::: Reproductive biology includes both sexual and asexual reproduction. Reproductive biology includes a wide number of fields: Reproductive systems Endocrinology Sexual development (Puberty) Sexual maturity Reproduction Fertility Human reproductive biology Endocrinology Human reproductive biology is primarily controlled through hormones, which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands, and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands. Reproductive systems Internal and external organs are included in the reproductive system. There are two reproductive systems including the male and female, which contain different organs from one another. These systems work together in order to produce offspring. Female reproductive system The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth. These structures include: Ovaries Oviducts Uterus Vagina Mammary Glands Estrogen is one of the sexual reproductive hormones that aid in the sexual reproductive system of the female. 
Male reproductive system The male reproductive system includes testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia. Testosterone, an androgen, although present in both males and females, is relatively more abundant in males. Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system However, the enzyme aromatase is present in testes and capable of synthesizing estrogens from androgens. Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract. Animal Reproductive Biology Animal reproduction oc Document 4::: The germinal epithelium is the epithelial layer of the seminiferous tubules of the testicles. It is also known as the wall of the seminiferous tubules. The cells in the epithelium are connected via tight junctions. There are two types of cells in the germinal epithelium. The large Sertoli cells (which are not dividing) function as supportive cells to the developing sperm. The second cell type are the cells belonging to the spermatogenic cell lineage. These develop to eventually become sperm cells (spermatozoon). Typically, the spermatogenic cells will make four to eight layers in the germinal epithelium. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Where do the eggs develop? A. the ovaries B. the follicles C. the glands D. the uterus Answer:
sciq-7236
multiple_choice
What nucleic acid stores the genetic information?
[ "Nucleus", "Ribosomes", "dna", "molecule" ]
C
Relevant Documents: Document 0::: A nucleic acid sequence is a succession of bases within the nucleotides forming alleles within a DNA (using GACT) or RNA (GACU) molecule. This succession is denoted by a series of a set of five different letters that indicate the order of the nucleotides. By convention, sequences are usually presented from the 5' end to the 3' end. For DNA, with its double helix, there are two possible directions for the notated sequence; of these two, the sense strand is used. Because nucleic acids are normally linear (unbranched) polymers, specifying the sequence is equivalent to defining the covalent structure of the entire molecule. For this reason, the nucleic acid sequence is also termed the primary structure. The sequence represents biological information. Biological deoxyribonucleic acid represents the information which directs the functions of an organism. Nucleic acids also have a secondary structure and tertiary structure. Primary structure is sometimes mistakenly referred to as "primary sequence". However, there is no parallel concept of secondary or tertiary sequence. Nucleotides Nucleic acids consist of a chain of linked units called nucleotides. Each nucleotide consists of three subunits: a phosphate group and a sugar (ribose in the case of RNA, deoxyribose in DNA) make up the backbone of the nucleic acid strand, and attached to the sugar is one of a set of nucleobases. The nucleobases are important in base pairing of strands to form higher-level secondary and tertiary structures such as the famed double helix. The possible letters are A, C, G, and T, representing the four nucleotide bases of a DNA strand – adenine, cytosine, guanine, thymine – covalently linked to a phosphodiester backbone. In the typical case, the sequences are printed abutting one another without gaps, as in the sequence AAAGTCTGAC, read left to right in the 5' to 3' direction. 
With regards to transcription, a sequence is on the coding strand if it has the same order as the transcribed RNA. Document 1::: Experimental approaches of determining the structure of nucleic acids, such as RNA and DNA, can be largely classified into biophysical and biochemical methods. Biophysical methods use the fundamental physical properties of molecules for structure determination, including X-ray crystallography, NMR and cryo-EM. Biochemical methods exploit the chemical properties of nucleic acids using specific reagents and conditions to assay the structure of nucleic acids. Such methods may involve chemical probing with specific reagents, or rely on native or analogue chemistry. Different experimental approaches have unique merits and are suitable for different experimental purposes. Biophysical methods X-ray crystallography X-ray crystallography is not common for nucleic acids alone, since neither DNA nor RNA readily form crystals. This is due to the greater degree of intrinsic disorder and dynamism in nucleic acid structures and the negatively charged (deoxy)ribose-phosphate backbones, which repel each other in close proximity. Therefore, crystallized nucleic acids tend to be complexed with a protein of interest to provide structural order and neutralize the negative charge. Nuclear magnetic resonance spectroscopy (NMR) Nucleic acid NMR is the use of NMR spectroscopy to obtain information about the structure and dynamics of nucleic acid molecules, such as DNA or RNA. As of 2003, nearly half of all known RNA structures had been determined by NMR spectroscopy. Nucleic acid NMR uses similar techniques as protein NMR, but has several differences. Nucleic acids have a smaller percentage of hydrogen atoms, which are the atoms usually observed in NMR, and because nucleic acid double helices are stiff and roughly linear, they do not fold back on themselves to give "long-range" correlations. 
The types of NMR usually done with nucleic acids are 1H or proton NMR, 13C NMR, 15N NMR, and 31P NMR. Two-dimensional NMR methods are almost always used, such as correlation spectroscopy (COSY Document 2::: In molecular biology, a library is a collection of DNA fragments that is stored and propagated in a population of micro-organisms through the process of molecular cloning. There are different types of DNA libraries, including cDNA libraries (formed from reverse-transcribed RNA), genomic libraries (formed from genomic DNA) and randomized mutant libraries (formed by de novo gene synthesis where alternative nucleotides or codons are incorporated). DNA library technology is a mainstay of current molecular biology, genetic engineering, and protein engineering, and the applications of these libraries depend on the source of the original DNA fragments. There are differences in the cloning vectors and techniques used in library preparation, but in general each DNA fragment is uniquely inserted into a cloning vector and the pool of recombinant DNA molecules is then transferred into a population of bacteria (a Bacterial Artificial Chromosome or BAC library) or yeast such that each organism contains on average one construct (vector + insert). As the population of organisms is grown in culture, the DNA molecules contained within them are copied and propagated (thus, "cloned"). Terminology The term "library" can refer to a population of organisms, each of which carries a DNA molecule inserted into a cloning vector, or alternatively to the collection of all of the cloned vector molecules. cDNA libraries A cDNA library represents a sample of the mRNA purified from a particular source (either a collection of cells, a particular tissue, or an entire organism), which has been converted back to a DNA template by the use of the enzyme reverse transcriptase. 
It thus represents the genes that were being actively transcribed in that particular source under the physiological, developmental, or environmental conditions that existed when the mRNA was purified. cDNA libraries can be generated using techniques that promote "full-length" clones or under conditions that generate shorter f Document 3::: What Is Life? The Physical Aspect of the Living Cell is a 1944 science book written for the lay reader by physicist Erwin Schrödinger. The book was based on a course of public lectures delivered by Schrödinger in February 1943, under the auspices of the Dublin Institute for Advanced Studies, where he was Director of Theoretical Physics, at Trinity College, Dublin. The lectures attracted an audience of about 400, who were warned "that the subject-matter was a difficult one and that the lectures could not be termed popular, even though the physicist’s most dreaded weapon, mathematical deduction, would hardly be utilized." Schrödinger's lecture focused on one important question: "how can the events in space and time which take place within the spatial boundary of a living organism be accounted for by physics and chemistry?" In the book, Schrödinger introduced the idea of an "aperiodic crystal" that contained genetic information in its configuration of covalent chemical bonds. In the 1950s, this idea stimulated enthusiasm for discovering the chemical basis of genetic inheritance. Although the existence of some form of hereditary information had been hypothesized since 1869, its role in reproduction and its helical shape were still unknown at the time of Schrödinger's lecture. In retrospect, Schrödinger's aperiodic crystal can be viewed as a well-reasoned theoretical prediction of what biologists should have been looking for during their search for genetic material. In 1953, James D. 
Watson and Francis Crick jointly proposed the double helix structure of deoxyribonucleic acid (DNA) on the basis of, amongst other theoretical insights, X-ray diffraction experiments conducted by Rosalind Franklin. They both credited Schrödinger's book with presenting an early theoretical description of how the storage of genetic information would work, and each independently acknowledged the book as a source of inspiration for their initial researches. Background The book, published i Document 4::: Artificially Expanded Genetic Information System (AEGIS) is a synthetic DNA analog experiment that uses some unnatural base pairs from the laboratories of the Foundation for Applied Molecular Evolution in Gainesville, Florida. AEGIS is a NASA-funded project to try to understand how extraterrestrial life may have developed. The system uses twelve different nucleobases in its genetic code. These include the four canonical nucleobases found in DNA (adenine, cytosine, guanine and thymine) plus eight synthetic nucleobases). AEGIS includes S:B, Z:P, V:J and K:X base pairs. See also Abiogenesis Astrobiology Hachimoji DNA xDNA Hypothetical types of biochemistry Xeno nucleic acid The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What nucleic acid stores the genetic information? A. Nucleus B. Ribosomes C. dna D. molecule Answer:
sciq-1852
multiple_choice
What forms when water vapor condenses around particles in the air?
[ "humidity", "clouds", "wind", "storms" ]
B
Relevant Documents: Document 0::: Condensation is the change of the state of matter from the gas phase into the liquid phase, and is the reverse of vaporization. The word most often refers to the water cycle. It can also be defined as the change in the state of water vapor to liquid water when in contact with a liquid or solid surface or cloud condensation nuclei within the atmosphere. When the transition happens from the gaseous phase into the solid phase directly, the change is called deposition. Initiation Condensation is initiated by the formation of atomic/molecular clusters of that species within its gaseous volume—like rain drop or snow flake formation within clouds—or at the contact between such gaseous phase and a liquid or solid surface. In clouds, this can be catalyzed by water-nucleating proteins, produced by atmospheric microbes, which are capable of binding gaseous or liquid water molecules. Reversibility scenarios A few distinct reversibility scenarios emerge here with respect to the nature of the surface. absorption into the surface of a liquid (either of the same substance or one of its solvents)—is reversible as evaporation. adsorption (as dew droplets) onto solid surface at pressures and temperatures higher than the species' triple point—also reversible as evaporation. adsorption onto solid surface (as supplemental layers of solid) at pressures and temperatures lower than the species' triple point—is reversible as sublimation. Most common scenarios Condensation commonly occurs when a vapor is cooled and/or compressed to its saturation limit when the molecular density in the gas phase reaches its maximal threshold. Vapor cooling and compressing equipment that collects condensed liquids is called a "condenser". Measurement Psychrometry measures the rates of condensation through evaporation into the air moisture at various atmospheric pressures and temperatures. 
Water is the product of its vapor condensation—condensation is the process of such phase conversion. Applicatio Document 1::: Aerosol mass spectrometry is the application of mass spectrometry to the analysis of the composition of aerosol particles. Aerosol particles are defined as solid and liquid particles suspended in a gas (air), with size range of 3 nm to 100 μm in diameter and are produced from natural and anthropogenic sources, through a variety of different processes that include wind-blown suspension and combustion of fossil fuels and biomass. Analysis of these particles is important owing to their major impacts on global climate change, visibility, regional air pollution and human health. Aerosols are very complex in structure, can contain thousands of different chemical compounds within a single particle, and need to be analysed for both size and chemical composition, in real-time or off-line applications. Off-line mass spectrometry is performed on collected particles, while on-line mass spectrometry is performed on particles introduced in real time. History In literature from ancient Rome there are complaints of foul air, while in 1273 the inhabitants of London were discussing the prohibition of coal burning to improve air quality. However, the measurement and analysis of aerosols only became established in the second half of the 19th century. In 1847 Henri Becquerel presented the first concept of particles in the air in his condensation nuclei experiment and his ideas were confirmed in later experiments by Coulier in 1875. These ideas were expanded on between 1880 and 1890 by meteorologist John Aitken who demonstrated the fundamental role of dust particles in the formation of clouds and fogs. Aitken's method for aerosol analysis consisted of counting and sizing particles mounted on a slide, using a microscope. The composition of the particles was determined by their refractive index. 
In the 1920s aerosol measurements, using Aitken's simple microscopic method, became more common place because the negative health effects of industrial aerosols and dust were starting to be re Document 2::: At equilibrium, the relationship between water content and equilibrium relative humidity of a material can be displayed graphically by a curve, the so-called moisture sorption isotherm. For each humidity value, a sorption isotherm indicates the corresponding water content value at a given, constant temperature. If the composition or quality of the material changes, then its sorption behaviour also changes. Because of the complexity of sorption process the isotherms cannot be determined explicitly by calculation, but must be recorded experimentally for each product. The relationship between water content and water activity (aw) is complex. An increase in aw is usually accompanied by an increase in water content, but in a non-linear fashion. This relationship between water activity and moisture content at a given temperature is called the moisture sorption isotherm. These curves are determined experimentally and constitute the fingerprint of a food system. BET theory (Brunauer-Emmett-Teller) provides a calculation to describe the physical adsorption of gas molecules on a solid surface. Because of the complexity of the process, these calculations are only moderately successful; however, Stephen Brunauer was able to classify sorption isotherms into five generalized shapes as shown in Figure 2. He found that Type II and Type III isotherms require highly porous materials or desiccants, with first monolayer adsorption, followed by multilayer adsorption and finally leading to capillary condensation, explaining these materials high moisture capacity at high relative humidity. Care must be used in extracting data from isotherms, as the representation for each axis may vary in its designation. 
Brunauer provided the vertical axis as moles of gas adsorbed divided by the moles of the dry material, and on the horizontal axis he used the ratio of partial pressure of the gas just over the sample, divided by its partial pressure at saturation. More modern isotherms showing the Document 3::: Haze is traditionally an atmospheric phenomenon in which dust, smoke, and other dry particulates suspended in air obscure visibility and the clarity of the sky. The World Meteorological Organization manual of codes includes a classification of particulates causing horizontal obscuration into categories of fog, ice fog, steam fog, mist, haze, smoke, volcanic ash, dust, sand, and snow. Sources for particles that cause haze include farming (ploughing in dry weather), traffic, industry, windy weather, volcanic activity and wildfires. Seen from afar (e.g. an approaching airplane) and depending on the direction of view with respect to the Sun, haze may appear brownish or bluish, while mist tends to be bluish grey instead. Whereas haze often is considered a phenomenon occurring in dry air, mist formation is a phenomenon in saturated, humid air. However, haze particles may act as condensation nuclei that leads to the subsequent vapor condensation and formation of mist droplets; such forms of haze are known as "wet haze". In meteorological literature, the word haze is generally used to denote visibility-reducing aerosols of the wet type suspended in the atmosphere. Such aerosols commonly arise from complex chemical reactions that occur as sulfur dioxide gases emitted during combustion are converted into small droplets of sulfuric acid when exposed. The reactions are enhanced in the presence of sunlight, high relative humidity, and an absence of air flow (wind). A small component of wet-haze aerosols appear to be derived from compounds released by trees when burning, such as terpenes. For all these reasons, wet haze tends to be primarily a warm-season phenomenon. 
Large areas of haze covering many thousands of kilometers may be produced under extensive favorable conditions each summer. Air pollution Haze often occurs when suspended dust and smoke particles accumulate in relatively dry air. When weather conditions block the dispersal of smoke and other pollutants they concen Document 4::: In fluid dynamics, a convection cell is the phenomenon that occurs when density differences exist within a body of liquid or gas. These density differences result in rising and/or falling convection currents, which are the key characteristics of a convection cell. When a volume of fluid is heated, it expands and becomes less dense and thus more buoyant than the surrounding fluid. The colder, denser part of the fluid descends to settle below the warmer, less-dense fluid, and this causes the warmer fluid to rise. Such movement is called convection, and the moving body of liquid is referred to as a convection cell. This particular type of convection, where a horizontal layer of fluid is heated from below, is known as Rayleigh–Bénard convection. Convection usually requires a gravitational field, but in microgravity experiments, thermal convection has been observed without gravitational effects. Fluids are generalized as materials that exhibit the property of flow; however, this behavior is not unique to liquids. Fluid properties can also be observed in gases and even in particulate solids (such as sand, gravel, and larger objects during rock slides). A convection cell is most notable in the formation of clouds with its release and transportation of energy. As air moves along the ground it absorbs heat, loses density and moves up into the atmosphere. When it is forced into the atmosphere, which has a lower air pressure, it cannot contain as much fluid as at a lower altitude, so it releases its moist air, producing rain. In this process the warm air is cooled; it gains density and falls towards the earth and the cell repeats the cycle. 
Convection cells can form in any fluid, including the Earth's atmosphere (where they are called Hadley cells), boiling water, soup (where the cells can be identified by the particles they transport, such as grains of rice), the ocean, or the surface of the Sun. The size of convection cells is largely determined by the fluid's properties. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What forms when water vapor condenses around particles in the air? A. humidity B. clouds C. wind D. storms Answer:
sciq-5173
multiple_choice
What state of matter has a definite volume, but not a definite shape?
[ "solid", "gas", "mixture", "liquid" ]
D
Relevant Documents: Document 0::: States of matter are distinguished by changes in the properties of matter associated with external factors like pressure and temperature. States are usually distinguished by a discontinuity in one of those properties: for example, raising the temperature of ice produces a discontinuity at 0°C, as energy goes into a phase transition, rather than temperature increase. The three classical states of matter are solid, liquid and gas. In the 20th century, however, increased understanding of the more exotic properties of matter resulted in the identification of many additional states of matter, none of which are observed in normal conditions. Low-energy states of matter Classical states Solid: A solid holds a definite shape and volume without a container. The particles are held very close to each other. Amorphous solid: A solid in which there is no far-range order of the positions of the atoms. Crystalline solid: A solid in which atoms, molecules, or ions are packed in regular order. Plastic crystal: A molecular solid with long-range positional order but with constituent molecules retaining rotational freedom. Quasicrystal: A solid in which the positions of the atoms have long-range order, but this is not in a repeating pattern. Liquid: A mostly non-compressible fluid. Able to conform to the shape of its container but retains a (nearly) constant volume independent of pressure. Liquid crystal: Properties intermediate between liquids and crystals. Generally, able to flow like a liquid but exhibiting long-range order. Gas: A compressible fluid. Not only will a gas take the shape of its container but it will also expand to fill the container. Modern states Plasma: Free charged particles, usually in equal numbers, such as ions and electrons. Unlike gases, plasma may self-generate magnetic fields and electric currents and respond strongly and collectively to electromagnetic forces. 
Plasma is very uncommon on Earth (except for the ionosphere), although it is the mo Document 1::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. 
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 2::: A liquid is a nearly incompressible fluid that conforms to the shape of its container but retains a nearly constant volume independent of pressure. It is one of the four fundamental states of matter (the others being solid, gas, and plasma), and is the only state with a definite volume but no fixed shape. The density of a liquid is usually close to that of a solid, and much higher than that of a gas. Therefore, liquid and solid are both termed condensed matter. On the other hand, as liquids and gases share the ability to flow, they are both called fluids. A liquid is made up of tiny vibrating particles of matter, such as atoms, held together by intermolecular bonds. Like a gas, a liquid is able to flow and take the shape of a container. Unlike a gas, a liquid maintains a fairly constant density and does not disperse to fill every space of a container. Although liquid water is abundant on Earth, this state of matter is actually the least common in the known universe, because liquids require a relatively narrow temperature/pressure range to exist. Most known matter in the universe is either gas (as interstellar clouds) or plasma (as stars). Introduction Liquid is one of the four primary states of matter, with the others being solid, gas and plasma. A liquid is a fluid. Unlike a solid, the molecules in a liquid have a much greater freedom to move. The forces that bind the molecules together in a solid are only temporary in a liquid, allowing a liquid to flow while a solid remains rigid. A liquid, like a gas, displays the properties of a fluid. A liquid can flow, assume the shape of a container, and, if placed in a sealed container, will distribute applied pressure evenly to every surface in the container. If liquid is placed in a bag, it can be squeezed into any shape. 
Unlike a gas, a liquid is nearly incompressible, meaning that it occupies nearly a constant volume over a wide range of pressures; it does not generally expand to fill available space in a containe Document 3::: Solid is one of the four fundamental states of matter along with liquid, gas, and plasma. The molecules in a solid are closely packed together and contain the least amount of kinetic energy. A solid is characterized by structural rigidity (as in rigid bodies) and resistance to a force applied to the surface. Unlike a liquid, a solid object does not flow to take on the shape of its container, nor does it expand to fill the entire available volume like a gas. The atoms in a solid are bound to each other, either in a regular geometric lattice (crystalline solids, which include metals and ordinary ice), or irregularly (an amorphous solid such as common window glass). Solids cannot be compressed with little pressure whereas gases can be compressed with little pressure because the molecules in a gas are loosely packed. The branch of physics that deals with solids is called solid-state physics, and is the main branch of condensed matter physics (which also includes liquids). Materials science is primarily concerned with the physical and chemical properties of solids. Solid-state chemistry is especially concerned with the synthesis of novel materials, as well as the science of identification and chemical composition. Microscopic description The atoms, molecules or ions that make up solids may be arranged in an orderly repeating pattern, or irregularly. Materials whose constituents are arranged in a regular pattern are known as crystals. In some cases, the regular ordering can continue unbroken over a large scale, for example diamonds, where each diamond is a single crystal. 
Solid objects that are large enough to see and handle are rarely composed of a single crystal, but instead are made of a large number of single crystals, known as crystallites, whose size can vary from a few nanometers to several meters. Such materials are called polycrystalline. Almost all common metals, and many ceramics, are polycrystalline. In other materials, there is no long-range order in the Document 4::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. 
Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What state of matter has a definite volume, but not a definite shape? A. solid B. gas C. mixture D. liquid Answer:
sciq-7221
multiple_choice
What's the most common disorder from having an extra chromosome?
[ "Turner syndrome", "down syndrome", "Williams syndrome", "cri-du-chat syndrome" ]
B
Relevant Documents: Document 0::: Down syndrome is a chromosomal abnormality characterized by the presence of an extra copy of genetic material on chromosome 21, either in whole (trisomy 21) or part (such as due to translocations). The effects of the extra copy vary greatly from individual to individual, depending on the extent of the extra copy, genetic background, environmental factors, and random chance. Down syndrome can occur in all human populations, and analogous effects have been found in other species, such as chimpanzees and mice. In 2005, researchers were able to create transgenic mice with most of human chromosome 21 (in addition to their normal chromosomes). A typical human karyotype is shown here. Every chromosome has two copies. In the bottom right, there are chromosomal differences between males (XY) and females (XX), which do not concern us. A typical human karyotype is designated as 46,XX or 46,XY, indicating 46 chromosomes with an XX arrangement for females and 46 chromosomes with an XY arrangement for males. For this article, we will use females for the karyotype designation (46,XX). Trisomy 21 Trisomy 21 (47,XY,+21) is caused by a meiotic nondisjunction event. A typical gamete (either egg or sperm) has one copy of each chromosome (23 total). When it is combined with a gamete from the other parent during conception, the child has 46 chromosomes. However, with nondisjunction, a gamete is produced with an extra copy of chromosome 21 (the gamete has 24 chromosomes). When combined with a typical gamete from the other parent, the child now has 47 chromosomes, with three copies of chromosome 21. The trisomy 21 karyotype figure shows the chromosomal arrangement, with the prominent extra chromosome 21. Trisomy 21 is the cause of approximately 95% of observed Down syndrome, with 88% coming from nondisjunction in the maternal gamete and 8% coming from nondisjunction in the paternal gamete. 
Mitotic nondisjunction after conception would lead to mosaicism, and is discussed later. Document 1::: Turner syndrome (TS), also known as 45,X, or 45,X0, is a genetic disorder caused by a sex chromosome monosomy, compared to the two sex chromosomes (XX or XY) in most people, it only affects women. Signs and symptoms vary among those affected. Often, a short and webbed neck, low-set ears, low hairline at the back of the neck, short stature, and swollen hands and feet are seen at birth. Typically, those affected do not develop menstrual periods or breasts without hormone treatment and are unable to have children without reproductive technology. Heart defects, diabetes, and low thyroid hormone occur in the disorder more frequently than average. Most people with Turner syndrome have normal intelligence; however, many have problems with spatial visualization that may be needed in order to learn mathematics. Vision and hearing problems also occur more often than average. Turner syndrome is not usually inherited; rather, it occurs during formation of the reproductive cells in a parent or in early cell division during development. No environmental risks are known, and the mother's age does not play a role. While most people have 46 chromosomes, people with Turner syndrome usually have 45 in some or all cells. The chromosomal abnormality is often present in just some cells, in which case it is known as Turner syndrome with mosaicism. In these cases the symptoms are usually fewer, and possibly none occur at all. Diagnosis is based on physical signs and genetic testing. No cure for Turner syndrome is known. Treatment may help with symptoms. Human growth hormone injections during childhood may increase adult height. Estrogen replacement therapy can promote development of the breasts and hips. Medical care is often required to manage other health problems with which Turner syndrome is associated. Turner syndrome occurs in between one in 2,000 and one in 5,000 females at birth. 
All regions of the world and cultures are affected about equally. Generally people with Turner syndr Document 2::: Monosomy is a form of aneuploidy with the presence of only one chromosome from a pair. Partial monosomy occurs when a portion of one chromosome in a pair is missing. Human monosomy Human conditions due to monosomy: Turner syndrome – People with Turner syndrome typically have one X chromosome instead of the usual two X chromosomes. Turner syndrome is the only full monosomy that is seen in humans — all other cases of full monosomy are lethal and the individual will not survive development. Cri du chat syndrome – (French for "cry of the cat" after the persons' malformed larynx) a partial monosomy caused by a deletion of the end of the short arm of chromosome 5 1p36 deletion syndrome – a partial monosomy caused by a deletion at the end of the short arm of chromosome 1 17q12 microdeletion syndrome - a partial monosomy caused by a deletion of part of the long arm of chromosome 17 See also Anaphase lag Miscarriage Document 3::: Klinefelter syndrome (KS), also known as 47,XXY, is an aneuploid genetic condition where a male has an additional copy of the X chromosome. The primary features are infertility and small, poorly functioning testicles. Usually, symptoms are subtle and subjects do not realize they are affected. Sometimes, symptoms are more evident and may include weaker muscles, greater height, poor motor coordination, less body hair, breast growth, and less interest in sex. Often, these symptoms are noticed only at puberty. Intelligence is usually average, but reading difficulties and problems with speech are more common. Klinefelter syndrome occurs randomly. The extra X chromosome comes from the father and mother nearly equally. An older mother may have a slightly increased risk of a child with KS. 
The syndrome is defined by the presence of at least one extra X chromosome in addition to a Y chromosome yielding a total of 47 or more chromosomes rather than the usual 46. KS is diagnosed by the genetic test known as a karyotype. While no cure is known, a number of treatments may help. Physical therapy, occupational therapy, speech and language therapy, counselling, and adjustments of teaching methods may be useful. Testosterone replacement may be used in those who have significantly lower levels. Enlarged breasts may be removed by surgery. Approximately half of affected males have a chance of fathering children with the help of assisted reproductive technology, but this is expensive and not risk free. XXY males have a ~15-fold higher risk of developing breast cancer than typical males, but still lower than that of females. People with the condition have a nearly normal life expectancy. Klinefelter syndrome is one of the most common chromosomal disorders, occurring in one to two per 1,000 live male births. It is named after American endocrinologist Harry Klinefelter, who identified the condition in the 1940s. In 1956, the extra X chromosome was identified as the cause. Mice can als Document 4::: A chromosomal abnormality, chromosomal anomaly, chromosomal aberration, chromosomal mutation, or chromosomal disorder, is a missing, extra, or irregular portion of chromosomal DNA. These can occur in the form of numerical abnormalities, where there is an atypical number of chromosomes, or as structural abnormalities, where one or more individual chromosomes are altered. Chromosome mutation was formerly used in a strict sense to mean a change in a chromosomal segment, involving more than one gene. Chromosome anomalies usually occur when there is an error in cell division following meiosis or mitosis. 
Chromosome abnormalities may be detected or confirmed by comparing an individual's karyotype, or full set of chromosomes, to a typical karyotype for the species via genetic testing. Numerical abnormality An abnormal number of chromosomes is known as aneuploidy, and occurs when an individual is either missing a chromosome from a pair (resulting in monosomy) or has more than two chromosomes of a pair (trisomy, tetrasomy, etc.). Aneuploidy can be full, involving a whole chromosome missing or added, or partial, where only part of a chromosome is missing or added. Aneuploidy can occur with sex chromosomes or autosomes. An example of trisomy in humans is Down syndrome, which is a developmental disorder caused by an extra copy of chromosome 21; the disorder is therefore also called trisomy 21. An example of monosomy in humans is Turner syndrome, where the individual is born with only one sex chromosome, an X. Sperm aneuploidy Exposure of males to certain lifestyle, environmental and/or occupational hazards may increase the risk of aneuploid spermatozoa. In particular, risk of aneuploidy is increased by tobacco smoking, and occupational exposure to benzene, insecticides, and perfluorinated compounds. Increased aneuploidy is often associated with increased DNA damage in spermatozoa. Structural abnormalities When the chromosome's structure is altered, this can take several The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What's the most common disorder from having an extra chromosome? A. Turner syndrome B. down syndrome C. Williams syndrome D. cri-du-chat syndrome Answer:
sciq-2911
multiple_choice
Where would you find most pollution of ocean water?
[ "coastline", "trenches", "poles", "midocean" ]
A
Relevant Documents: Document 0::: Marine technology is defined by WEGEMT (a European association of 40 universities in 17 countries) as "technologies for the safe use, exploitation, protection of, and intervention in, the marine environment." In this regard, according to WEGEMT, the technologies involved in marine technology are the following: naval architecture, marine engineering, ship design, ship building and ship operations; oil and gas exploration, exploitation, and production; hydrodynamics, navigation, sea surface and sub-surface support, underwater technology and engineering; marine resources (including both renewable and non-renewable marine resources); transport logistics and economics; inland, coastal, short sea and deep sea shipping; protection of the marine environment; leisure and safety. Education and training According to the Cape Fear Community College of Wilmington, North Carolina, the curriculum for a marine technology program provides practical skills and academic background that are essential in succeeding in the area of marine scientific support. Through a marine technology program, students aspiring to become marine technologists will become proficient in the knowledge and skills required of scientific support technicians. The educational preparation includes classroom instruction and practical training aboard ships, such as how to use and maintain electronic navigation devices, physical and chemical measuring instruments, sampling devices, and data acquisition and reduction systems aboard ocean-going and smaller vessels, among other advanced equipment. As far as marine technician programs are concerned, students learn hands-on to troubleshoot, service and repair four- and two-stroke outboards, stern drive, rigging, fuel & lube systems, electrical including diesel engines. Relationship to commerce Marine technology is related to the marine science and technology industry, also known as maritime commerce. 
The Executive Office of Housing and Economic Development (EOHED Document 1::: Aquatic science is the study of the various bodies of water that make up our planet including oceanic and freshwater environments. Aquatic scientists study the movement of water, the chemistry of water, aquatic organisms, aquatic ecosystems, the movement of materials in and out of aquatic ecosystems, and the use of water by humans, among other things. Aquatic scientists examine current processes as well as historic processes, and the water bodies that they study can range from tiny areas measured in millimeters to full oceans. Moreover, aquatic scientists work in Interdisciplinary groups. For example, a physical oceanographer might work with a biological oceanographer to understand how physical processes, such as tropical cyclones or rip currents, affect organisms in the Atlantic Ocean. Chemists and biologists, on the other hand, might work together to see how the chemical makeup of a certain body of water affects the plants and animals that reside there. Aquatic scientists can work to tackle global problems such as global oceanic change and local problems, such as trying to understand why a drinking water supply in a certain area is polluted. There are two main fields of study that fall within the field of aquatic science. These fields of study include oceanography and limnology. Oceanography Oceanography refers to the study of the physical, chemical, and biological characteristics of oceanic environments. Oceanographers study the history, current condition, and future of the planet's oceans. They also study marine life and ecosystems, ocean circulation, plate tectonics, the geology of the seafloor, and the chemical and physical properties of the ocean. Oceanography is interdisciplinary. For example, there are biological oceanographers and marine biologists. These scientists specialize in marine organisms. 
They study how these organisms develop, their relationship with one another, and how they interact and adapt to their environment. Biological oceanographers Document 2::: This is a list of oceanography institutions and programs worldwide. Oceanographic institutions and programs are broadly defined as places where scientific research is carried out relating to oceanography. This list is organized geographically. Some oceanographic institutions are standalone programs, such as non-governmental organizations or government-funded agencies. Other oceanographic institutions are departments within colleges and universities. While oceanographic research happens at many other departments at other colleges and universities, such as Biology and Geology departments, this list focuses on larger departments and large research centers specifically devoted to oceanography and marine science. Aquaria are not listed here. International International oceanographic programs Intergovernmental Oceanographic Commission, UNESCO International Council for the Exploration of the Sea, (ICES) International Hydrographic Organization International Ocean Discovery Program, formerly called the Integrated Ocean Drilling Program. InterRidge, an international research collaboration on oceanic seafloor spreading zones. Mediterranean Science Commission, (CIESM) North Pacific Marine Science Organization (PICES) Scientific Committee on Oceanic Research, part of the International Science Council. Societies and professional affiliations American Geophysical Union Association for the Sciences of Limnology and Oceanography Coastal and Estuarine Research Federation European Geosciences Union The Oceanography Society Institutions by country Australia Australian Institute of Marine Science, Queensland. AIMS Australian Meteorological and Oceanographic Society, a scholarly society. AMOS Commonwealth Scientific and Industrial Research Organisation, Canberra. 
CSIRO Institute for Marine and Antarctic Studies, Hobart, Tasmania. IMAS University of New South Wales, Sydney. Coastal and Regional Oceanography Lab Bangladesh Bangabandhu Sheikh Mujibur Rahman Maritime Document 3::: The Malaspina circumnavigation expedition was an interdisciplinary research project to assess the impact of global change on the oceans and explore their biodiversity. The 250 scientists on board the Hespérides and Sarmiento de Gamboa embarked on an eight-month expedition (starting in December 2010) scientific research with training for young researchers - advancing marine science and fostering the public understanding of science. The project was under the umbrella of the Spanish Ministry of Science and Innovation's Consolider – Ingenio 2010 programme and was led by the Spanish National Research Council (CSIC) with the support of the Spanish Navy. It is named after the original scientific Malaspina Expedition between 1789 and 1794, that was commanded by Alejandro Malaspina. Due to Malaspina's involvement in a conspiracy to overthrow the Spanish government, he was jailed upon his return and a large part of the expedition's reports and collections were put away unpublished, not to see the light again until late in the 20th century. Objectives Assessing the impact of global change on the oceans Global change relates to the impact of human activities on the functioning of the biosphere. These include activities which, although performed locally, have effects on the functioning of the earth's system as a whole. The ocean plays a central role in regulating the planet's climate and is its biggest sink of and other substances produced by human activity. The project will put together Colección Malaspina 2010, a collection of environmental and biological data and samples which will be available to the scientific community for it to evaluate the impacts of future global changes. 
This will be particularly valuable, for example, when new technologies allow levels of pollutants below current thresholds of detection to be evaluated. Exploring the biodiversity of the deep ocean Half the Earth's surface is covered by oceans over 3,000 metres deep, making them the biggest Document 4::: The Ramón Margalef Award for Excellence in Education was launched in 2008 by the Association for the Sciences of Limnology and Oceanography to recognize innovations and excellence in teaching and mentoring students in the fields of limnology and oceanography. Criteria for the award requires "adherence to the highest standards of excellence" in pedagogy as well as verification that the teaching techniques have furthered the field of aquatic science. The award is not affiliated with the Ramon Margalef Prize in Ecology, often referred to as the Ramon Margalef Award, given by the Generalitat de Catalunya in Barcelona. The award has been presented annually since 2009. Winners The winners have included: The information in this table is from the Association for the Sciences of Limnology and Oceanography. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Where would you find most pollution of ocean water? A. coastline B. trenches C. poles D. midocean Answer:
sciq-366
multiple_choice
The kinetic-molecular theory as it applies to gases has how many basic assumptions?
[ "four", "seven", "two", "five" ]
D
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. 
A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 2::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. 
Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate What the student can do and What the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of Document 3::: The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered. The 1995 version has 30 five-way multiple choice questions. Example question (question 4): Gender differences The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher. Document 4::: This is a list of topics that are included in high school physics curricula or textbooks. 
Mathematical Background SI Units Scalar (physics) Euclidean vector Motion graphs and derivatives Pythagorean theorem Trigonometry Motion and forces Motion Force Linear motion Linear motion Displacement Speed Velocity Acceleration Center of mass Mass Momentum Newton's laws of motion Work (physics) Free body diagram Rotational motion Angular momentum (Introduction) Angular velocity Centrifugal force Centripetal force Circular motion Tangential velocity Torque Conservation of energy and momentum Energy Conservation of energy Elastic collision Inelastic collision Inertia Moment of inertia Momentum Kinetic energy Potential energy Rotational energy Electricity and magnetism Ampère's circuital law Capacitor Coulomb's law Diode Direct current Electric charge Electric current Alternating current Electric field Electric potential energy Electron Faraday's law of induction Ion Inductor Joule heating Lenz's law Magnetic field Ohm's law Resistor Transistor Transformer Voltage Heat Entropy First law of thermodynamics Heat Heat transfer Second law of thermodynamics Temperature Thermal energy Thermodynamic cycle Volume (thermodynamics) Work (thermodynamics) Waves Wave Longitudinal wave Transverse waves Transverse wave Standing Waves Wavelength Frequency Light Light ray Speed of light Sound Speed of sound Radio waves Harmonic oscillator Hooke's law Reflection Refraction Snell's law Refractive index Total internal reflection Diffraction Interference (wave propagation) Polarization (waves) Vibrating string Doppler effect Gravity Gravitational potential Newton's law of universal gravitation Newtonian constant of gravitation See also Outline of physics Physics education The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The kinetic-molecular theory as it applies to gases has how many basic assumptions? A. four B. seven C. two D. five Answer:
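The kinetic-molecular theory record above can be illustrated numerically. One standard consequence of the theory's basic assumptions is the root-mean-square molecular speed, v_rms = sqrt(3kT/m). The sketch below (not part of the dataset record; the choice of N2 at 300 K is an illustrative assumption) evaluates it:

```python
import math

# Root-mean-square molecular speed from kinetic-molecular theory:
#   v_rms = sqrt(3 * k_B * T / m)
# Illustrative sketch for one N2 molecule at 300 K.
k_B = 1.380649e-23        # Boltzmann constant, J/K
N_A = 6.02214076e23       # Avogadro constant, 1/mol
T = 300.0                 # temperature, K
m_N2 = 28.0134e-3 / N_A   # mass of one N2 molecule, kg

v_rms = math.sqrt(3.0 * k_B * T / m_N2)
print(round(v_rms))  # ≈ 517 m/s
```

The same formula applied to lighter molecules (smaller m) gives proportionally higher speeds, which is why hydrogen and helium escape planetary atmospheres most easily.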
sciq-11410
multiple_choice
How many possible alleles do the majority of human genes have?
[ "less than four", "two or less", "two or more", "three or more" ]
C
Relevant Documents: Document 0::: The Generalist Genes hypothesis of learning abilities and disabilities was originally coined in an article by Plomin & Kovas (2005). The Generalist Genes hypothesis suggests that most genes associated with common learning disabilities and abilities are generalist in three ways. Firstly, the same genes that influence common learning abilities (e.g., high reading aptitude) are also responsible for common learning disabilities (e.g., reading disability): they are strongly genetically correlated. Secondly, many of the genes associated with one aspect of a learning disability (e.g., vocabulary problems) also influence other aspects of this learning disability (e.g., grammar problems). Thirdly, genes that influence one learning disability (e.g., reading disability) are largely the same as those that influence other learning disabilities (e.g., mathematics disability). The Generalist Genes hypothesis has important implications for education, cognitive sciences and molecular genetics. Document 1::: Alleles Document 2::: Genome-wide complex trait analysis (GCTA) Genome-based restricted maximum likelihood (GREML) is a statistical method for heritability estimation in genetics, which quantifies the total additive contribution of a set of genetic variants to a trait. GCTA is typically applied to common single nucleotide polymorphisms (SNPs) on a genotyping array (or "chip") and thus termed "chip" or "SNP" heritability. GCTA operates by directly quantifying the chance genetic similarity of unrelated individuals and comparing it to their measured similarity on a trait; if two unrelated individuals are relatively similar genetically and also have similar trait measurements, then the measured genetics are likely to causally influence that trait, and the correlation can to some degree tell how much. This can be illustrated by plotting the squared pairwise trait differences between individuals against their estimated degree of relatedness.
GCTA makes a number of modeling assumptions and whether/when these assumptions are satisfied continues to be debated. The GCTA framework has also been extended in a number of ways: quantifying the contribution from multiple SNP categories (i.e. functional partitioning); quantifying the contribution of Gene-Environment interactions; quantifying the contribution of non-additive/non-linear effects of SNPs; and bivariate analyses of multiple phenotypes to quantify their genetic covariance (co-heritability or genetic correlation). GCTA estimates have implications for the potential for discovery from Genome-wide Association Studies (GWAS) as well as the design and accuracy of polygenic scores. GCTA estimates from common variants are typically substantially lower than other estimates of total or narrow-sense heritability (such as from twin or kinship studies), which has contributed to the debate over the Missing heritability problem. History Estimation in biology/animal breeding using standard ANOVA/REML methods of variance components such as heritability, Document 3::: Human Heredity is a peer-reviewed scientific journal covering all aspects of human genetics. It was established in 1948 as Acta Genetica et Statistica Medica, obtaining its current name in 1969. It is published eight times per year by Karger Publishers and the editor-in-chief is Pak Sham (University of Hong Kong). According to the Journal Citation Reports, the journal has a 2017 impact factor of 0.542. Document 4::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. 
ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. How many possible alleles do the majority of human genes have? A. less than four B. two or less C. two or more D. three or more Answer:
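The allele record above concerns loci carrying two or more alleles. As an illustrative sketch (not part of the dataset record), the Hardy–Weinberg relation shows how the frequencies of the two alleles at a biallelic locus, p and q = 1 − p, determine genotype frequencies p², 2pq, and q²:

```python
def hardy_weinberg(p):
    """Genotype frequencies (AA, Aa, aa) at a biallelic locus with
    allele frequencies p and q = 1 - p (illustrative sketch only)."""
    q = 1.0 - p
    return p * p, 2.0 * p * q, q * q

f_aa, f_ab, f_bb = hardy_weinberg(0.6)
print(round(f_aa, 2), round(f_ab, 2), round(f_bb, 2))  # 0.36 0.48 0.16
```

The three genotype frequencies always sum to 1, since (p + q)² = p² + 2pq + q².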
sciq-11040
multiple_choice
Which veins return oxygen-rich blood from the lungs to the heart?
[ "respiratory", "varicose", "jugular", "pulmonary" ]
D
Relevant Documents: Document 0::: The pulmonary veins are the veins that transfer oxygenated blood from the lungs to the heart. The largest pulmonary veins are the four main pulmonary veins, two from each lung that drain into the left atrium of the heart. The pulmonary veins are part of the pulmonary circulation. Structure There are four main pulmonary veins, two from each lung – an inferior and a superior main vein, emerging from each hilum. The main pulmonary veins receive blood from three or four feeding veins in each lung, and drain into the left atrium. The peripheral feeding veins do not follow the bronchial tree. They run between the pulmonary segments from which they drain the blood. At the root of the lung, the right superior pulmonary vein lies in front of and a little below the pulmonary artery; the inferior is situated at the lowest part of the lung hilum. Behind the pulmonary artery is the bronchus. The right main pulmonary veins (containing oxygenated blood) pass behind the right atrium and superior vena cava; the left in front of the descending thoracic aorta. Variation Occasionally the three lobar veins on the right side remain separate, and not infrequently the two left lobar veins end by a common opening into the left atrium. Therefore, the number of pulmonary veins opening into the left atrium can vary between three and five in the healthy population. The two left lobar veins may be united as a single pulmonary vein in about 25% of people; the two right veins may be united in about 3%. Function The pulmonary veins play an essential role in respiration, by receiving blood that has been oxygenated in the alveoli and returning it to the left atrium. Clinical significance As part of the pulmonary circulation they carry oxygenated blood back to the heart, as opposed to the veins of the systemic circulation which carry deoxygenated blood.
On chest X-ray, the diameters of pulmonary veins increase from upper to lower lobes, from 3 mm at the first intercostal space, to 6 mm jus Document 1::: Great vessels are the large vessels that bring blood to and from the heart. These are: Superior vena cava Inferior vena cava Pulmonary arteries Pulmonary veins Aorta Transposition of the great vessels is a group of congenital heart defects involving an abnormal spatial arrangement of any of the great vessels. Document 2::: Veins are blood vessels in the circulatory system of humans and most other animals that carry blood toward the heart. Most veins carry deoxygenated blood from the tissues back to the heart; exceptions are those of the pulmonary and fetal circulations which carry oxygenated blood to the heart. In the systemic circulation arteries carry oxygenated blood away from the heart, and veins return deoxygenated blood to the heart, in the deep veins. There are three sizes of veins, large, medium, and small. Smaller veins are called venules, and the smallest, the post-capillary venules, are microscopic vessels that make up the veins of the microcirculation. Veins are often closer to the skin than arteries. Veins have less smooth muscle and connective tissue and wider internal diameters than arteries. Because of their thinner walls and wider lumens they are able to expand and hold more blood. This greater capacity gives them the term capacitance vessels. At any time, nearly 70% of the total volume of blood in the human body is in the veins. In medium and large sized veins the flow of blood is maintained by one-way (unidirectional) venous valves to prevent backflow. In the lower limbs this is also aided by muscle pumps, also known as venous pumps that exert pressure on intramuscular veins when they contract and drive blood back to the heart. Structure There are three sizes of vein, large, medium, and small. Smaller veins are called venules. The smallest veins are the post-capillary venules.
Veins have a similar three-layered structure to arteries. The layers known as tunicae have a concentric arrangement that forms the wall of the vessel. The outer layer is a thick layer of connective tissue called the tunica externa or adventitia; this layer is absent in the post-capillary venules. The middle layer consists of bands of smooth muscle and is known as the tunica media. The inner layer is a thin lining of endothelium known as the tunica intima. The tunica media in the veins is mu Document 3::: The pulmonary circulation is a division of the circulatory system in all vertebrates. The circuit begins with deoxygenated blood returned from the body to the right atrium of the heart where it is pumped out from the right ventricle to the lungs. In the lungs the blood is oxygenated and returned to the left atrium to complete the circuit. The other division of the circulatory system is the systemic circulation that begins with receiving the oxygenated blood from the pulmonary circulation into the left atrium. From the atrium the oxygenated blood enters the left ventricle where it is pumped out to the rest of the body, returning as deoxygenated blood back to the pulmonary circulation. The blood vessels of the pulmonary circulation are the pulmonary arteries and the pulmonary veins. A separate circulatory circuit known as the bronchial circulation supplies oxygenated blood to the tissue of the larger airways of the lung. Structure De-oxygenated blood leaves the heart, goes to the lungs, and then enters back into the heart. De-oxygenated blood leaves through the right ventricle through the pulmonary artery. From the right atrium, the blood is pumped through the tricuspid valve (or right atrioventricular valve) into the right ventricle. Blood is then pumped from the right ventricle through the pulmonary valve and into the pulmonary artery.
Lungs The pulmonary arteries carry deoxygenated blood to the lungs, where carbon dioxide is released and oxygen is picked up during respiration. Arteries are further divided into very fine capillaries which are extremely thin-walled. The pulmonary veins return oxygenated blood to the left atrium of the heart. Veins Oxygenated blood leaves the lungs through pulmonary veins, which return it to the left part of the heart, completing the pulmonary cycle. This blood then enters the left atrium, which pumps it through the mitral valve into the left ventricle. From the left ventricle, the blood passes through the aortic valve to the Document 4::: Pulmocutaneous circulation is part of the amphibian circulatory system. It is responsible for directing blood to the skin and lungs. Blood flows from the ventricle into an artery called the conus arteriosus and from there into either the left or right truncus arteriosus. They in turn each split the ventricle's output into the pulmocutaneous circuit and the systemic circuit. See also Double circulatory system The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which veins return oxygen-rich blood from the lungs to the heart? A. respiratory B. varicose C. jugular D. pulmonary Answer:
sciq-5959
multiple_choice
What scale measures acidity?
[ "ph scale", "frequency scale", "salinity scale", "richter scale" ]
A
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Test equating traditionally refers to the statistical process of determining comparable scores on different forms of an exam. It can be accomplished using either classical test theory or item response theory. In item response theory, equating is the process of placing scores from two or more parallel test forms onto a common score scale. The result is that scores from two different test forms can be compared directly, or treated as though they came from the same test form. When the tests are not parallel, the general process is called linking. It is the process of equating the units and origins of two scales on which the abilities of students have been estimated from results on different tests. The process is analogous to equating degrees Fahrenheit with degrees Celsius by converting measurements from one scale to the other. The determination of comparable scores is a by-product of equating that results from equating the scales obtained from test results. Purpose Suppose that Dick and Jane both take a test to become licensed in a certain profession. Because the high stakes (you get to practice the profession if you pass the test) may create a temptation to cheat, the organization that oversees the test creates two forms. If we know that Dick scored 60% on form A and Jane scored 70% on form B, do we know for sure which one has a better grasp of the material? What if form A is composed of very difficult items, while form B is relatively easy? Equating analyses are performed to address this very issue, so that scores are as fair as possible. Equating in item response theory In item response theory, person "locations" (measures of some quality being assessed by a test) are estimated on an interval scale; i.e., locations are estimated in relation to a unit and origin. 
It is common in educational assessment to employ tests in order to assess different groups of students with the intention of establishing a common scale by equating the origins, and when appropri Document 2::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test. Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. 
A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 3::: The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. 
The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions relating respectively to ecological concepts (such as population studies and general ecology) on the E test and molecular concepts (such as DNA structure, translation, and biochemistry) on the M test. Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
In the case of a mastery test, this does mean identifying whether the examinee has "mastered" a specified level of the subject matter by comparing their score to the cutscore. However, not all criterion-referenced tests have a cutscore, and the score can simply refer to a person's standing on the subject domain. The ACT is an example of this; there is no cutscore, it simply is an assessment of the student's knowledge of high-school level subject matter. Because of this common misunderstanding, criterion-referenced tests have also been called standards-based assessments by some education agencies, as students are assessed with regard to standards that define what they "should" know, as defined by the state. Co The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What scale measures acidity? A. pH scale B. frequency scale C. salinity scale D. Richter scale Answer:
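The two scoring schemes described in the passages above can be sketched in a few lines of Python: the formula scoring used for the SAT subject test (+1 per correct answer, -1/4 per incorrect answer, 0 per blank) and the criterion-referenced mastery decision against a cutscore. The function names and the 80% default cutscore are illustrative, taken from the worked example in the text rather than from any testing standard.

```python
def raw_score(num_correct, num_incorrect, num_blank=0):
    # Formula scoring as described for the SAT subject test:
    # +1 per correct answer, -1/4 per incorrect answer, 0 for blanks.
    return num_correct - 0.25 * num_incorrect

def passes_mastery(num_correct, num_items, cutscore=0.80):
    # Criterion-referenced mastery decision: pass if and only if the
    # proportion of correct answers meets or exceeds the cutscore.
    return num_correct / num_items >= cutscore
```

Under these rules a candidate with 60 correct and 20 incorrect answers earns a raw score of 55.0, and at the 80% cutscore a candidate answering 80 of 100 items passes while one answering 79 does not.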
sciq-8302
multiple_choice
Groundwater dissolves minerals and rocks into what?
[ "sand", "gravel", "grit", "ions" ]
D
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. 
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Natural occurrence Iron dissolved in groundwater is in the reduced iron II form. If this groundwater comes in c Document 2::: The Géotechnique lecture is an biennial lecture on the topic of soil mechanics, organised by the British Geotechnical Association named after its major scientific journal Géotechnique. This should not be confused with the annual BGA Rankine Lecture. List of Géotechnique Lecturers See also Named lectures Rankine Lecture Terzaghi Lecture External links ICE Géotechnique journal British Geotechnical Association Document 3::: In soil, macropores are defined as cavities that are larger than 75 μm. Functionally, pores of this size host preferential soil solution flow and rapid transport of solutes and colloids. Macropores increase the hydraulic conductivity of soil, allowing water to infiltrate and drain quickly, and shallow groundwater to move relatively rapidly via lateral flow. In soil, macropores are created by plant roots, soil cracks, soil fauna, and by aggregation of soil particles into peds. Macropores may be defined differently in other contexts. Within the context of porous solids (i.e., not porous aggregations such as soil), colloid and surface chemists define macropores as cavities that are larger than 50 nm. See also Characterisation of pore space in soil Nanoporous materials Document 4::: FEFLOW (Finite Element subsurface FLOW system) is a computer program for simulating groundwater flow, mass transfer and heat transfer in porous media and fractured media. The program uses finite element analysis to solve the groundwater flow equation of both saturated and unsaturated conditions as well as mass and heat transport, including fluid density effects and chemical kinetics for multi-component reaction systems. 
History The software was first introduced by Hans-Jörg G. Diersch in 1979. He developed the software in the Institute of Mechanics of the German Academy of Sciences Berlin up to 1990. In 1990 he was one of the founders of WASY GmbH of Berlin, Germany (the acronym WASY translates from German to Institute for Water Resources Planning and Systems Research), where FEFLOW has been developed further, continuously improved and extended as a commercial simulation package. In 2007 the shares of WASY GmbH were purchased by DHI. WASY was merged into DHI and FEFLOW became part of the DHI Group software portfolio. FEFLOW is being further developed at DHI by an international team. Software distribution and services are worldwide. Technology The program is offered in both 32-bit and 64-bit versions for Microsoft Windows and Linux operating systems. FEFLOW's theoretical basis is fully described in the comprehensive FEFLOW book. It covers a wide range of physical and computational issues in the field of porous/fractured-media modeling. The book starts with a more general theory for all relevant flow and transport phenomena on the basis of continuum mechanics, systematically develops the basic framework for important classes of problems (e.g., multiphase/multispecies non-isothermal flow and transport phenomena, variably saturated porous media, free-surface groundwater flow, aquifer-averaged equations, discrete feature elements), introduces finite element methods for solving the basic multidimensional balance equations, in detail discusses a The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Groundwater dissolves minerals and rocks into what? A. sand B. gravel C. grit D. ions Answer:
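FEFLOW solves the full finite-element groundwater flow equations, but the basic constitutive relationship it builds on, Darcy's law, is simple enough to sketch directly. The snippet below is a generic 1-D illustration with assumed parameter values; it is not FEFLOW's API.

```python
import math

def darcy_flux(K, h_in, h_out, L):
    # 1-D Darcy's law for saturated porous-media flow: q = -K * dh/dl,
    # where K is hydraulic conductivity [m/s], h_in and h_out are the
    # hydraulic heads [m] at the two ends of the flow path, and L is
    # the path length [m]. A positive q means flow from h_in toward
    # h_out, i.e. down the head gradient.
    return -K * (h_out - h_in) / L
```

For example, a sand with K = 1e-4 m/s and a 2 m head drop over 100 m yields a specific discharge of 2e-6 m/s.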
sciq-10739
multiple_choice
Which part of the body of amphibians easily absorbs substances from the environment?
[ "skin", "scales", "liver", "Eyes" ]
A
Relavent Documents: Document 0::: H2.00.04.4.01001: Lymphoid tissue H2.00.05.0.00001: Muscle tissue H2.00.05.1.00001: Smooth muscle tissue H2.00.05.2.00001: Striated muscle tissue H2.00.06.0.00001: Nerve tissue H2.00.06.1.00001: Neuron H2.00.06.2.00001: Synapse H2.00.06.2.00001: Neuroglia h3.01: Bones h3.02: Joints h3.03: Muscles h3.04: Alimentary system h3.05: Respiratory system h3.06: Urinary system h3.07: Genital system h3.08: Document 1::: Myomeres are blocks of skeletal muscle tissue arranged in sequence, commonly found in aquatic chordates. Myomeres are separated from adjacent myomeres by connective fascia (myosepta) and most easily seen in larval fishes or in the olm. Myomere counts are sometimes used for identifying specimens, since their number corresponds to the number of vertebrae in the adults. Location varies, with some species containing these only near the tails, while some have them located near the scapular or pelvic girdles. Depending on the species, myomeres could be arranged in an epaxial or hypaxial manner. Hypaxial refers to ventral muscles and related structures while epaxial refers to more dorsal muscles. The horizontal septum divides these two regions in vertebrates from cyclostomes to gnathostomes. In terrestrial chordates, the myomeres become fused as well as indistinct, due to the disappearance of myosepta. Shape The shape of myomeres varies by species. Myomeres are commonly zig-zag, "V" (lancelets), "W" (fishes), or straight (tetrapods)– shaped muscle fibers. Generally, cyclostome myomeres are arranged in vertical strips while those of jawed fishes are folded in a complex matter due to swimming capability evolution. Specifically, myomeres of elasmobranchs and eels are “W”-shaped. Contrastingly, myomeres of tetrapods run vertically and do not display complex folding. Another species with simply-lain myomeres are mudpuppies. Myomeres overlap each other in succession, meaning myomere activation also allows neighboring myomeres to activate. 
Myomeres are made up of myoglobin-rich dark muscle as well as white muscle. Dark muscle, generally, functions as slow-twitch muscle fibers while white muscle is composed of fast-twitch fibers. Function Specifically, three types of myomeres in fish-like chordates include amphioxine (lancelet), cyclostomine (jawless fish), and gnathostomine (jawed fish). A common function shared by all of these is that they function to flex the body lateral Document 2::: The human body is the structure of a human being. It is composed of many different types of cells that together create tissues and subsequently organs and then organ systems. They ensure homeostasis and the viability of the human body. It comprises a head, hair, neck, torso (which includes the thorax and abdomen), arms and hands, legs and feet. The study of the human body includes anatomy, physiology, histology and embryology. The body varies anatomically in known ways. Physiology focuses on the systems and organs of the human body and their functions. Many systems and mechanisms interact in order to maintain homeostasis, with safe levels of substances such as sugar and oxygen in the blood. The body is studied by health professionals, physiologists, anatomists, and artists to assist them in their work. Composition The human body is composed of elements including hydrogen, oxygen, carbon, calcium and phosphorus. These elements reside in trillions of cells and non-cellular components of the body. The adult male body is about 60% water for a total water content of some . This is made up of about of extracellular fluid including about of blood plasma and about of interstitial fluid, and about of fluid inside cells. The content, acidity and composition of the water inside and outside cells is carefully maintained. The main electrolytes in body water outside cells are sodium and chloride, whereas within cells it is potassium and other phosphates. Cells The body contains trillions of cells, the fundamental unit of life. 
At maturity, there are roughly 30–37 trillion cells in the body, an estimate arrived at by totaling the cell numbers of all the organs of the body and cell types. The body is also host to about the same number of non-human cells as well as multicellular organisms which reside in the gastrointestinal tract and on the skin. Not all parts of the body are made from cells. Cells sit in an extracellular matrix that consists of proteins such as collagen, Document 3::: Catch connective tissue (also called mutable collagenous tissue) is a kind of connective tissue found in echinoderms (such as starfish and sea cucumbers) which can change its mechanical properties in a few seconds or minutes through nervous control rather than by muscular means. Connective tissue, including dermis, tendons and ligaments, is one of four main animal tissues. Usual connective tissue does not change its stiffness except in the slow process of aging. Catch connective tissue, however, shows rapid, large and reversible stiffness changes in response to stimulation under nervous control. This connective tissue is specific to echinoderms in which it works in posture maintenance and mechanical defense with low energy expenditure, and in body fission and autotomy. The stiffness changes of this tissue are due to the changes in the stiffness of extracellular materials. The small amount of muscle cells that are sometimes found scattered in this tissue has little influence on the stiffness-change mechanisms. Tissue distribution Catch connective tissue is found in all the extant classes of echinoderms. Sea lilies and feather stars: ligaments connecting ossicles of arms, stalks and cirri. Starfish: body-wall dermis; walls of tube feet. Brittle stars: intervertebral ligaments; autotomy tendons of arm muscles. Sea urchins: ligaments or catch apparatus, connecting spines to tests of sea urchins; tooth ligaments; compass depressor "muscles", which are in fact mostly made of connective tissues. 
Sea cucumbers: body-wall dermis. Early echinoderms were sessile organisms that fed on suspended particles carried by water currents. Their body was covered with imbricate small skeletal plates. The arrangement of plates suggests that plates worked as sliding joints so that the animals could change their body shape: they could possibly take an extended feeding posture and a flat "hiding" posture. The body plates might be connected with catch connective tissue that allowed early Document 4::: Batrachology is the branch of zoology concerned with the study of amphibians including frogs and toads, salamanders, newts, and caecilians. It is a sub-discipline of herpetology, which also includes non-avian reptiles (snakes, lizards, amphisbaenids, turtles, terrapins, tortoises, crocodilians, and the tuatara). Batrachologists may study the evolution, ecology, ethology, or anatomy of amphibians. Amphibians are cold-blooded vertebrates largely found in damp habitats, although many species have special behavioural adaptations that allow them to live in deserts, trees, underground and in regions with wide seasonal variations in temperature. There are over 7250 species of amphibians. Notable batrachologists Jean Marius René Guibé Gabriel Bibron Oskar Boettger George Albert Boulenger Edward Drinker Cope François Marie Daudin Franz Werner Leszek Berger The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which part of the body of amphibians easily absorbs substances from the environment? A. skin B. scales C. liver D. eyes Answer:
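Amphibian skin absorbs substances readily because exchange happens by diffusion across a thin, permeable layer. A minimal sketch of Fick's first law makes the dependence on layer thickness and concentration difference explicit; all parameter values below are illustrative, not measured amphibian data.

```python
def fick_flux(D, area, c_outside, c_inside, thickness):
    # Fick's first law for steady-state diffusion across a thin
    # permeable layer: J = D * A * (c_out - c_in) / d. With D in m^2/s,
    # area in m^2, concentrations in mol/m^3 and thickness in m, the
    # result J is a transport rate in mol/s. Thinner layers and larger
    # concentration differences both increase uptake.
    return D * area * (c_outside - c_inside) / thickness
```

Halving the thickness doubles the flux, which is one reason thin, moist skin (rather than scales) makes an effective exchange surface.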
sciq-9482
multiple_choice
What is the time interval required for one complete wave to pass a point called?
[ "cycle", "period", "half-life", "minute" ]
B
Relavent Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperatureincreases decreases stays the same Impossible to tell/need more information The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. 
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: The actuarial credentialing and exam process usually requires passing a rigorous series of professional examinations, most often taking several years in total, before one can become recognized as a credentialed actuary. In some countries, such as Denmark, most study takes place in a university setting. In others, such as the U.S., most study takes place during employment through a series of examinations. In the UK, and countries based on its process, there is a hybrid university-exam structure. Australia The education system in Australia is divided into three components: an exam-based curriculum; a professionalism course; and work experience. The system is governed by the Institute of Actuaries of Australia. The exam-based curriculum is in three parts. Part I relies on exemptions from an accredited under-graduate degree from either Bond University, Monash University, Macquarie University, University of New South Wales, University of Melbourne, Australian National University or Curtin University. The courses cover subjects including finance, financial mathematics, economics, contingencies, demography, models, probability and statistics. Students may also gain exemptions by passing the exams of the Institute of Actuaries in London. Part II is the Actuarial control cycle and is also offered by each of the universities above. Part III consists of four half-year courses of which two are compulsory and the other two allow specialization. To become an Associate, one needs to complete Part I and Part II of the accreditation process, perform 3 years of recognized work experience, and complete a professionalism course. To become a Fellow, candidates must complete Part I, II, III, and take a professionalism course. 
Work experience is not required, however, as the Institute deems that those who have successfully completed Part III have shown enough level of professionalism. China Actuarial exams were suspended in 2014 but reintroduced in 2023. Denmark In Denmark it normal Document 2::: Single Best Answer (SBA or One Best Answer) is a written examination form of multiple choice questions used extensively in medical education. Structure A single question is posed with typically five alternate answers, from which the candidate must choose the best answer. This method avoids the problems of past examinations of a similar form described as Single Correct Answer. The older form can produce confusion where more than one of the possible answers has some validity. The newer form makes it explicit that more than one answer may have elements that are correct, but that one answer will be superior. Prior to the widespread introduction of SBAs into medical education, the typical form of examination was true-false multiple choice questions. But during the 2000s, educators found that SBAs would be superior. Document 3::: Progress tests are longitudinal, feedback oriented educational assessment tools for the evaluation of development and sustainability of cognitive knowledge during a learning process. A progress test is a written knowledge exam (usually involving multiple choice questions) that is usually administered to all students in the "A" program at the same time and at regular intervals (usually twice to four times yearly) throughout the entire academic program. The test samples the complete knowledge domain expected of new graduates upon completion of their courses, regardless of the year level of the student). The differences between students’ knowledge levels show in the test scores; the further a student has progressed in the curriculum the higher the scores. 
As a result, these scores provide a longitudinal, repeated measures, curriculum-independent assessment of the objectives (in knowledge) of the entire programme. History Since its inception in the late 1970s at both Maastricht University and the University of Missouri–Kansas City independently, the progress test of applied knowledge has been increasingly used in medical and health sciences programs across the globe. It is well established and increasingly used in both undergraduate and postgraduate medical education, both formatively and summatively. Use in academic programs The progress test is currently used by national progress test consortia in the United Kingdom, Italy, The Netherlands, and Germany (including Austria), and in individual schools in Africa, Saudi Arabia, South East Asia, the Caribbean, Australia, New Zealand, Sweden, Finland, and the USA. The National Board of Medical Examiners in the USA also provides progress testing in various countries. The feasibility of an international approach to progress testing has been recently acknowledged and was first demonstrated by Albano et al. in 1996, who compared test scores across German, Dutch and Italian medi
See also Crest factor Superposition principle Wave The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the time interval required for one complete wave to pass a point called? A. cycle B. period C. half-life D. minute Answer:
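The period/frequency relationship (T = 1/f) and the constructive/destructive interference described in the crest passage can both be checked numerically. This is a generic sketch; the amplitude, frequency and sample time are arbitrary.

```python
import math

def period(frequency_hz):
    # The period T is the time interval for one complete wave cycle
    # to pass a point: T = 1 / f.
    return 1.0 / frequency_hz

def superposed_displacement(amplitude, phase_shift, t, f):
    # Sum of two equal-amplitude sine waves of frequency f with a
    # relative phase shift in radians. In phase (shift = 0) the
    # amplitudes add (constructive interference); in antiphase
    # (shift = pi) they cancel (destructive interference).
    w = 2.0 * math.pi * f
    return amplitude * math.sin(w * t) + amplitude * math.sin(w * t + phase_shift)
```

A 50 Hz wave has a period of 0.02 s, and two in-phase unit-amplitude waves superpose to exactly twice the single-wave displacement.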
sciq-4630
multiple_choice
What type of vents are giant tube worms found at?
[ "Heated", "hydrothermal", "oxygen", "temperature" ]
B
Relavent Documents: Document 0::: Worms are many different distantly related bilateral animals that typically have a long cylindrical tube-like body, no limbs, and no eyes (though not always). Worms vary in size from microscopic to over in length for marine polychaete worms (bristle worms); for the African giant earthworm, Microchaetus rappi; and for the marine nemertean worm (bootlace worm), Lineus longissimus. Various types of worm occupy a small variety of parasitic niches, living inside the bodies of other animals. Free-living worm species do not live on land but instead live in marine or freshwater environments or underground by burrowing. In biology, "worm" refers to an obsolete taxon, vermes, used by Carolus Linnaeus and Jean-Baptiste Lamarck for all non-arthropod invertebrate animals, now seen to be paraphyletic. The name stems from the Old English word wyrm. Most animals called "worms" are invertebrates, but the term is also used for the amphibian caecilians and the slowworm Anguis, a legless burrowing lizard. Invertebrate animals commonly called "worms" include annelids (earthworms and marine polychaete or bristle worms), nematodes (roundworms), platyhelminthes (flatworms), marine nemertean worms ("bootlace worms"), marine Chaetognatha (arrow worms), priapulid worms, and insect larvae such as grubs and maggots. Worms may also be called helminths—particularly in medical terminology—when referring to parasitic worms, especially the Nematoda (roundworms) and Cestoda (tapeworms) which reside in the intestines of their host. When an animal or human is said to "have worms", it means that it is infested with parasitic worms, typically roundworms or tapeworms. Lungworm is also a common parasitic worm found in various animal species such as fish and cats. History In taxonomy, "worm" refers to an obsolete grouping, Vermes, used by Carl Linnaeus and Jean-Baptiste Lamarck for all non-arthropod invertebrate animals, now seen to be polyphyletic. 
In 1758, Linnaeus created the first hierarchical Document 1::: Thrombolites (from Ancient Greek θρόμβος thrómbos meaning "clot" and λῐ́θος líthos meaning "stone") are clotted accretionary structures formed in shallow water by the trapping, binding, and cementation of sedimentary grains by biofilms of microorganisms, especially cyanobacteria. Structures Thrombolites have a clotted structure without the laminae of stromatolites. Each clot within a thrombolite mound is a separate cyanobacterial colony. The clots are on the scale of millimetres to centimetres and may be interspersed with sand, mud or sparry carbonate. Clots that make up thrombolites are called thromboids to avoid confusion with other clotted textures. The larger clots make up more than 40% of a thrombolite's volume and each clot has a complex internal structure of cells and rimmed lobes resulting primarily from calcification of the cyanobacterial colony. Very little sediment is found within the clots because the main growth method is calcification rather than sediment trapping. There is active debate about the size of thromboids, with some seeing thromboids as a macrostructural feature (domical hemispheroid) and others viewing thromboids as a mesostructural feature (random polylobate and subspherical mesoclots). Types There are two main types of thrombolites: Calcified microbe thrombolites This type of thrombolite contains clots that are dominantly composed of calcified microfossil components. These clots do not have a fixed form or size and can expand vertically. Furthermore, burrows and trilobite fragments can exist in these thrombolites. Coarse agglutinated thrombolites This type of thrombolite is composed of small openings that trap fine-grained sediments. They are also known as "thrombolitic-stromatolites" due to their close relation with the same composition of stromatolites. Because they trap sediment, their formation is linked to the rise of algal-cyanobacterial mats. 
Differences from stromatolites Thrombolites can be distinguished from microbialite Document 2::: Dahlella caldariensis is a species of leptostracan crustacean which lives on hydrothermal vents in the Pacific Ocean. Description Dahlella may reach a length of from the base of the rostrum to the end of the abdomen. Much of the animal is covered by a large, hinged carapace. Dahlella can be distinguished from other animals in the same family by the presence of a row of denticles (small teeth) on the eyestalks, which it is believed are used to scrape surfaces for food. A similar character is found in Paranebalia (Paranebaliidae), but the form of the eyestalk is very different in the two taxa. Distribution D. caldariensis has been recorded from a small number of sites around hydrothermal vents in the eastern Pacific Ocean near the Galápagos Islands and on the East Pacific Rise. It is one of the deepest-living species of Leptostraca, having been found at depths of over . Etymology The generic name Dahlella commemorates the biologist Erik Dahl of the University of Lund. The specific epithet comes from the Latin word meaning hot bath, and is a reference to the natural habitat of D. caldariensis. Document 3::: Pseudoplanktonic organisms are those that attach themselves to planktonic organisms or other floating objects, such as drifting wood, buoyant shells of organisms such as Spirula, or man-made flotsam. Examples include goose barnacles and the bryozoan Jellyella. By themselves these animals cannot float, which contrasts them with true planktonic organisms, such as Velella and the Portuguese Man o' War, which are buoyant. Pseudoplankton are often found in the guts of filtering zooplankters. Document 4::: Expedition: Being an Account in Words and Artwork of the 2358 A.D. Voyage to Darwin IV is a 1990 speculative evolution and science fiction book written and illustrated by the American artist and writer Wayne Barlowe. 
Written as a first-person account of a 24th-century crewed expedition to the fictional exoplanet of Darwin IV, Expedition describes and discusses an imaginary extraterrestrial ecosystem as if it were real. The extraterrestrial or alien organisms of Darwin IV were designed to be "truly alien", with Barlowe having grown dissatisfied with the common science fiction trope of alien life being similar to life on Earth, especially the notion of intelligent alien humanoids. None of Darwin IV’s wildlife have eyes, external ears, hair, or jaws, and they bear little resemblance to Earthlings. Various sources of inspiration were used for the creature designs, including dinosaurs, modern beasts and different types of vehicles. Expedition garnered very favorable reviews, being praised particularly for its many illustrations and for the level of detail in the text, which serves to maintain the illusion of realism. Several reviewers also criticized the life forms, finding some of them to be implausible or doubting that Darwin IV could actually function as an ecosystem. In 2005, Expedition was adapted into a TV special for the Discovery Channel titled Alien Planet. Barlowe served as the design consultant and one of the executive producers of the adaptation. Premise Expedition is written as though it is published in the year 2366, five years after Barlowe took part in a crewed expedition to the planet Darwin IV. In the 24th century, the exploitation of the Earth's ecosystem has created an environment so toxic that mass extinctions have wiped out nearly all of its nonhuman animal population. Most of the remaining fauna, with the exception of humans themselves, have suffered horrible mutations and can only be found in zoos. Aided by a benevolent and technologically supe The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of vents are giant tube worms found at? A. Heated B. hydrothermal C. oxygen D. 
temperature Answer:
sciq-10454
multiple_choice
What is defined as the velocity of the object at a given moment?
[ "specific gravity", "inertia", "instantaneous velocity", "relativistic velocity" ]
C
Relavent Documents: Document 0::: Velocity is the speed in combination with the direction of motion of an object. Velocity is a fundamental concept in kinematics, the branch of classical mechanics that describes the motion of bodies. Velocity is a physical vector quantity: both magnitude and direction are needed to define it. The scalar absolute value (magnitude) of velocity is called speed, being a coherent derived unit whose quantity is measured in the SI (metric system) as metres per second (m/s or m⋅s−1). For example, "5 metres per second" is a scalar, whereas "5 metres per second east" is a vector. If there is a change in speed, direction or both, then the object is said to be undergoing an acceleration. Constant velocity vs acceleration To have a constant velocity, an object must have a constant speed in a constant direction. Constant direction constrains the object to motion in a straight path; thus, a constant velocity means motion in a straight line at a constant speed. For example, a car moving at a constant 20 kilometres per hour in a circular path has a constant speed, but does not have a constant velocity because its direction changes. Hence, the car is considered to be undergoing an acceleration. Difference between speed and velocity While the terms speed and velocity are often colloquially used interchangeably to connote how fast an object is moving, in scientific terms they are different. Speed, the scalar magnitude of a velocity vector, denotes only how fast an object is moving, while velocity indicates both an object's speed and direction. Equation of motion Average velocity Velocity is defined as the rate of change of position with respect to time, which may also be referred to as the instantaneous velocity to emphasize the distinction from the average velocity. 
In some applications the average velocity of an object might be needed, that is to say, the constant velocity that would provide the same resultant displacement as a variable velocity in the same time interval, , over some Document 1::: Linear motion, also called rectilinear motion, is one-dimensional motion along a straight line, and can therefore be described mathematically using only one spatial dimension. The linear motion can be of two types: uniform linear motion, with constant velocity (zero acceleration); and non-uniform linear motion, with variable velocity (non-zero acceleration). The motion of a particle (a point-like object) along a line can be described by its position , which varies with (time). An example of linear motion is an athlete running a 100-meter dash along a straight track. Linear motion is the most basic of all motion. According to Newton's first law of motion, objects that do not experience any net force will continue to move in a straight line with a constant velocity until they are subjected to a net force. Under everyday circumstances, external forces such as gravity and friction can cause an object to change the direction of its motion, so that its motion cannot be described as linear. One may compare linear motion to general motion. In general motion, a particle's position and velocity are described by vectors, which have a magnitude and direction. In linear motion, the directions of all the vectors describing the system are equal and constant which means the objects move along the same axis and do not change direction. The analysis of such systems may therefore be simplified by neglecting the direction components of the vectors involved and dealing only with the magnitude. Background Displacement The motion in which all the particles of a body move through the same distance in the same time is called translatory motion. There are two types of translatory motions: rectilinear motion; curvilinear motion. 
Since linear motion is a motion in a single dimension, the distance traveled by an object in a particular direction is the same as displacement. The SI unit of displacement is the metre. If x1 is the initial position of an object and x2 is the final position, then mat Document 2::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature: increases; decreases; stays the same; impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. 
In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 3::: The invariant speed or observer invariant speed is a speed which is measured to be the same in all reference frames by all observers. The invariance of the speed of light is one of the postulates of special relativity, and the terms speed of light and invariant speed are often considered synonymous. In non-relativistic classical mechanics, or Newtonian mechanics, finite invariant speed does not exist (the only invariant speed predicted by Newtonian mechanics is infinity). See also Variable speed of light Red Queen's race Minkowski diagram Speed of gravity Document 4::: Specific kinetic energy is the kinetic energy of an object per unit of mass. It is defined as ek = v2/2, where ek is the specific kinetic energy and v is velocity. It has units of J/kg, which is equivalent to m2/s2. Energy (physics) The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is defined as the velocity of the object at a given moment? A. specific gravity B. inertia C. instantaneous velocity D. relativistic velocity Answer:
sciq-4947
multiple_choice
What did rockets help launch into space during their beginning?
[ "shuttles", "satellites", "sensors", "rovers" ]
B
Relavent Documents: Document 0::: A payload specialist (PS) was an individual selected and trained by commercial or research organizations for flights of a specific payload on a NASA Space Shuttle mission. People assigned as payload specialists included individuals selected by the research community, a company or consortium flying a commercial payload aboard the spacecraft, and non-NASA astronauts designated by international partners. The term refers to both the individual and to the position on the Shuttle crew. History The National Aeronautics and Space Act of 1958 states that NASA should provide the "widest practicable and appropriate dissemination of information concerning its activities and the results thereof". The Naugle panel of 1982 concluded that carrying civilians—those not part of the NASA Astronaut Corps—on the Space Shuttle was part of "the purpose of adding to the public's understanding of space flight". Payload specialists usually fly for a single specific mission. Chosen outside the standard NASA mission specialist selection process, they are exempt from certain NASA requirements such as colorblindness. Roger Crouch and Ulf Merbold are examples of those who flew in space despite not meeting NASA physical requirements; the agency's director of crew training Jim Bilodeau said in April 1981 "we'll be able to take everybody but the walking wounded". Payload specialists were not required to be United States citizens, but had to be approved by NASA and undergo rigorous but shorter training. In contrast, a Space Shuttle mission specialist was selected as a NASA astronaut first and then assigned to a mission. Payload specialists on early missions were technical experts to join specific payloads such as a commercial or scientific satellite. On Spacelab and other missions with science components, payload specialists were scientists with expertise in specific experiments. 
The term also applied to representatives from partner nations who were given the opportunity of a first flight on boar Document 1::: Astronauts hold a variety of ranks and positions. Each of these roles carries responsibilities that are essential to the operation of a spacecraft. A spacecraft's cockpit, filled with sophisticated equipment, requires skills differing from those used to manage the scientific equipment on board, and so on. NASA ranks and positions Ranks Members of the NASA Astronaut Corps hold one of two ranks. Astronaut Candidate is the rank of those training to be NASA astronauts. Upon graduation, candidates are promoted to Astronaut and receive their Astronaut Pin. The pin is issued in two grades, silver and gold, with the silver pin awarded to candidates who have successfully completed astronaut training and the gold pin to astronauts who have flown in space. Chief of the Astronaut Office is a position, not a rank. Positions Roscosmos and Soviet space program ranks and positions Ranks Cosmonauts are professional space travellers from Russia. After initial training, cosmonauts are assigned as either a test-cosmonaut (космонавт-испытатель, kosmonavt-ispytatel') or a research-cosmonaut (космонавт-исследователь, kosmonavt-issledovatel'). A test-cosmonaut has a more difficult preparation than a research-cosmonaut and can be the commander or the flight engineer of a spacecraft, while a research-cosmonaut cannot. Higher ranks include pilot-cosmonaut, test-cosmonaut instructor, and research-cosmonaut instructor. Pilot-Cosmonaut of the Russian Federation is a title that is presented to all cosmonauts who fly for the Russian space program. Positions China National Space Administration positions Ranks Similarly to NASA, members of the China National Space Administration (CNSA) hold one of two ranks. Astronaut Candidate is the rank of those training to be CNSA astronauts. 
The positions of Spacecraft Pilot, Flight Engineer, and Mission Payload Specialist were listed in the announcement for the Group 3 selection. Upon graduation, candidates are promoted to Astronaut. Position Document 2::: Rockets and People Between 1994 and 1999 Boris Chertok, with support from his wife Yekaterina Golubkina, created the four-volume book series about the history of the Soviet space industry. The series was originally published in Russian, in 1999. Черток Б.Е. Ракеты и люди — М.: Машиностроение, 1999. (B. Chertok, Rockets and People) Черток Б.Е. Ракеты и люди. Фили — Подлипки — Тюратам — М.: Машиностроение, 1999. (B. Chertok, Rockets and People. Fili — Podlipki — Tyuratam) Черток Б.Е. Ракеты и люди. Горячие дни холодной войны — М.: Машиностроение, 1999. (B. Chertok, Rockets and People. Hot Days of the Cold War) Черток Б.Е. Ракеты и люди. Лунная гонка — М.: Машиностроение, 1999. (B. Chertok, Rockets and People. The Moon Race) Translation into English NASA's History Division published four translated and somewhat edited volumes of the s Document 3::: Amateur rocketry, sometimes known as experimental rocketry or amateur experimental rocketry, is a hobby in which participants experiment with fuels and make their own rocket motors, launching a wide variety of types and sizes of rockets. Amateur rocketeers have been responsible for significant research into hybrid rocket motors, and have built and flown a variety of solid, liquid, and hybrid propellant motors. History Amateur rocketry was an especially popular hobby in the late 1950s and early 1960s following the launch of Sputnik, as described in Homer Hickam's 1998 memoir Rocket Boys. One of the first organizations set up in the US to engage in amateur rocketry was the Pacific Rocket Society established in California in the early 1950s. The group did their research on rockets from a launch site deep in the Mojave Desert. 
In the summer of 1956, 17-year-old Jimmy Blackmon of Charlotte, North Carolina, built a 6-foot rocket in his basement. The rocket was designed to be powered by combined liquid nitrogen, gasoline, and liquid oxygen. On learning that Blackmon wanted to launch his rocket from a nearby farm, the Civil Aeronautics Administration notified the U.S. Army. Blackmon's rocket was examined at Redstone Arsenal and eventually grounded on the basis that some of the material he had used was too weak to control the flow and mixing of the fuel. Interest in the rocketry hobby was spurred to a great extent by the publication of a Scientific American article in June 1957 that described the design, propellant formulations, and launching techniques utilized by typical amateur rocketry groups of the time (including the Reaction Research Society of California). The subsequent publication, in 1960, of a book entitled Rocket Manual for Amateurs by Bertrand R. Brinley provided even more detailed information regarding the hobby, and further contributed to its burgeoning popularity. At this time, amateur rockets nearly always employed either black powder, zinc-sulfur (a Document 4::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. 
Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What did rockets help launch into space during their beginning? A. shuttles B. satellites C. sensors D. rovers Answer:
sciq-10255
multiple_choice
An organism's unique role in the ecosystem is called its what?
[ "adaptation", "focus", "niche", "purpose" ]
C
Relavent Documents: Document 0::: This glossary of biology terms is a list of definitions of fundamental terms and concepts used in biology, the study of life and of living organisms. It is intended as introductory material for novices; for more specific and technical definitions from sub-disciplines and related fields, see Glossary of cell biology, Glossary of genetics, Glossary of evolutionary biology, Glossary of ecology, Glossary of environmental science and Glossary of scientific naming, or any of the organism-specific glossaries in :Category:Glossaries of biology. A B C D E F G H I J K L M N O P R S T U V W X Y Z Related to this search Index of biology articles Outline of biology Glossaries of sub-disciplines and related fields: Glossary of botany Glossary of ecology Glossary of entomology Glossary of environmental science Glossary of genetics Glossary of ichthyology Glossary of ornithology Glossary of scientific naming Glossary of speciation Glossary of virology Document 1::: This glossary of ecology is a list of definitions of terms and concepts in ecology and related fields. For more specific definitions from other glossaries related to ecology, see Glossary of biology, Glossary of evolutionary biology, and Glossary of environmental science. A B C D E F G H I J K L M N O P Q R S T U V W X Y Z See also Outline of ecology History of ecology Document 2::: In ecology, habitat refers to the array of resources, physical and biotic factors that are present in an area, such as to support the survival and reproduction of a particular species. A species habitat can be seen as the physical manifestation of its ecological niche. Thus "habitat" is a species-specific term, fundamentally different from concepts such as environment or vegetation assemblages, for which the term "habitat-type" is more appropriate. The physical factors may include (for example): soil, moisture, range of temperature, and light intensity. 
Biotic factors include the availability of food and the presence or absence of predators. Every species has particular habitat requirements, with habitat generalist species able to thrive in a wide array of environmental conditions while habitat specialist species requiring a very limited set of factors to survive. The habitat of a species is not necessarily found in a geographical area, it can be the interior of a stem, a rotten log, a rock or a clump of moss; a parasitic organism has as its habitat the body of its host, part of the host's body (such as the digestive tract), or a single cell within the host's body. Habitat types are environmental categorizations of different environments based on the characteristics of a given geographical area, particularly vegetation and climate. Thus habitat types do not refer to a single species but to multiple species living in the same area. For example, terrestrial habitat types include forest, steppe, grassland, semi-arid or desert. Fresh-water habitat types include marshes, streams, rivers, lakes, and ponds; marine habitat types include salt marshes, the coast, the intertidal zone, estuaries, reefs, bays, the open sea, the sea bed, deep water and submarine vents. Habitat types may change over time. Causes of change may include a violent event (such as the eruption of a volcano, an earthquake, a tsunami, a wildfire or a change in oceanic currents); or change may occur mo Document 3::: Microbial population biology is the application of the principles of population biology to microorganisms. Distinguishing from other biological disciplines Microbial population biology, in practice, is the application of population ecology and population genetics toward understanding the ecology and evolution of bacteria, archaebacteria, microscopic fungi (such as yeasts), additional microscopic eukaryotes (e.g., "protozoa" and algae), and viruses. 
Microbial population biology also encompasses the evolution and ecology of community interactions (community ecology) between microorganisms, including microbial coevolution and predator-prey interactions. In addition, microbial population biology considers microbial interactions with more macroscopic organisms (e.g., host-parasite interactions), though strictly this should be more from the perspective of the microscopic rather than the macroscopic organism. A good deal of microbial population biology may be described also as microbial evolutionary ecology. On the other hand, typically microbial population biologists (unlike microbial ecologists) are less concerned with questions of the role of microorganisms in ecosystem ecology, which is the study of nutrient cycling and energy movement between biotic as well as abiotic components of ecosystems. Microbial population biology can include aspects of molecular evolution or phylogenetics. Strictly, however, these emphases should be employed toward understanding issues of microbial evolution and ecology rather than as a means of understanding more universal truths applicable to both microscopic and macroscopic organisms. The microorganisms in such endeavors consequently should be recognized as organisms rather than simply as molecular or evolutionary reductionist model systems. Thus, the study of RNA in vitro evolution is not microbial population biology and nor is the in silico generation of phylogenies of otherwise non-microbial sequences, even if aspects of either may Document 4::: Functional ecology is a branch of ecology that focuses on the roles, or functions, that species play in the community or ecosystem in which they occur. In this approach, physiological, anatomical, and life history characteristics of the species are emphasized. 
The term "function" is used to emphasize certain physiological processes rather than discrete properties, describe an organism's role in a trophic system, or illustrate the effects of natural selective processes on an organism. This sub-discipline of ecology represents the crossroads between ecological patterns and the processes and mechanisms that underlie them. It focuses on traits represented in large number of species and can be measured in two ways – the first being screening, which involves measuring a trait across a number of species, and the second being empiricism, which provides quantitative relationships for the traits measured in screening. Functional ecology often emphasizes an integrative approach, using organism traits and activities to understand community dynamics and ecosystem processes, particularly in response to the rapid global changes occurring in earth's environment. Functional ecology sits at the nexus of several disparate disciplines and serves as the unifying principle between evolutionary ecology, evolutionary biology, genetics and genomics, and traditional ecological studies. It explores such areas as "[species'] competitive abilities, patterns of species co-occurrence, community assembly, and the role of different traits on ecosystem functioning". History The notion that ecosystems' functions can be affected by their constituent parts has its origins in the 19th century. Charles Darwin's On The Origin of Species is one of the first texts to directly comment on the effect of biodiversity on ecosystem health by noting a positive correlation between plant density and ecosystem productivity. In his influential 1927 work, Animal Ecology, Charles Elton proposed classifying an ecosyst The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. An organism's unique role in the ecosystem is called its what? A. adaptation B. focus C. niche D. purpose Answer:
sciq-7928
multiple_choice
Each myofibril is made up of two types of proteins, called actin and what?
[ "dynein", "elongation", "Fatty Acid", "myosin" ]
D
Relavent Documents: Document 0::: Myofilaments are the three protein filaments of myofibrils in muscle cells. The main proteins involved are myosin, actin, and titin. Myosin and actin are the contractile proteins and titin is an elastic protein. The myofilaments act together in muscle contraction, and in order of size are a thick one of mostly myosin, a thin one of mostly actin, and a very thin one of mostly titin. Types of muscle tissue are striated skeletal muscle and cardiac muscle, obliquely striated muscle (found in some invertebrates), and non-striated smooth muscle. Various arrangements of myofilaments create different muscles. Striated muscle has transverse bands of filaments. In obliquely striated muscle, the filaments are staggered. Smooth muscle has irregular arrangements of filaments. Structure There are three different types of myofilaments: thick, thin, and elastic filaments. Thick filaments consist primarily of a type of myosin, a motor protein – myosin II. Each thick filament is approximately 15 nm in diameter, and each is made of several hundred molecules of myosin. A myosin molecule is shaped like a golf club, with a tail formed of two intertwined chains and a double globular head projecting from it at an angle. Half of the myosin heads angle to the left and half of them angle to the right, creating an area in the middle of the filament known as the M-region or bare zone. Thin filaments, are 7 nm in diameter, and consist primarily of the protein actin, specifically filamentous F-actin. Each F-actin strand is composed of a string of subunits called globular G-actin. Each G-actin has an active site that can bind to the head of a myosin molecule. Each thin filament also has approximately 40 to 60 molecules of tropomyosin, the protein that blocks the active sites of the thin filaments when the muscle is relaxed. Each tropomyosin molecule has a smaller calcium-binding protein called troponin bound to it. All thin filaments are attached to the Z-line. 
Elastic filaments, 1 nm in Document 1::: Myotilin is a protein that in humans is encoded by the MYOT gene. Myotilin (myofibrillar titin-like protein) also known as TTID (TiTin Immunoglobulin Domain) is a muscle protein that is found within the Z-disc of sarcomeres. Structure Myotilin is a 55.3 kDa protein composed of 496 amino acids. Myotilin was originally identified as a novel alpha-actinin binding partner with two Ig-like domains, that localized to the Z-disc. The I-type Ig-like domains reside at the C-terminal half, and are most homologous to Ig domains 2-3 of palladin and Ig domains 4-5 of myopalladin and more distantly related to Z-disc Ig domains 7 and 8 of titin. The C-terminal region hosts the binding sites for Z-band proteins, and 2 Ig domains are the site of homodimerization for myotilin. By contrast, the N-terminal part of myotilin is unique, consisting of a serine-rich region with no homology to known proteins. Several disease-associated mutations involve serine residues within the serine-rich domain. Myotilin expression in human tissues is mainly restricted to striated muscles and nerves. In muscles, myotilin is predominantly found within the Z-discs. Myotilin forms homodimers and binds alpha-actinin, actin, Filamin C, FATZ-1, FATZ-2 and ZASP. Document 2::: The sliding filament theory explains the mechanism of muscle contraction based on muscle proteins that slide past each other to generate movement. According to the sliding filament theory, the myosin (thick filaments) of muscle fibers slide past the actin (thin filaments) during muscle contraction, while the two groups of filaments remain at relatively constant length. The theory was independently introduced in 1954 by two research teams, one consisting of Andrew Huxley and Rolf Niedergerke from the University of Cambridge, and the other consisting of Hugh Huxley and Jean Hanson from the Massachusetts Institute of Technology. It was originally conceived by Hugh Huxley in 1953. 
Andrew Huxley and Niedergerke introduced it as a "very attractive" hypothesis. Before the 1950s there were several competing theories on muscle contraction, including electrical attraction, protein folding, and protein modification. The novel theory directly introduced a new concept called cross-bridge theory (classically swinging cross-bridge, now mostly referred to as cross-bridge cycle) which explains the molecular mechanism of sliding filament. Cross-bridge theory states that actin and myosin form a protein complex (classically called actomyosin) by attachment of myosin head on the actin filament, thereby forming a sort of cross-bridge between the two filaments. The sliding filament theory is a widely accepted explanation of the mechanism that underlies muscle contraction. History Early works The first muscle protein discovered was myosin by a German scientist Willy Kühne, who extracted and named it in 1864. In 1939 a Russian husband and wife team Vladimir Alexandrovich Engelhardt and Militsa Nikolaevna Lyubimova discovered that myosin had an enzymatic (called ATPase) property that can breakdown ATP to release energy. Albert Szent-Györgyi, a Hungarian physiologist, turned his focus on muscle physiology after winning the Nobel Prize in Physiology or Medicine in 1937 for his works on v Document 3::: Myosins () are a superfamily of motor proteins best known for their roles in muscle contraction and in a wide range of other motility processes in eukaryotes. They are ATP-dependent and responsible for actin-based motility. The first myosin (M2) to be discovered was in 1864 by Wilhelm Kühne. Kühne had extracted a viscous protein from skeletal muscle that he held responsible for keeping the tension state in muscle. He called this protein myosin. The term has been extended to include a group of similar ATPases found in the cells of both striated muscle tissue and smooth muscle tissue. 
Following the discovery in 1973 of enzymes with myosin-like function in Acanthamoeba castellanii, a wide range of divergent myosin genes has been discovered throughout the realm of eukaryotes. Although myosin was originally thought to be restricted to muscle cells (hence myo-(s) + -in), there is no single "myosin"; rather it is a very large superfamily of genes whose protein products share the basic properties of actin binding, ATP hydrolysis (ATPase enzyme activity), and force transduction. Virtually all eukaryotic cells contain myosin isoforms. Some isoforms have specialized functions in certain cell types (such as muscle), while other isoforms are ubiquitous. The structure and function of myosin are globally conserved across species, to the extent that rabbit muscle myosin II will bind to actin from an amoeba. Structure and functions Domains Most myosin molecules are composed of a head, neck, and tail domain. The head domain binds the filamentous actin, and uses ATP hydrolysis to generate force and to "walk" along the filament towards the barbed (+) end (with the exception of myosin VI, which moves towards the pointed (-) end). The neck domain acts as a linker and as a lever arm for transducing force generated by the catalytic motor domain. The neck domain can also serve as a binding site for myosin light chains, which are distinct proteins that form part of a macromolecula Document 4::: A sarcomere (Greek σάρξ sarx "flesh", μέρος meros "part") is the smallest functional unit of striated muscle tissue. It is the repeating unit between two Z-lines. Skeletal muscles are composed of tubular muscle cells (called muscle fibers or myofibers) which are formed during embryonic myogenesis. Muscle fibers contain numerous tubular myofibrils. Myofibrils are composed of repeating sections of sarcomeres, which appear under the microscope as alternating dark and light bands. 
Sarcomeres are composed of long, fibrous proteins as filaments that slide past each other when a muscle contracts or relaxes. The costamere is a different component that connects the sarcomere to the sarcolemma. Two of the important proteins are myosin, which forms the thick filament, and actin, which forms the thin filament. Myosin has a long, fibrous tail and a globular head, which binds to actin. The myosin head also binds to ATP, which is the source of energy for muscle movement. Myosin can only bind to actin when the binding sites on actin are exposed by calcium ions. Actin molecules are bound to the Z-line, which forms the borders of the sarcomere. Other bands appear when the sarcomere is relaxed. The myofibrils of smooth muscle cells are not arranged into sarcomeres. Bands The sarcomeres give skeletal and cardiac muscle their striated appearance, which was first described by Van Leeuwenhoek. A sarcomere is defined as the segment between two neighbouring Z-lines (or Z-discs). In electron micrographs of cross-striated muscle, the Z-line (from the German "zwischen" meaning between) appears in between the I-bands as a dark line that anchors the actin myofilaments. Surrounding the Z-line is the region of the I-band (for isotropic). I-band is the zone of thin filaments that is not superimposed by thick filaments (myosin). Following the I-band is the A-band (for anisotropic). Named for their properties under a polarized light microscope. An A-band contains the entire length of a si The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Each myofibril is made up of two types of proteins, called actin and what? A. dynein B. elongation C. Fatty Acid D. myosin Answer:
sciq-9344
multiple_choice
Along with muscles, what helps the body move with relatively little force?
[ "nerves", "limbs", "glands", "joints" ]
D
Relavent Documents: Document 0::: Proprioception ( ), also called kinaesthesia (or kinesthesia), is the sense of self-movement, force, and body position. Proprioception is mediated by proprioceptors, mechanosensory neurons located within muscles, tendons, and joints. Most animals possess multiple subtypes of proprioceptors, which detect distinct kinematic parameters, such as joint position, movement, and load. Although all mobile animals possess proprioceptors, the structure of the sensory organs can vary across species. Proprioceptive signals are transmitted to the central nervous system, where they are integrated with information from other sensory systems, such as the visual system and the vestibular system, to create an overall representation of body position, movement, and acceleration. In many animals, sensory feedback from proprioceptors is essential for stabilizing body posture and coordinating body movement. System overview In vertebrates, limb movement and velocity (muscle length and the rate of change) are encoded by one group of sensory neurons (type Ia sensory fiber) and another type encode static muscle length (group II neurons). These two types of sensory neurons compose muscle spindles. There is a similar division of encoding in invertebrates; different subgroups of neurons of the Chordotonal organ encode limb position and velocity. To determine the load on a limb, vertebrates use sensory neurons in the Golgi tendon organs: type Ib afferents. These proprioceptors are activated at given muscle forces, which indicate the resistance that muscle is experiencing. Similarly, invertebrates have a mechanism to determine limb load: the Campaniform sensilla. These proprioceptors are active when a limb experiences resistance. A third role for proprioceptors is to determine when a joint is at a specific position. In vertebrates, this is accomplished by Ruffini endings and Pacinian corpuscles. 
These proprioceptors are activated when the joint is at a threshold position, usually at the extre Document 1::: Motor control is the regulation of movement in organisms that possess a nervous system. Motor control includes reflexes as well as directed movement. To control movement, the nervous system must integrate multimodal sensory information (both from the external world as well as proprioception) and elicit the necessary signals to recruit muscles to carry out a goal. This pathway spans many disciplines, including multisensory integration, signal processing, coordination, biomechanics, and cognition, and the computational challenges are often discussed under the term sensorimotor control. Successful motor control is crucial to interacting with the world to carry out goals as well as for posture, balance, and stability. Some researchers (mostly neuroscientists studying movement, such as Daniel Wolpert and Randy Flanagan) argue that motor control is the reason brains exist at all. Neural control of muscle force All movements, e.g. touching your nose, require motor neurons to fire action potentials that results in contraction of muscles. In humans, ~150,000 motor neurons control the contraction of ~600 muscles. To produce movements, a subset of 600 muscles must contract in a temporally precise pattern to produce the right force at the right time. Motor units and force production A single motor neuron and the muscle fibers it innervates are called a motor unit. For example, the rectus femoris contains approximately 1 million muscle fibers, which are controlled by around 1000 motor neurons. Activity in the motor neuron causes contraction in all of the innervated muscle fibers so that they function as a unit. Increasing action potential frequency (spike rate) in the motor neuron increases the muscle fiber contraction force, up to the maximal force. The maximal force depends on the contractile properties of the muscle fibers. 
Within a motor unit, all the muscle fibers are of the same type (e.g. type I (slow twitch) or Type II fibers (fast twitch)), and motor units of mult Document 2::: Kinesiology () is the scientific study of human body movement. Kinesiology addresses physiological, anatomical, biomechanical, pathological, neuropsychological principles and mechanisms of movement. Applications of kinesiology to human health include biomechanics and orthopedics; strength and conditioning; sport psychology; motor control; skill acquisition and motor learning; methods of rehabilitation, such as physical and occupational therapy; and sport and exercise physiology. Studies of human and animal motion include measures from motion tracking systems, electrophysiology of muscle and brain activity, various methods for monitoring physiological function, and other behavioral and cognitive research techniques. Basics Kinesiology studies the science of human movement, performance, and function by applying the fundamental sciences of Cell Biology, Molecular Biology, Chemistry, Biochemistry, Biophysics, Biomechanics, Biomathematics, Biostatistics, Anatomy, Physiology, Exercise Physiology, Pathophysiology, Neuroscience, and Nutritional science. A bachelor's degree in kinesiology can provide strong preparation for graduate study in biomedical research, as well as in professional programs, such as medicine, dentistry, physical therapy, and occupational therapy. The term "kinesiologist" is not a licensed nor professional designation in many countries, with the notable exception of Canada. Individuals with training in this area can teach physical education, work as personal trainers and sport coaches, provide consulting services, conduct research and develop policies related to rehabilitation, human motor performance, ergonomics, and occupational health and safety. 
In North America, kinesiologists may study to earn a Bachelor of Science, Master of Science, or Doctorate of Philosophy degree in Kinesiology or a Bachelor of Kinesiology degree, while in Australia or New Zealand, they are often conferred an Applied Science (Human Movement) degree (or higher). Many doctor Document 3::: Normal aging movement control in humans is about the changes in the muscles, motor neurons, nerves, sensory functions, gait, fatigue, visual and manual responses, in men and women as they get older but who do not have neurological, muscular (atrophy, dystrophy...) or neuromuscular disorder. With aging, neuromuscular movements are impaired, though with training or practice, some aspects may be prevented. Force production For voluntary force production, action potentials occur in the cortex. They propagate in the spinal cord, the motor neurons and the set of muscle fibers they innervate. This results in a twitch which properties are driven by two mechanisms: motor unit recruitment and rate coding. Both mechanisms are affected with aging. For instance, the number of motor units may decrease, the size of the motor units, i.e. the number of muscle fibers they innervate may increase, the frequency at which the action potentials are triggered may be reduced. Consequently, force production is generally impaired in old adults. Aging is associated with decreases in muscle mass and strength. These decreases may be partially due to losses of alpha motor neurons. By the age of 70, these losses occur in both proximal and distal muscles. In biceps brachii and brachialis, old adults show decreased strength (by 1/3) correlated with a reduction in the number of motor units (by 1/2). Old adults show evidence that remaining motor units may become larger as motor units innervate collateral muscle fibers. In first dorsal interosseus, almost all motor units are recruited at moderate rate coding, leading to 30-40% of maximal voluntary contraction (MVC). 
Motor unit discharge rates measured at 50% MVC are not significantly different in the young subjects from those observed in the old adults. However, for the maximal effort contractions, there is an appreciable difference in discharge rates between the two age groups. Discharge rates obtained at 100% of MVC are 64% smaller in the old adul Document 4::: A motor skill is a function that involves specific movements of the body's muscles to perform a certain task. These tasks could include walking, running, or riding a bike. In order to perform this skill, the body's nervous system, muscles, and brain have to all work together. The goal of motor skill is to optimize the ability to perform the skill at the rate of success, precision, and to reduce the energy consumption required for performance. Performance is an act of executing a motor skill or task. Continuous practice of a specific motor skill will result in a greatly improved performance, which leads to motor learning. Motor learning is a relatively permanent change in the ability to perform a skill as a result of continuous practice or experience. A fundamental movement skill is a developed ability to move the body in coordinated ways to achieve consistent performance at demanding physical tasks, such as found in sports, combat or personal locomotion, especially those unique to humans, such as ice skating, skateboarding, kayaking, or horseback riding. Movement skills generally emphasize stability, balance, and a coordinated muscular progression from prime movers (legs, hips, lower back) to secondary movers (shoulders, elbow, wrist) when conducting explosive movements, such as throwing a baseball. In most physical training, development of core musculature is a central focus. In the athletic context, fundamental movement skills draw upon human physiology and sport psychology. Types of motor skills Motor skills are movements and actions of the muscles. 
There are two major groups of motor skills: Gross motor skills – require the use of large muscle groups in our legs, torso, and arms to perform tasks such as: walking, balancing, and crawling. The skill required is not extensive and therefore are usually associated with continuous tasks. Much of the development of these skills occurs during early childhood. We use our gross motor skills on a daily basis without putt The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Along with muscles, what helps the body move with relatively little force? A. nerves B. limbs C. glands D. joints Answer:
sciq-10132
multiple_choice
What type of biomes are found in the salt water of the ocean?
[ "marine", "major", "active", "surreal" ]
A
Relavent Documents: Document 0::: Aquatic science is the study of the various bodies of water that make up our planet including oceanic and freshwater environments. Aquatic scientists study the movement of water, the chemistry of water, aquatic organisms, aquatic ecosystems, the movement of materials in and out of aquatic ecosystems, and the use of water by humans, among other things. Aquatic scientists examine current processes as well as historic processes, and the water bodies that they study can range from tiny areas measured in millimeters to full oceans. Moreover, aquatic scientists work in Interdisciplinary groups. For example, a physical oceanographer might work with a biological oceanographer to understand how physical processes, such as tropical cyclones or rip currents, affect organisms in the Atlantic Ocean. Chemists and biologists, on the other hand, might work together to see how the chemical makeup of a certain body of water affects the plants and animals that reside there. Aquatic scientists can work to tackle global problems such as global oceanic change and local problems, such as trying to understand why a drinking water supply in a certain area is polluted. There are two main fields of study that fall within the field of aquatic science. These fields of study include oceanography and limnology. Oceanography Oceanography refers to the study of the physical, chemical, and biological characteristics of oceanic environments. Oceanographers study the history, current condition, and future of the planet's oceans. They also study marine life and ecosystems, ocean circulation, plate tectonics, the geology of the seafloor, and the chemical and physical properties of the ocean. Oceanography is interdisciplinary. For example, there are biological oceanographers and marine biologists. These scientists specialize in marine organisms. 
They study how these organisms develop, their relationship with one another, and how they interact and adapt to their environment. Biological oceanographers Document 1::: Marine ecosystems are the largest of Earth's aquatic ecosystems and exist in waters that have a high salt content. These systems contrast with freshwater ecosystems, which have a lower salt content. Marine waters cover more than 70% of the surface of the Earth and account for more than 97% of Earth's water supply and 90% of habitable space on Earth. Seawater has an average salinity of 35 parts per thousand of water. Actual salinity varies among different marine ecosystems. Marine ecosystems can be divided into many zones depending upon water depth and shoreline features. The oceanic zone is the vast open part of the ocean where animals such as whales, sharks, and tuna live. The benthic zone consists of substrates below water where many invertebrates live. The intertidal zone is the area between high and low tides. Other near-shore (neritic) zones can include mudflats, seagrass meadows, mangroves, rocky intertidal systems, salt marshes, coral reefs, lagoons. In the deep water, hydrothermal vents may occur where chemosynthetic sulfur bacteria form the base of the food web. Marine ecosystems are characterized by the biological community of organisms that they are associated with and their physical environment. Classes of organisms found in marine ecosystems include brown algae, dinoflagellates, corals, cephalopods, echinoderms, and sharks. Marine ecosystems are important sources of ecosystem services and food and jobs for significant portions of the global population. Human uses of marine ecosystems and pollution in marine ecosystems are significantly threats to the stability of these ecosystems. 
Environmental problems concerning marine ecosystems include unsustainable exploitation of marine resources (for example overfishing of certain species), marine pollution, climate change, and building on coastal areas. Moreover, much of the carbon dioxide causing global warming and heat captured by global warming are absorbed by the ocean, ocean chemistry is changing through Document 2::: A marine coastal ecosystem is a marine ecosystem which occurs where the land meets the ocean. Marine coastal ecosystems include many very different types of marine habitats, each with their own characteristics and species composition. They are characterized by high levels of biodiversity and productivity. For example, estuaries are areas where freshwater rivers meet the saltwater of the ocean, creating an environment that is home to a wide variety of species, including fish, shellfish, and birds. Salt marshes are coastal wetlands which thrive on low-energy shorelines in temperate and high-latitude areas, populated with salt-tolerant plants such as cordgrass and marsh elder that provide important nursery areas for many species of fish and shellfish. Mangrove forests survive in the intertidal zones of tropical or subtropical coasts, populated by salt-tolerant trees that protect habitat for many marine species, including crabs, shrimp, and fish. Further examples are coral reefs and seagrass meadows, which are both found in warm, shallow coastal waters. Coral reefs thrive in nutrient-poor waters on high-energy shorelines that are agitated by waves. They are underwater ecosystem made up of colonies of tiny animals called coral polyps. These polyps secrete hard calcium carbonate skeletons that builds up over time, creating complex and diverse underwater structures. These structures function as some of the most biodiverse ecosystems on the planet, providing habitat and food for a huge range of marine organisms. Seagrass meadows can be adjacent to coral reefs. 
These meadows are underwater grasslands populated by marine flowering plants that provide nursery habitats and food sources for many fish species, crabs and sea turtles, as well as dugongs. In slightly deeper waters are kelp forests, underwater ecosystems found in cold, nutrient-rich waters, primarily in temperate regions. These are dominated by a large brown algae called kelp, a type of seaweed that grows several m Document 3::: AquaMaps is a collaborative project with the aim of producing computer-generated (and ultimately, expert reviewed) predicted global distribution maps for marine species on a 0.5 x 0.5 degree grid of the oceans based on data available through online species databases such as FishBase and SeaLifeBase and species occurrence records from OBIS or GBIF and using an environmental envelope model (see niche modelling) in conjunction with expert input. The underlying model represents a modified version of the relative environmental suitability (RES) model developed by Kristin Kaschner to generate global predictions of marine mammal occurrences. According to the AquaMaps website in August 2013, the project held standardized distribution maps for over 17,300 species of fishes, marine mammals and invertebrates. The project is also expanding to incorporate freshwater species, with more than 600 biodiversity maps for freshwater fishes of the Americas available as at November 2009. AquaMaps predictions have been validated successfully for a number of species using independent data sets and the model was shown to perform equally well or better than other standard species distribution models, when faced with the currently existing suboptimal input data sets. In addition to displaying individual maps per species, AquaMaps provides tools to generate species richness maps by higher taxon, plus a spatial search for all species overlapping a specified grid square. 
There is also the facility to create custom maps for any species via the web by modifying the input parameters and re-running the map generating algorithm in real time, and a variety of other tools including the investigation of effects of climate change on species distributions (see relevant section of the AquaMaps search page). Coordination The project is coordinated by Dr Rainer Froese of IFM-GEOMAR and involves contributions from other research institutes including the Evolutionary Biology and Ecology Lab, Albert-Ludwigs Document 4::: The Biogeography of Deep-Water Chemosynthetic Ecosystems is a field project of the Census of Marine Life programme (CoML). The main aim of ChEss is to determine the biogeography of deep-water chemosynthetic ecosystems at a global scale and to understand the processes driving these ecosystems. ChEss addresses the main questions of CoML on diversity, abundance and distribution of marine species, focusing on deep-water reducing environments such as hydrothermal vents, cold seeps, whale falls, sunken wood and areas of low oxygen that intersect with continental margins and seamounts. Background Deep-sea hydrothermal vents and their associated fauna were first discovered along the Galapagos Rift in the eastern Pacific in 1977. Vents are now known to occur along all active mid ocean ridges and back-arc spreading centres, from fast to ultra-slow spreading ridges. The interest in chemosynthetic environments was strengthened by the discovery of chemosynthetic-based fauna at cold seeps along the base of the Florida Escarpment in 1983. Cold seeps occur along active and passive continental margins. More recently, the study of chemosynthetic fauna has extended to the communities that develop in other reducing habitats such as whale falls, sunken wood and areas of oxygen minima when they intersect with the margin or seamounts. Since the first discovery of hydrothermal vents, more than 600 species have been described from vents and seeps. 
This is equivalent to one new description every two weeks. As biologists, geochemists, and physicists combine research efforts in these systems, new species will certainly be discovered. Moreover, because of the extreme conditions of the vent and seep habitat, certain species may have specific physiological adaptations with interesting results for the biochemical and medical industry. These globally distributed, ephemeral and insular habitats that support endemic faunas offer natural laboratories for studies on dispersal, isolation and evolutio The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of biomes are found in the salt water of the ocean? A. marine B. major C. active D. surreal Answer:
sciq-4930
multiple_choice
What is the most common type of muscle in the human body?
[ "skeletal", "digestive", "fetal", "internal" ]
A
Relavent Documents: Document 0::: Myology is the study of the muscular system, including the study of the structure, function and diseases of muscle. The muscular system consists of skeletal muscle, which contracts to move or position parts of the body (e.g., the bones that articulate at joints), smooth and cardiac muscle that propels, expels or controls the flow of fluids and contained substance. See also Myotomy Oral myology Document 1::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. 
Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New Document 2::: In an isotonic contraction, tension remains the same, whilst the muscle's length changes. Isotonic contractions differ from isokinetic contractions in that in isokinetic contractions the muscle speed remains constant. While superficially identical, as the muscle's force changes via the length-tension relationship during a contraction, an isotonic contraction will keep force constant while velocity changes, but an isokinetic contraction will keep velocity constant while force changes. A near isotonic contraction is known as Auxotonic contraction. There are two types of isotonic contractions: (1) concentric and (2) eccentric. In a concentric contraction, the muscle tension rises to meet the resistance, then remains the same as the muscle shortens. In eccentric, the muscle lengthens due to the resistance being greater than the force the muscle is producing. Concentric This type is typical of most exercise. The external force on the muscle is less than the force the muscle is generating - a shortening contraction. The effect is not visible during the classic biceps curl, which is in fact auxotonic because the resistance (torque due to the weight being lifted) does not remain the same through the exercise. Tension is highest at a parallel to the floor level, and eases off above and below this point. Therefore, tension changes as well as muscle length. Eccentric There are two main features to note regarding eccentric contractions. 
First, the absolute tensions achieved can be very high relative to the muscle's maximum tetanic tension-generating capacity (you can set down a much heavier object than you can lift). Second, the absolute tension is relatively independent of lengthening velocity. Muscle injury and soreness are selectively associated with eccentric contraction. Muscle strengthening using exercises that involve eccentric contractions is lower than using concentric exercises. However, because higher levels of tension are easier to attain during exercises th Document 3::: In muscle physiology, physiological cross-sectional area (PCSA) is the area of the cross section of a muscle perpendicular to its fibers, generally at its largest point. It is typically used to describe the contraction properties of pennate muscles. It is not the same as the anatomical cross-sectional area (ACSA), which is the area of the cross-section of a muscle perpendicular to its longitudinal axis. In a non-pennate muscle the fibers are parallel to the longitudinal axis, and therefore PCSA and ACSA coincide. Definition One advantage of pennate muscles is that more muscle fibers can be packed in parallel, thus allowing the muscle to produce more force, although the fiber angle to the direction of action means that the maximum force in that direction is somewhat less than the maximum force in the fiber direction. The muscle cross-sectional area (blue line in figure 1, also known as anatomical cross-section area, or ACSA) does not accurately represent the number of muscle fibers in the muscle. A better estimate is provided by the total area of the cross-sections perpendicular to the muscle fibers (green lines in figure 1). This measure is known as the physiological cross-sectional area (PCSA), and is commonly calculated and defined by the following formula, developed in 1975 by Alexander and Vernon: PCSA = Vm / lf = m / (ρ · lf), where Vm is the muscle volume, m is the muscle mass, lf is the fiber length, and ρ is the density of the muscle (approximately 1.06 g/cm³ for mammalian muscle). PCSA increases with pennation angle, and with muscle length. 
In a pennate muscle, PCSA is always larger than ACSA. In a non-pennate muscle, it coincides with ACSA. Estimating muscle force from PCSA The total force exerted by the fibers in their oblique direction is proportional to PCSA. If the specific tension of the muscle fibers is known (force exerted by the fibers per unit of PCSA), it can be computed as follows: However, only a component of that force can be used to pull the tendon in the desired direction. This component, which is the true muscle force (also called tendon force), is exerted along the direction of acti Document 4::: Kinesiology () is the scientific study of human body movement. Kinesiology addresses physiological, anatomical, biomechanical, pathological, neuropsychological principles and mechanisms of movement. Applications of kinesiology to human health include biomechanics and orthopedics; strength and conditioning; sport psychology; motor control; skill acquisition and motor learning; methods of rehabilitation, such as physical and occupational therapy; and sport and exercise physiology. Studies of human and animal motion include measures from motion tracking systems, electrophysiology of muscle and brain activity, various methods for monitoring physiological function, and other behavioral and cognitive research techniques. Basics Kinesiology studies the science of human movement, performance, and function by applying the fundamental sciences of Cell Biology, Molecular Biology, Chemistry, Biochemistry, Biophysics, Biomechanics, Biomathematics, Biostatistics, Anatomy, Physiology, Exercise Physiology, Pathophysiology, Neuroscience, and Nutritional science. A bachelor's degree in kinesiology can provide strong preparation for graduate study in biomedical research, as well as in professional programs, such as medicine, dentistry, physical therapy, and occupational therapy. The term "kinesiologist" is not a licensed nor professional designation in many countries, with the notable exception of Canada. 
Individuals with training in this area can teach physical education, work as personal trainers and sport coaches, provide consulting services, conduct research and develop policies related to rehabilitation, human motor performance, ergonomics, and occupational health and safety. In North America, kinesiologists may study to earn a Bachelor of Science, Master of Science, or Doctorate of Philosophy degree in Kinesiology or a Bachelor of Kinesiology degree, while in Australia or New Zealand, they are often conferred an Applied Science (Human Movement) degree (or higher). Many doctor The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the most common type of muscle in the human body? A. skeletal B. digestive C. fetal D. internal Answer:
sciq-8277
multiple_choice
Which factor ruins renewable resources like soil and water?
[ "careless human action", "resentful human action", "hateful human action", "melancholic human action" ]
A
Relevant Documents: Document 0::: Resource refers to all the materials available in our environment which are technologically accessible, economically feasible and culturally sustainable and help us to satisfy our needs and wants. Resources can broadly be classified by their availability — they are classified into renewable and non-renewable resources. They can also be classified as actual and potential on the basis of the level of development and use, on the basis of origin they can be classified as biotic and abiotic, and on the basis of their distribution, as ubiquitous and localised (private, community-owned, national and international resources). An item becomes a resource with time and developing technology. The benefits of resource utilization may include increased wealth, proper functioning of a system, or enhanced well-being. From a human perspective, a natural resource is anything obtained from the environment to satisfy human needs and wants. From a broader biological or ecological perspective, a resource satisfies the needs of a living organism (see biological resource). The concept of resources has been developed across many established areas of work, in economics, biology and ecology, computer science, management, and human resources for example - linked to the concepts of competition, sustainability, conservation, and stewardship. In application within human society, commercial or non-commercial factors require resource allocation through resource management.
The concept of a resource can also be tied to leadership over resources: the things leaders are responsible for, including human resources, together with the management, help, support, or direction they provide while in charge of a professional group, for example as technical experts, innovative leaders, archiving expertise, academic management, association management, business management, healthcare management, military management, public administration, spiritual leadership and social networking administrators. individuals exp Document 1::: Ian Gordon Simmons (born 22 January 1937) is a British geographer. He retired as Professor of Geography from the University of Durham in 2001. He has made significant contributions to environmental history and prehistoric archaeology. Background Simmons grew up in East London and then East Lincolnshire until the age of 12. He studied physical geography (BSc) and holds a PhD from the University of London (early 1960s) on the vegetation history of Dartmoor. He began university lecturing in his early 20s and was Lecturer and then Reader in Geography at the University of Durham from 1962 to 1977, then Professor of Geography at the University of Bristol from 1977 to 1981 before returning to a Chair in Geography at Durham, where he worked until retiring in 2001. In 1972–73, he taught biogeography for a year at York University, Canada and has held other appointments including Visiting Scholar, St. John's College, University of Oxford in the 1990s. Previously, he had been an ACLS postdoctoral fellow at the University of California, Berkeley. Scholarship His research includes the study of the later Mesolithic and early Neolithic in their environmental setting on English uplands, where he has demonstrated the role of these early human communities in initiating some of Britain's characteristic landscape elements.
His work also encompasses the long-term effects of human manipulation of the natural environment and its consequences for resource use and environmental change. This line of work resulted in his last three books, which looked at environmental history on three nested scales: the moorlands of England and Wales, Great Britain, and the Globe. Each dealt with the last 10,000 years and tried to encompass both conventional science-based data and the insights of the social sciences and humanities. Simmons has authored several books on environmental thought and culture over the ages as well as contemporary resource management and environmental problems. Since retireme Document 2::: Energy quality is a measure of the ease with which a form of energy can be converted to useful work or to another form of energy: i.e. its content of thermodynamic free energy. A high quality form of energy has a high content of thermodynamic free energy, and therefore a high proportion of it can be converted to work; whereas with low quality forms of energy, only a small proportion can be converted to work, and the remainder is dissipated as heat. The concept of energy quality is also used in ecology, where it is used to track the flow of energy between different trophic levels in a food chain and in thermoeconomics, where it is used as a measure of economic output per unit of energy. Methods of evaluating energy quality often involve developing a ranking of energy qualities in hierarchical order. Examples: Industrialization, Biology The consideration of energy quality was a fundamental driver of industrialization from the 18th through 20th centuries. Consider for example the industrialization of New England in the 18th century. This refers to the construction of textile mills containing power looms for weaving cloth. The simplest, most economical and straightforward source of energy was provided by water wheels, extracting energy from a millpond behind a dam on a local creek.
If another nearby landowner also decided to build a mill on the same creek, the construction of their dam would lower the overall hydraulic head to power the existing waterwheel, thus hurting power generation and efficiency. This eventually became an issue endemic to the entire region, reducing the overall profitability of older mills as newer ones were built. The search for higher quality energy was a major impetus throughout the 19th and 20th centuries. For example, burning coal to make steam to generate mechanical energy would not have been imaginable in the 18th century; by the end of the 19th century, the use of water wheels was long outmoded. Similarly, the quality of energy from elec Document 3::: Deep Green Resistance (DGR) is a radical environmental movement that views mainstream environmental activism as being ineffective. The group, which perceives the existence of industrial civilization itself as the greatest threat to the natural environment, strives for community organizing to build alternative food, housing, and medical institutions.<ref>"About Us". Deep Green Resistance. 2022.</ref> The organization advocates sabotage against infrastructure, which it views as necessary tactics to achieve its goal of dismantling industrial civilization. Religious and ecological scholar Todd LeVasseur classifies it as an apocalyptic or millenarian movement. Beliefs In the 2011 book Deep Green Resistance, the authors Lierre Keith, Derrick Jensen and Aric McBay state that civilization, particularly industrial civilization, is fundamentally unsustainable and must be actively and urgently dismantled in order to secure a future for all species on the planet. The movement differentiates itself from bright green environmentalism, which is characterized by a focus on personal, technological, or government and corporate solutions, in that it holds these solutions as inadequate. 
DGR believes that lifestyle changes, such as using travel mugs and reusable bags and taking shorter showers, are too small for the large-scale environmental problems the world faces. It also states that the recent surge in environmentalism has become commercial in nature, and thus in itself has been industrialized. The movement asserts that per capita industrial waste produced is orders of magnitude greater than personal waste produced; therefore, it is industrialism that must be ended, and with that, lifestyle changes will follow. DGR calls for the dismantling of industrial civilization, and the return to a pre-agricultural lifestyle. In a piece for Earth Island, Max Wilbert, who says DGR believes 'agriculture is theft', welcomes the collapse of global grid power, and views electricity, whether gene Document 4::: The indirect land use change impacts of biofuels, also known as ILUC or iLUC (pronounced as i-luck), relates to the unintended consequence of releasing more carbon emissions due to land-use changes around the world induced by the expansion of croplands for ethanol or biodiesel production in response to the increased global demand for biofuels. As farmers worldwide respond to higher crop prices in order to maintain the global food supply-and-demand balance, pristine lands are cleared to replace the food crops that were diverted elsewhere to biofuels' production. Because natural lands, such as rainforests and grasslands, store carbon in their soil and biomass as plants grow each year, clearance of wilderness for new farms translates to a net increase in greenhouse gas emissions. Due to this off-site change in the carbon stock of the soil and the biomass, indirect land use change has consequences in the greenhouse gas (GHG) balance of a biofuel. 
Other authors have also argued that indirect land use changes produce other significant social and environmental impacts, affecting biodiversity, water quality, food prices and supply, land tenure, worker migration, and community and cultural stability. History The estimates of carbon intensity for a given biofuel depend on the assumptions regarding several variables. As of 2008, multiple full life cycle studies had found that corn ethanol, cellulosic ethanol and Brazilian sugarcane ethanol produce lower greenhouse gas emissions than gasoline. None of these studies, however, considered the effects of indirect land-use changes, and though land use impacts were acknowledged, estimation was considered too complex and difficult to model. A controversial paper published in February 2008 in Sciencexpress by a team led by Searchinger from Princeton University concluded that such effects offset the (positive) direct effects of both corn and cellulosic ethanol and that Brazilian sugarcane performed better, but still resulted in a sma The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Which factor ruins renewable resources like soil and water? A. careless human action B. resentful human action C. hateful human action D. melancholic human action Answer:
sciq-8628
multiple_choice
Pulmonary ventilation in mammals occurs via what?
[ "inflammation", "osmosis", "ingestion", "inhalation" ]
D
Relevant Documents: Document 0::: The control of ventilation refers to the physiological mechanisms involved in the control of breathing, which is the movement of air into and out of the lungs. Ventilation facilitates respiration. Respiration refers to the utilization of oxygen and balancing of carbon dioxide by the body as a whole, or by individual cells in cellular respiration. The most important function of breathing is the supplying of oxygen to the body and balancing of the carbon dioxide levels. Under most conditions, the partial pressure of carbon dioxide (PCO2), or concentration of carbon dioxide, controls the respiratory rate. The peripheral chemoreceptors that detect changes in the levels of oxygen and carbon dioxide are located in the arterial aortic bodies and the carotid bodies. Central chemoreceptors are primarily sensitive to changes in the pH of the blood (resulting from changes in the levels of carbon dioxide), and they are located on the medulla oblongata near to the medullar respiratory groups of the respiratory center. Information from the peripheral chemoreceptors is conveyed along nerves to the respiratory groups of the respiratory center. There are four respiratory groups, two in the medulla and two in the pons. The two groups in the pons are known as the pontine respiratory group. The four are the dorsal respiratory group, in the medulla; the ventral respiratory group, in the medulla; the pneumotaxic center, in various nuclei of the pons; and the apneustic center, a nucleus of the pons. From the respiratory center, the muscles of respiration, in particular the diaphragm, are activated to cause air to move in and out of the lungs. Control of respiratory rhythm Ventilatory pattern Breathing is normally an unconscious, involuntary, automatic process. The pattern of motor stimuli during breathing can be divided into an inhalation stage and an exhalation stage. Inhalation shows a sudden, ramped increase in motor discharge to the respiratory muscles (and the pharyngeal constrictor muscles).
Before the end of inh Document 1::: Speech science refers to the study of production, transmission and perception of speech. Speech science involves anatomy, in particular the anatomy of the oro-facial region and neuroanatomy, physiology, and acoustics. Speech production The production of speech is a highly complex motor task that involves approximately 100 orofacial, laryngeal, pharyngeal, and respiratory muscles. Precise and expeditious timing of these muscles is essential for the production of temporally complex speech sounds, which are characterized by transitions as short as 10 ms between frequency bands and an average speaking rate of approximately 15 sounds per second. Speech production requires airflow from the lungs (respiration) to be phonated through the vocal folds of the larynx (phonation) and resonated in the vocal cavities shaped by the jaw, soft palate, lips, tongue and other articulators (articulation). Respiration Respiration is the physical process of gas exchange between an organism and its environment involving four steps (ventilation, distribution, perfusion and diffusion) and two processes (inspiration and expiration). Respiration can be described as the mechanical process of air flowing into and out of the lungs on the principle of Boyle's law, stating that, as the volume of a container increases, the air pressure will decrease. This relatively negative pressure will cause air to enter the container until the pressure is equalized. During inspiration of air, the diaphragm contracts and the lungs expand drawn by pleurae through surface tension and negative pressure. When the lungs expand, air pressure becomes negative compared to atmospheric pressure and air will flow from the area of higher pressure to fill the lungs. Forced inspiration for speech uses accessory muscles to elevate the rib cage and enlarge the thoracic cavity in the vertical and lateral dimensions. 
During forced expiration for speech, muscles of the trunk and abdomen reduce the size of the thoracic cavity by Document 2::: Breathing (spiration or ventilation) is the process of moving air into and from the lungs to facilitate gas exchange with the internal environment, mostly to flush out carbon dioxide and bring in oxygen. All aerobic creatures need oxygen for cellular respiration, which extracts energy from the reaction of oxygen with molecules derived from food and produces carbon dioxide as a waste product. Breathing, or external respiration, brings air into the lungs where gas exchange takes place in the alveoli through diffusion. The body's circulatory system transports these gases to and from the cells, where cellular respiration takes place. The breathing of all vertebrates with lungs consists of repetitive cycles of inhalation and exhalation through a highly branched system of tubes or airways which lead from the nose to the alveoli. The number of respiratory cycles per minute is the breathing or respiratory rate, and is one of the four primary vital signs of life. Under normal conditions the breathing depth and rate is automatically, and unconsciously, controlled by several homeostatic mechanisms which keep the partial pressures of carbon dioxide and oxygen in the arterial blood constant. Keeping the partial pressure of carbon dioxide in the arterial blood unchanged under a wide variety of physiological circumstances, contributes significantly to tight control of the pH of the extracellular fluids (ECF). Over-breathing (hyperventilation) and under-breathing (hypoventilation), which decrease and increase the arterial partial pressure of carbon dioxide respectively, cause a rise in the pH of ECF in the first case, and a lowering of the pH in the second. Both cause distressing symptoms. Breathing has other important functions. It provides a mechanism for speech, laughter and similar expressions of the emotions. 
It is also used for reflexes such as yawning, coughing and sneezing. Animals that cannot thermoregulate by perspiration, because they lack sufficient sweat glands, may Document 3::: Pulmonary pathology is the subspecialty of surgical pathology which deals with the diagnosis and characterization of neoplastic and non-neoplastic diseases of the lungs and thoracic pleura. Diagnostic specimens are often obtained via bronchoscopic transbronchial biopsy, CT-guided percutaneous biopsy, or video-assisted thoracic surgery (VATS). The diagnosis of inflammatory or fibrotic diseases of the lungs is considered by many pathologists to be particularly challenging. Anatomical pathology Document 4::: Mucociliary clearance (MCC), mucociliary transport, or the mucociliary escalator, describes the self-clearing mechanism of the airways in the respiratory system. It is one of the two protective processes for the lungs in removing inhaled particles including pathogens before they can reach the delicate tissue of the lungs. The other clearance mechanism is provided by the cough reflex. Mucociliary clearance has a major role in pulmonary hygiene. MCC effectiveness relies on the correct properties of the airway surface liquid produced, both of the periciliary sol layer and the overlying mucus gel layer, and of the number and quality of the cilia present in the lining of the airways. An important factor is the rate of mucin secretion. The ion channels CFTR and ENaC work together to maintain the necessary hydration of the airway surface liquid. Any disturbance in the closely regulated functioning of the cilia can cause a disease. Disturbances in the structural formation of the cilia can cause a number of ciliopathies, notably primary ciliary dyskinesia. Cigarette smoke exposure can cause shortening of the cilia. Function In the upper part of the respiratory tract the nasal hair in the nostrils traps large particles, and the sneeze reflex may also be triggered to expel them. 
The nasal mucosa also traps particles preventing their entry further into the tract. In the rest of the respiratory tract, particles of different sizes become deposited along different parts of the airways. Larger particles are trapped higher up in the larger bronchi. As the airways become narrower only smaller particles can pass. The branchings of the airways cause turbulence in the airflow at all of their junctions where particles can then be deposited and they never reach the alveoli. Only very small pathogens are able to gain entry to the alveoli. Mucociliary clearance functions to remove these particulates and also to trap and remove pathogens from the airways, in order to protect the delicate The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Pulmonary ventilation in mammals occurs via what? A. inflammation B. osmosis C. ingestion D. inhalation Answer:
scienceQA-10321
multiple_choice
What do these two changes have in common? cutting an apple ice melting in a glass
[ "Both are only physical changes.", "Both are caused by cooling.", "Both are chemical changes.", "Both are caused by heating." ]
A
Step 1: Think about each change. Cutting an apple is a physical change. The apple gets a different shape. But it is still made of the same type of matter as the uncut apple. Ice melting in a glass is a change of state. So, it is a physical change. The solid ice becomes liquid, but it is still made of water. A different type of matter is not made. Step 2: Look at each answer choice. Both are only physical changes. Both changes are physical changes. No new matter is created. Both are chemical changes. Both changes are physical changes. They are not chemical changes. Both are caused by heating. Ice melting is caused by heating. But cutting an apple is not. Both are caused by cooling. Neither change is caused by cooling.
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, or (d) is impossible to tell/more information is needed. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds. Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate. A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density. An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge. Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change. Examples Heating and cooling Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. 
Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation. Magnetism Ferro-magnetic materials can become magnetic. The process is reve Document 2::: Adaptive comparative judgement is a technique borrowed from psychophysics which is able to generate reliable results for educational assessment – as such it is an alternative to traditional exam script marking. In the approach, judges are presented with pairs of student work and are then asked to choose which is better, one or the other. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria. Introduction Traditional exam script marking began in Cambridge 1792 when, with undergraduate numbers rising, the importance of proper ranking of students was growing. So in 1792 the new Proctor of Examinations, William Farish, introduced marking, a process in which every examiner gives a numerical score to each response by every student, and the overall total mark puts the students in the final rank order. Francis Galton (1869) noted that, in an unidentified year about 1863, the Senior Wrangler scored 7,634 out of a maximum of 17,000, while the Second Wrangler scored 4,123. (The 'Wooden Spoon' scored only 237.) Prior to 1792, a team of Cambridge examiners convened at 5pm on the last day of examining, reviewed the 19 papers each student had sat – and published their rank order at midnight. Marking solved the problems of numbers and prevented unfair personal bias, and its introduction was a step towards modern objective testing, the format it is best suited to. But the technology of testing that followed, with its major emphasis on reliability and the automatisation of marking, has been an uncomfortable partner for some areas of educational achievement: assessing writing or speaking, and other kinds of performance need something more qualitative and judgemental. 
The technique of Adaptive Comparative Judgement is an alternative to marking. It returns to the pre-1792 idea of sorting papers according to their quality, but retains the guarantee of reliability and fairness. It is by far the most rel Document 3::: Thermofluids is a branch of science and engineering encompassing four intersecting fields: heat transfer, thermodynamics, fluid mechanics, and combustion. The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids". Heat transfer Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer. Sections include: energy transfer by heat, work and mass; laws of thermodynamics; entropy; refrigeration techniques; properties and nature of pure substances; and applications. Engineering applications include predicting and analysing the performance of machines. Thermodynamics Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems. Fluid mechanics Fluid mechanics is the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion.
Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance. Sections include: Flu Document 4::: Test equating traditionally refers to the statistical process of determining comparable scores on different forms of an exam. It can be accomplished using either classical test theory or item response theory. In item response theory, equating is the process of placing scores from two or more parallel test forms onto a common score scale. The result is that scores from two different test forms can be compared directly, or treated as though they came from the same test form. When the tests are not parallel, the general process is called linking. It is the process of equating the units and origins of two scales on which the abilities of students have been estimated from results on different tests. The process is analogous to equating degrees Fahrenheit with degrees Celsius by converting measurements from one scale to the other. The determination of comparable scores is a by-product of equating that results from equating the scales obtained from test results. Purpose Suppose that Dick and Jane both take a test to become licensed in a certain profession. Because the high stakes (you get to practice the profession if you pass the test) may create a temptation to cheat, the organization that oversees the test creates two forms. If we know that Dick scored 60% on form A and Jane scored 70% on form B, do we know for sure which one has a better grasp of the material? What if form A is composed of very difficult items, while form B is relatively easy? Equating analyses are performed to address this very issue, so that scores are as fair as possible. 
Equating in item response theory In item response theory, person "locations" (measures of some quality being assessed by a test) are estimated on an interval scale; i.e., locations are estimated in relation to a unit and origin. It is common in educational assessment to employ tests in order to assess different groups of students with the intention of establishing a common scale by equating the origins, and when appropri The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do these two changes have in common? cutting an apple ice melting in a glass A. Both are only physical changes. B. Both are caused by cooling. C. Both are chemical changes. D. Both are caused by heating. Answer:
sciq-7012
multiple_choice
All the atoms of a given element have the same number of what in their nucleus, though they may have different numbers of neutrons?
[ "compounds", "protons", "molecules", "electrons" ]
B
Relevant Documents: Document 0::: Isotopes are distinct nuclear species (or nuclides, as a technical term) of the same chemical element. They have the same atomic number (number of protons in their nuclei) and position in the periodic table (and hence belong to the same chemical element), but differ in nucleon numbers (mass numbers) due to different numbers of neutrons in their nuclei. While all isotopes of a given element have almost the same chemical properties, they have different atomic masses and physical properties. The term isotope is formed from the Greek roots isos (ἴσος "equal") and topos (τόπος "place"), meaning "the same place"; thus, the meaning behind the name is that different isotopes of a single element occupy the same position on the periodic table. It was coined by Scottish doctor and writer Margaret Todd in 1913 in a suggestion to the British chemist Frederick Soddy. The number of protons within the atom's nucleus is called its atomic number and is equal to the number of electrons in the neutral (non-ionized) atom. Each atomic number identifies a specific element, but not the isotope; an atom of a given element may have a wide range in its number of neutrons. The number of nucleons (both protons and neutrons) in the nucleus is the atom's mass number, and each isotope of a given element has a different mass number. For example, carbon-12, carbon-13, and carbon-14 are three isotopes of the element carbon with mass numbers 12, 13, and 14, respectively. The atomic number of carbon is 6, which means that every carbon atom has 6 protons so that the neutron numbers of these isotopes are 6, 7, and 8 respectively. Isotope vs. nuclide A nuclide is a species of an atom with a specific number of protons and neutrons in the nucleus, for example, carbon-13 with 6 protons and 7 neutrons.
The nuclide concept (referring to individual nuclear species) emphasizes nuclear properties over chemical properties, whereas the isotope concept (grouping all atoms of each element) emphasizes chemical over Document 1::: An atom is a particle that consists of a nucleus of protons and neutrons surrounded by an electromagnetically-bound cloud of electrons. The atom is the basic particle of the chemical elements, and the chemical elements are distinguished from each other by the number of protons that are in their atoms. For example, any atom that contains 11 protons is sodium, and any atom that contains 29 protons is copper. The number of neutrons defines the isotope of the element. Atoms are extremely small, typically around 100 picometers across. A human hair is about a million carbon atoms wide. This is smaller than the shortest wavelength of visible light, which means humans cannot see atoms with conventional microscopes. Atoms are so small that accurately predicting their behavior using classical physics is not possible due to quantum effects. More than 99.94% of an atom's mass is in the nucleus. Each proton has a positive electric charge, while each electron has a negative charge, and the neutrons, if any are present, have no electric charge. If the numbers of protons and electrons are equal, as they normally are, then the atom is electrically neutral. If an atom has more electrons than protons, then it has an overall negative charge, and is called a negative ion (or anion). Conversely, if it has more protons than electrons, it has a positive charge, and is called a positive ion (or cation). The electrons of an atom are attracted to the protons in an atomic nucleus by the electromagnetic force. The protons and neutrons in the nucleus are attracted to each other by the nuclear force. This force is usually stronger than the electromagnetic force that repels the positively charged protons from one another. 
Under certain circumstances, the repelling electromagnetic force becomes stronger than the nuclear force. In this case, the nucleus splits and leaves behind different elements. This is a form of nuclear decay. Atoms can attach to one or more other atoms by chemical bonds to Document 2::: The periodic table is an arrangement of the chemical elements, structured by their atomic number, electron configuration and recurring chemical properties. In the basic form, elements are presented in order of increasing atomic number, in the reading sequence. Then, rows and columns are created by starting new rows and inserting blank cells, so that rows (periods) and columns (groups) show elements with recurring properties (called periodicity). For example, all elements in group (column) 18 are noble gases that are largely—though not completely—unreactive. The history of the periodic table reflects over two centuries of growth in the understanding of the chemical and physical properties of the elements, with major contributions made by Antoine-Laurent de Lavoisier, Johann Wolfgang Döbereiner, John Newlands, Julius Lothar Meyer, Dmitri Mendeleev, Glenn T. Seaborg, and others. Early history Nine chemical elements – carbon, sulfur, iron, copper, silver, tin, gold, mercury, and lead, have been known since before antiquity, as they are found in their native form and are relatively simple to mine with primitive tools. Around 330 BCE, the Greek philosopher Aristotle proposed that everything is made up of a mixture of one or more roots, an idea originally suggested by the Sicilian philosopher Empedocles. The four roots, which the Athenian philosopher Plato called elements, were earth, water, air and fire. Similar ideas about these four elements existed in other ancient traditions, such as Indian philosophy. A few extra elements were known in the age of alchemy: zinc, arsenic, antimony, and bismuth. 
Platinum was also known to pre-Columbian South Americans, but knowledge of it did not reach Europe until the 16th century. First categorizations The history of the periodic table is also a history of the discovery of the chemical elements. The first person in recorded history to discover a new element was Hennig Brand, a bankrupt German merchant. Brand tried to discover Document 3::: The subatomic scale is the domain of physical size that encompasses objects smaller than an atom. It is the scale at which the atomic constituents, such as the nucleus containing protons and neutrons, and the electrons in their orbitals, become apparent. The subatomic scale includes the many thousands of times smaller subnuclear scale, which is the scale of physical size at which constituents of the protons and neutrons - particularly quarks - become apparent. See also Astronomical scale the opposite end of the spectrum Subatomic particles Document 4::: The atomic number of a material exhibits a strong and fundamental relationship with the nature of radiation interactions within that medium. There are numerous mathematical descriptions of different interaction processes that are dependent on the atomic number, . When dealing with composite media (i.e. a bulk material composed of more than one element), one therefore encounters the difficulty of defining . An effective atomic number in this context is equivalent to the atomic number but is used for compounds (e.g. water) and mixtures of different materials (such as tissue and bone). This is of most interest in terms of radiation interaction with composite materials. For bulk interaction properties, it can be useful to define an effective atomic number for a composite medium and, depending on the context, this may be done in different ways. 
Such methods include (i) a simple mass-weighted average, (ii) a power-law type method with some (very approximate) relationship to radiation interaction properties or (iii) methods involving calculation based on interaction cross sections. The latter is the most accurate approach (Taylor 2012), and the other more simplified approaches are often inaccurate even when used in a relative fashion for comparing materials. In many textbooks and scientific publications, the following - simplistic and often dubious - sort of method is employed. One such proposed formula for the effective atomic number, Z_eff, is as follows: Z_eff = (Σ f_i · Z_i^2.94)^(1/2.94), where f_i is the fraction of the total number of electrons associated with each element, and Z_i is the atomic number of each element. An example is that of water (H2O), made up of two hydrogen atoms (Z=1) and one oxygen atom (Z=8): the total number of electrons is 1+1+8 = 10, so the fraction of electrons for the two hydrogens is (2/10) and for the one oxygen is (8/10). So the Z_eff for water is: Z_eff = (0.2 × 1^2.94 + 0.8 × 8^2.94)^(1/2.94) ≈ 7.42. The effective atomic number is important for predicting how photons interact with a substance, as certain types of photon interactions The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. All the atoms of a given element have the same number of what in their nucleus, though they may have different numbers of neutrons? A. compounds B. protons C. molecules D. electrons Answer:
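The power-law estimate described in the passage above reduces to a few lines of arithmetic. A minimal sketch follows; the exponent 2.94 is the value commonly quoted for this simplistic formula and should be treated as an assumption here, as should the helper name `effective_atomic_number`:

```python
def effective_atomic_number(composition, exponent=2.94):
    """Power-law estimate of Z_eff from electron fractions.

    composition: list of (Z, atom_count) pairs for the compound.
    exponent: 2.94 is the commonly quoted value for this simple formula.
    """
    total_electrons = sum(z * n for z, n in composition)
    return sum((z * n / total_electrons) * z ** exponent
               for z, n in composition) ** (1.0 / exponent)

# Water, H2O: two hydrogen atoms (Z=1) and one oxygen atom (Z=8),
# so 10 electrons in total, with fractions 2/10 and 8/10.
z_eff_water = effective_atomic_number([(1, 2), (8, 1)])  # ~7.42
```

For a single pure element the electron fraction is 1, so the estimate collapses to that element's own atomic number, which is a quick sanity check on the implementation.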
sciq-9259
multiple_choice
What is the clear, curved structure that works with the cornea to help focus light at the back of the eye?
[ "lens", "pupil", "iris", "retina" ]
A
Relevant Documents: Document 0::: The pupil is a hole located in the center of the iris of the eye that allows light to strike the retina. It appears black because light rays entering the pupil are either absorbed by the tissues inside the eye directly, or absorbed after diffuse reflections within the eye that mostly miss exiting the narrow pupil. The size of the pupil is controlled by the iris, and varies depending on many factors, the most significant being the amount of light in the environment. The term "pupil" was coined by Gerard of Cremona. In humans, the pupil is circular, but its shape varies between species; some cats, reptiles, and foxes have vertical slit pupils, goats have horizontally oriented pupils, and some catfish have annular types. In optical terms, the anatomical pupil is the eye's aperture and the iris is the aperture stop. The image of the pupil as seen from outside the eye is the entrance pupil, which does not exactly correspond to the location and size of the physical pupil because it is magnified by the cornea. On the inner edge lies a prominent structure, the collarette, marking the junction of the embryonic pupillary membrane covering the embryonic pupil. Function The iris is a contractile structure, consisting mainly of smooth muscle, surrounding the pupil. Light enters the eye through the pupil, and the iris regulates the amount of light by controlling the size of the pupil. This is known as the pupillary light reflex. The iris contains two groups of smooth muscles; a circular group called the sphincter pupillae, and a radial group called the dilator pupillae. When the sphincter pupillae contract, the iris decreases or constricts the size of the pupil. The dilator pupillae, innervated by sympathetic nerves from the superior cervical ganglion, cause the pupil to dilate when they contract. These muscles are sometimes referred to as intrinsic eye muscles.
The sensory pathway (rod or cone, bipolar, ganglion) is linked with its counterpart in the other eye by a partial Document 1::: In 2-dimensional geometry, a lens is a convex region bounded by two circular arcs joined to each other at their endpoints. In order for this shape to be convex, both arcs must bow outwards (convex-convex). This shape can be formed as the intersection of two circular disks. It can also be formed as the union of two circular segments (regions between the chord of a circle and the circle itself), joined along a common chord. Types If the two arcs of a lens have equal radius, it is called a symmetric lens, otherwise is an asymmetric lens. The vesica piscis is one form of a symmetric lens, formed by arcs of two circles whose centers each lie on the opposite arc. The arcs meet at angles of 120° at their endpoints. Area Symmetric The area of a symmetric lens can be expressed in terms of the radius R and arc lengths θ in radians: Asymmetric The area of an asymmetric lens formed from circles of radii R and r with distance d between their centers is where is the area of a triangle with sides d, r, and R. The two circles overlap if . For sufficiently large , the coordinate of the lens centre lies between the coordinates of the two circle centers: For small the coordinate of the lens centre lies outside the line that connects the circle centres: By eliminating y from the circle equations and the abscissa of the intersecting rims is . The sign of x, i.e., being larger or smaller than , distinguishes the two cases shown in the images. The ordinate of the intersection is . Negative values under the square root indicate that the rims of the two circles do not touch because the circles are too far apart or one circle lies entirely within the other. The value under the square root is a biquadratic polynomial of d. The four roots of this polynomial are associated with y=0 and with the four values of d where the two circles have only one point in common. 
The angles in the blue triangle of sides d, r and R are where y is the ordinate of the intersection. Th Document 2::: Holochroal eyes are compound eyes with many tiny lenses (sometimes more than 15,000, each 30-100μm, rarely larger). They are the oldest and most common type of trilobite eye, and found in all orders of trilobite from the Cambrian to the Permian periods. Lenses (composed of calcite) covered a curved, kidney-shaped visual surface in a hexagonal close packing system, with a single corneal membrane covering all lenses. Unlike in schizochroal eyes, adjacent lenses were in direct contact with one another. Lens shape generally depended on cuticle thickness. The lenses of trilobites with thin cuticles were thin and biconvex, whereas those with thick cuticles had thick lenses, which in extreme cases, could be thick columns with the outer surface flattened and the inner surface hemispherical. Regardless of lens thickness, however, the point at which light was focused was roughly the same distance below the lens. Document 3::: The eyes begin to develop as a pair of diverticula (pouches) from the lateral aspects of the forebrain. These diverticula make their appearance before the closure of the anterior end of the neural tube; after the closure of the tube around the 4th week of development, they are known as the optic vesicles. Previous studies of optic vesicles suggest that the surrounding extraocular tissues – the surface ectoderm and extraocular mesenchyme – are necessary for normal eye growth and differentiation. They project toward the sides of the head, and the peripheral part of each expands to form a hollow bulb, while the proximal part remains narrow and constitutes the optic stalk, which goes on to form the optic nerve. Additional images See also Eye development Document 4::: Retinal mosaic is the name given to the distribution of any particular type of neuron across any particular layer in the retina. 
Typically such distributions are somewhat regular; it is thought that this is so that each part of the retina is served by each type of neuron in processing visual information. The regularity of retinal mosaics can be quantitatively studied by modelling the mosaic as a spatial point pattern. This is done by treating each cell as a single point and using spatial statistics such as the Effective Radius, Packing Factor and Regularity Index. Using adaptive optics, it is nowadays possible to image the photoreceptor mosaic (i.e. the distribution of rods and cones) in living humans, enabling the detailed study of photoreceptor density and arrangement across the retina. In the fovea (where photoreceptor density is highest) the spacing between adjacent receptors is about 6-8 micrometer. This corresponds to an angular resolution of approximately 0.5 arc minute, effectively the upper limit of human visual acuity. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the clear, curved structure that works with the cornea to help focus light at the back of the eye? A. lens B. pupil C. iris D. retina Answer:
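The geometry passage above expresses the area of a symmetric lens in terms of the radius R and the arc angle θ, but the equations themselves did not survive extraction. The sketch below assumes the standard decomposition of the lens into two equal circular segments, A = R²(θ − sin θ), with the vesica piscis as a worked example:

```python
import math

def symmetric_lens_area(R, theta):
    """Area of a symmetric lens: two equal circular segments of radius R,
    each with central angle theta (radians). A segment has area
    (R^2 / 2) * (theta - sin(theta)); the lens is two of them."""
    return R * R * (theta - math.sin(theta))

# Vesica piscis: equal circles whose centres lie on each other's arc,
# so each arc subtends a central angle of 2*pi/3 at its own centre.
area = symmetric_lens_area(1.0, 2 * math.pi / 3)
# Matches the closed form 2*pi/3 - sqrt(3)/2, roughly 1.2284 for R = 1.
```

As a limiting check, θ = π makes each segment a half-disk, and the formula returns the full disk area πR², which is what coincident circles should give.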
sciq-7260
multiple_choice
What is the natural movement called within your intestines?
[ "peristalsis", "fibroblasts", "proteolysis", "progress" ]
A
Relevant Documents: Document 0::: The gastrointestinal wall of the gastrointestinal tract is made up of four layers of specialised tissue. From the inner cavity of the gut (the lumen) outwards, these are: Mucosa Submucosa Muscular layer Serosa or adventitia The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the lumen of the tract and comes into direct contact with digested food (chyme). The mucosa itself is made up of three layers: the epithelium, where most digestive, absorptive and secretory processes occur; the lamina propria, a layer of connective tissue, and the muscularis mucosae, a thin layer of smooth muscle. The submucosa contains nerves including the submucous plexus (also called Meissner's plexus), blood vessels and elastic fibres with collagen, that stretches with increased capacity but maintains the shape of the intestine. The muscular layer surrounds the submucosa. It comprises layers of smooth muscle in longitudinal and circular orientation that also helps with continued bowel movements (peristalsis) and the movement of digested material out of and along the gut. In between the two layers of muscle lies the myenteric plexus (also called Auerbach's plexus). The serosa/adventitia are the final layers. These are made up of loose connective tissue and coated in mucus so as to prevent any friction damage from the intestine rubbing against other tissue. The serosa is present if the tissue is within the peritoneum, and the adventitia if the tissue is retroperitoneal. Structure When viewed under the microscope, the gastrointestinal wall has a consistent general form, but with certain parts differing along its course. Mucosa The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the cavity (lumen) of the tract and comes into direct contact with digested food (chyme). The mucosa is made up of three layers: The epithelium is the innermost layer.
It is where most digestive, absorptive and secretory processes occur. The lamina propr Document 1::: The gastrocolic reflex or gastrocolic response is a physiological reflex that controls the motility, or peristalsis, of the gastrointestinal tract following a meal. It involves an increase in motility of the colon consisting primarily of giant migrating contractions, or migrating motor complexes, in response to stretch in the stomach following ingestion and byproducts of digestion entering the small intestine. Thus, this reflex is responsible for the urge to defecate following a meal. The small intestine also shows a similar motility response. The gastrocolic reflex's function in driving existing intestinal contents through the digestive system helps make way for ingested food. The reflex was demonstrated by myoelectric recordings in the colons of animals and humans, which showed an increase in electrical activity within as little as 15 minutes after eating. The recordings also demonstrated that the gastrocolic reflex is uneven in its distribution throughout the colon. The sigmoid colon is more greatly affected than the rest of the colon in terms of a phasic response, recurring periods of contraction followed by relaxation, in order to propel food distally into the rectum; however, the tonic response across the colon is uncertain. These contractions are generated by the muscularis externa stimulated by the myenteric plexus. When pressure within the rectum becomes increased, the gastrocolic reflex acts as a stimulus for defecation. A number of neuropeptides have been proposed as mediators of the gastrocolic reflex. These include serotonin, neurotensin, cholecystokinin, prostaglandin E1, and gastrin. Coffee can induce a significant response, with 29% of subjects in a study reporting an urge to defecate after ingestion, and manometry showing a reaction typically between 4 and 30 minutes after consumption and potentially lasting for more than 30 minutes. 
Decaffeinated coffee is also capable of generating a similar effect, albeit slightly weaker. Essentially, this m Document 2::: The Joan Mott Prize Lecture is a prize lecture awarded annually by The Physiological Society in honour of Joan Mott. Laureates Laureates of the award have included: - Intestinal absorption of sugars and peptides: from textbook to surprises See also Physiological Society Annual Review Prize Lecture Document 3::: The esophagus (American English) or oesophagus (British English, see spelling differences; both ; : (o)esophagi or (o)esophaguses), colloquially known also as the food pipe or gullet, is an organ in vertebrates through which food passes, aided by peristaltic contractions, from the pharynx to the stomach. The esophagus is a fibromuscular tube, about long in adults, that travels behind the trachea and heart, passes through the diaphragm, and empties into the uppermost region of the stomach. During swallowing, the epiglottis tilts backwards to prevent food from going down the larynx and lungs. The word oesophagus is from Ancient Greek οἰσοφάγος (oisophágos), from οἴσω (oísō), future form of φέρω (phérō, “I carry”) + ἔφαγον (éphagon, “I ate”). The wall of the esophagus from the lumen outwards consists of mucosa, submucosa (connective tissue), layers of muscle fibers between layers of fibrous tissue, and an outer layer of connective tissue. The mucosa is a stratified squamous epithelium of around three layers of squamous cells, which contrasts to the single layer of columnar cells of the stomach. The transition between these two types of epithelium is visible as a zig-zag line. Most of the muscle is smooth muscle although striated muscle predominates in its upper third. It has two muscular rings or sphincters in its wall, one at the top and one at the bottom. The lower sphincter helps to prevent reflux of acidic stomach content. The esophagus has a rich blood supply and venous drainage. 
Its smooth muscle is innervated by involuntary nerves (sympathetic nerves via the sympathetic trunk and parasympathetic nerves via the vagus nerve) and in addition voluntary nerves (lower motor neurons) which are carried in the vagus nerve to innervate its striated muscle. The esophagus passes through the thoracic cavity into the diaphragm into the stomach. Document 4::: The muscular layer (muscular coat, muscular fibers, muscularis propria, muscularis externa) is a region of muscle in many organs in the vertebrate body, adjacent to the submucosa. It is responsible for gut movement such as peristalsis. The Latin, tunica muscularis, may also be used. Structure It usually has two layers of smooth muscle: inner and "circular" outer and "longitudinal" However, there are some exceptions to this pattern. In the stomach there are three layers to the muscular layer. Stomach contains an additional oblique muscle layer just interior to circular muscle layer. In the upper esophagus, part of the externa is skeletal muscle, rather than smooth muscle. In the vas deferens of the spermatic cord, there are three layers: inner longitudinal, middle circular, and outer longitudinal. In the ureter the smooth muscle orientation is opposite that of the GI tract. There is an inner longitudinal and an outer circular layer. The inner layer of the muscularis externa forms a sphincter at two locations of the gastrointestinal tract: in the pylorus of the stomach, it forms the pyloric sphincter. in the anal canal, it forms the internal anal sphincter. In the colon, the fibres of the external longitudinal smooth muscle layer are collected into three longitudinal bands, the teniae coli. The thickest muscularis layer is found in the stomach (triple layered) and thus maximum peristalsis occurs in the stomach. Thinnest muscularis layer in the alimentary canal is found in the rectum, where minimum peristalsis occurs. 
Function The muscularis layer is responsible for the peristaltic movements and segmental contractions in the alimentary canal. The Auerbach's nerve plexus (myenteric nerve plexus) is found between the longitudinal and circular muscle layers; it starts muscle contractions to initiate peristalsis. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the natural movement called within your intestines? A. peristalsis B. fibroblasts C. proteolysis D. progress Answer:
sciq-11439
multiple_choice
What is the term for a sac filled with fluid or other material?
[ "blister", "lesion", "tumor", "cyst" ]
D
Relevant Documents: Document 0::: In histology, a lacuna is a small space, containing an osteocyte in bone, or chondrocyte in cartilage. Bone The lacunae are situated between the lamellae, and consist of a number of oblong spaces. In an ordinary microscopic section, viewed by transmitted light, they appear as fusiform opaque spots. Each lacuna is occupied during life by a branched cell, termed an osteocyte, bone-cell or bone-corpuscle. Lacunae are connected to one another by small canals called canaliculi. A lacuna never contains more than one osteocyte. Sinuses are an example of lacuna. Cartilage The cartilage cells or chondrocytes are contained in cavities in the matrix, called cartilage lacunae; around these, the matrix is arranged in concentric lines as if it had been formed in successive portions around the cartilage cells. This constitutes the so-called capsule of the space. Each lacuna is generally occupied by a single cell, but during the division of the cells, it may contain two, four, or eight cells. Lacunae are found between narrow sheets of calcified matrix that are known as lamellae. See also Lacunar stroke Document 1::: Serous glands secrete serous fluid. They contain serous acini, a grouping of serous cells that secrete serous fluid, isotonic with blood plasma, that contains enzymes such as alpha-amylase. Serous glands are most common in the parotid gland and lacrimal gland but are also present in the submandibular gland and, to a far lesser extent, the sublingual gland. Document 2::: Thrombolites (from Ancient Greek θρόμβος thrómbos meaning "clot" and λῐ́θος líthos meaning "stone") are clotted accretionary structures formed in shallow water by the trapping, binding, and cementation of sedimentary grains by biofilms of microorganisms, especially cyanobacteria. Structures Thrombolites have a clotted structure without the laminae of stromatolites. Each clot within a thrombolite mound is a separate cyanobacterial colony.
The clots are on the scale of millimetres to centimetres and may be interspersed with sand, mud or sparry carbonate. Clots that make up thrombolites are called thromboids to avoid confusion with other clotted textures. The larger clots make up more than 40% of a thrombolite's volume and each clot has a complex internal structure of cells and rimmed lobes resulting primarily from calcification of the cyanobacterial colony. Very little sediment is found within the clots because the main growth method is calcification rather than sediment trapping. There is active debate about the size of thromboids, with some seeing thromboids as a macrostructural feature (domical hemispheroid) and others viewing thromboids as a mesostructural feature (random polylobate and subspherical mesoclots). Types There are two main types of thrombolites: Calcified microbe thrombolites This type of thrombolites contain clots that are dominantly composed of calcified microfossil components. These clots do not have a fixed form or size and can expand vertically. Furthermore, burrows and trilobite fragments can exist in these thrombolites. Coarse agglutinated thrombolites This type of thrombolites is composed of small openings that trap fine-grained sediments. They are also known "thrombolitic-stromatolites" due to their close relation with the same composition of stromatolites. Because they trap sediment, their formation is linked to the rise of algal-cyanobacterial mats. Differences from stromatolites Thrombolites can be distinguished from microbialite Document 3::: Tubular glands are glands with a tube-like shape throughout their length, in contrast with alveolar glands, which have a saclike secretory portion. 
Tubular glands are further classified as one of the following types: Additional images See also skin - glands in skin structure hair follicles - for hair growth Document 4::: Alveolar glands, also called saccular glands, are glands with a saclike secretory portion, in contrast with tubular glands. They typically have an enlarged lumen (cavity), hence the name: they have a shape similar to alveoli, the very small air sacs in the lungs. Some sources draw a clear distinction between acinar and alveolar glands, based upon the size of the lumen. A further complication in the case of the alveolar glands may occur in the form of still smaller saccular diverticuli growing out from the main sacculi. The term "racemose gland" is used to describe a "compound alveolar gland" or "compound acinar gland." Branched alveolar glands are classified as follows: Additional images See also Acinus The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the term for a sac filled with fluid or other material? A. blister B. lesion C. tumor D. cyst Answer:
sciq-4449
multiple_choice
What is the measure of an individual’s weight-to-height ratio called?
[ "body mass index (bmi)", "body matter index (bmi)", "density index (di)", "body density index (bdi)" ]
A
Relevant Documents: Document 0::: Body mass index (BMI) is a value derived from the mass (weight) and height of a person. The BMI is defined as the body mass divided by the square of the body height, and is expressed in units of kg/m2, resulting from mass in kilograms (kg) and height in metres (m). The BMI may be determined first by measuring its components by means of a weighing scale and a stadiometer. The multiplication and division may be carried out directly, by hand or using a calculator, or indirectly using a lookup table (or chart). The table displays BMI as a function of mass and height and may show other units of measurement (converted to metric units for the calculation). The table may also show contour lines or colours for different BMI categories. The BMI is a convenient rule of thumb used to broadly categorize a person as underweight, normal weight, overweight, or obese based on tissue mass (muscle, fat, and bone) and height. Major adult BMI classifications are underweight (under 18.5 kg/m2), normal weight (18.5 to 24.9), overweight (25 to 29.9), and obese (30 or more). When used to predict an individual's health, rather than as a statistical measurement for groups, the BMI has limitations that can make it less useful than some of the alternatives, especially when applied to individuals with abdominal obesity, short stature, or high muscle mass. BMIs under 20 and over 25 have been associated with higher all-cause mortality, with the risk increasing with distance from the 20–25 range. History Adolphe Quetelet, a Belgian astronomer, mathematician, statistician, and sociologist, devised the basis of the BMI between 1830 and 1850 as he developed what he called "social physics". Quetelet himself never intended for the index, then called the Quetelet Index, to be used as a means of medical assessment. Instead, it was a component of his study of l'homme moyen, or the average man.
Quetelet thought of the average man as a social ideal, and developed the body mass index as a means of discovering the socially ideal human person. According to Lars Grue Document 1::: Size in general is the magnitude or dimensions of a thing. More specifically, geometrical size (or spatial size) can refer to three geometrical measures: length, area, or volume. Length can be generalized to other linear dimensions (width, height, diameter, perimeter). Size can also be measured in terms of mass, especially when assuming a density range. In mathematical terms, "size is a concept abstracted from the process of measuring by comparing a longer to a shorter". Size is determined by the process of comparing or measuring objects, which results in the determination of the magnitude of a quantity, such as length or mass, relative to a unit of measurement. Such a magnitude is usually expressed as a numerical value of units on a previously established spatial scale, such as meters or inches. The sizes with which humans tend to be most familiar are body dimensions (measures of anthropometry), which include measures such as human height and human body weight. These measures can, in the aggregate, allow the generation of commercially useful distributions of products that accommodate expected body sizes, as with the creation of clothing sizes and shoe sizes, and with the standardization of door frame dimensions, ceiling heights, and bed sizes. The human experience of size can lead to a psychological tendency towards size bias, wherein the relative importance or perceived complexity of organisms and other objects is judged based on their size relative to humans, and particularly whether this size makes them easy to observe without aid. Human perception Humans most frequently perceive the size of objects through visual cues. One common means of perceiving size is to compare the size of a newly observed object with the size of a familiar object whose size is already known. 
Binocular vision gives humans the capacity for depth perception, which can be used to judge which of several objects is closer, and by how much, which allows for some estimation of the size of t Document 2::: A Body Shape Index (ABSI) or simply body shape index (BSI) is a metric for assessing the health implications of a given human body height, mass and waist circumference (WC). The inclusion of WC is believed to make the BSI a better indicator of risk of mortality from excess weight than the standard body mass index. ABSI correlates only slightly with height, weight and BMI, indicating that it is independent of other anthropometric variables in predicting mortality. A criticism of BMI is that it does not distinguish between muscle and fat mass and so may be elevated in people with increased BMI due to muscle development rather than fat accumulation from overeating. A higher muscle mass may actually reduce the risk of premature death. A high ABSI appears to correspond to a higher proportion of central obesity, or abdominal fat. In a sample of Americans in the National Health and Nutrition Examination Survey, death rates in some subjects were high for both high and low BMI and WC, a familiar conundrum associated with BMI. In contrast, death rates increased proportionally with increased values of ABSI. The linear relationship was unaffected by adjustments for other risk factors including smoking, diabetes, elevated blood pressure and serum cholesterol. The equation for ABSI is based on statistical analysis and is derived from an allometric regression. With waist circumference (WC) and height in meters and weight in kg, ABSI = WC / (BMI^(2/3) × height^(1/2)). Studies have associated ABSI with total mortality and cardiovascular risk, indicating that it is useful in assessing cardio-metabolic risks. If the ABSI is above 0.083, an increased risk is assumed; a value of 0.091 is said to represent a doubling of the relative risk.
The ABSI is classified into risk classes by means of the ABSI-z value (z-Value) derived from the ABSI. The ABSI-z is calculated from the deviation of the ABSI from the ABSI mean in relation to the standard deviation. The ABSI means and standard deviations are age- and sex-dependent empirically determ Document 3::: In physical fitness, body composition refers to quantifying the different components (or "compartments") of a human body. The selection of compartments varies by model but may include fat, bone, water, and muscle. Two people of the same gender, height, and body weight may have completely different body types as a consequence of having different body compositions. This may be explained by a person having low or high body fat, dense muscles, or big bones. Compartment models Body composition models typically use between 2 and 6 compartments to describe the body. Common models include: 2 compartment: Fat mass (FM) and fat-free mass (FFM) 3 compartment: Fat mass (FM), water, and fat-free dry mass 4 compartment: Fat mass (FM), water, protein, and mineral 5 compartment: Fat mass (FM), water, protein, bone mineral content, and non-osseous mineral content 6 compartment: Fat mass (FM), water, protein, bone mineral content, non-osseous mineral content, and glycogen As a rule, the compartments must sum to the body weight. The proportion of each compartment as a percent is often reported, found by dividing the compartment weight by the body weight. Individual compartments may be estimated based on population averages or measured directly or indirectly. Many measurement methods exist with varying levels of accuracy. Typically, the higher compartment models are more accurate, as they require more data and thus account for more variation across individuals. The four compartment model is considered the reference model for assessment of body composition as it is robust to most variation and each of its components can be measured directly. 
Measurement methods A wide variety of body composition measurement methods exist. The "gold standard" measurement technique for the 4-compartment model consists of a weight measurement, body density measurement using hydrostatic weighing or air displacement plethysmography, total body water calculation using isotope dilution analysis, a Document 4::: The body fat percentage (BFP) of a human or other living being is the total mass of fat divided by total body mass, multiplied by 100; body fat includes essential body fat and storage body fat. Essential body fat is necessary to maintain life and reproductive functions. The percentage of essential body fat for women is greater than that for men, due to the demands of childbearing and other hormonal functions. Storage body fat consists of fat accumulation in adipose tissue, part of which protects internal organs in the chest and abdomen. A number of methods are available for determining body fat percentage, such as measurement with calipers or through the use of bioelectrical impedance analysis. The body fat percentage is a measure of fitness level, since it is the only body measurement which directly calculates a person's relative body composition without regard to height or weight. The widely used body mass index (BMI) provides a measure that allows the comparison of the adiposity of individuals of different heights and weights. While BMI largely increases as adiposity increases, due to differences in body composition, other indicators of body fat give more accurate results; for example, individuals with greater muscle mass or larger bones will have higher BMIs. As such, BMI is a useful indicator of overall fitness for a large group of people, but a poor tool for determining the health of an individual. Typical body fat amounts Epidemiologically, the percentage of body fat in an individual varies according to sex and age. 
Various theoretical approaches exist on the relationships between body fat percentage, health, athletic capacity, etc. Different authorities have consequently developed different recommendations for ideal body fat percentages. This graph from the National Health and Nutrition Examination Survey (NHANES) in the United States charts the average body fat percentages of Americans from samples from 1999 to 2004: In males, mean percentage body fat The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the measure of an individual’s weight-to-height ratio called? A. body mass index (bmi) B. body matter index (bmi) C. density index (di) D. body density index (bdi) Answer:
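Both quantities defined in the excerpts above, body fat percentage (fat mass over total mass, times 100) and BMI (the weight-to-height ratio asked about in the quiz item), are simple ratios; the numbers below are illustrative.

```python
def body_fat_percentage(fat_mass_kg, total_mass_kg):
    """Body fat percentage: total fat mass over total body mass, times 100."""
    return 100.0 * fat_mass_kg / total_mass_kg

def bmi(weight_kg, height_m):
    """Body mass index: weight in kg divided by height in metres, squared."""
    return weight_kg / height_m ** 2

# A 70 kg, 1.75 m person carrying 14 kg of fat:
print(round(bmi(70.0, 1.75), 1))        # 22.9
print(body_fat_percentage(14.0, 70.0))  # 20.0
```

The two numbers can diverge, as the excerpt notes: a heavily muscled person can have a high BMI and a low body fat percentage at the same time.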
sciq-5895
multiple_choice
How much larger can the most powerful light microscopes make an image?
[ "2000 times", "100 times", "500 times", "10 times" ]
A
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell / need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
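The sample conceptual question quoted in the excerpt has a one-line worked answer, assuming (as such questions usually intend) a reversible expansion in which the gas does work on its surroundings:

```latex
% Reversible adiabatic expansion of an ideal gas:
% the first law with \delta Q = 0, combined with the ideal-gas law, gives
T V^{\gamma - 1} = \text{const}, \qquad \gamma = \frac{c_p}{c_v} > 1 .
% Since \gamma - 1 > 0, increasing V forces T to decrease:
% the temperature decreases.
% (A free Joule expansion, by contrast, leaves the temperature
% of an ideal gas unchanged, which is why the hedging option
% "impossible to tell" exists for the irreversible case.)
```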
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. 
Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 2::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. 
The goal of knowledge space theory is to provide a language by which exams can communicate what the student can do and what the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of Document 3::: Computerized adaptive testing (CAT) is a form of computer-based test that adapts to the examinee's ability level. For this reason, it has also been called tailored testing. In other words, it is a form of computer-administered test in which the next item or set of items selected to be administered depends on the correctness of the test taker's responses to the most recent items administered. How it works CAT successively selects questions for the purpose of maximizing the precision of the exam based on what is known about the examinee from previous questions. From the examinee's perspective, the difficulty of the exam seems to tailor itself to their level of ability. For example, if an examinee performs well on an item of intermediate difficulty, they will then be presented with a more difficult question. Or, if they performed poorly, they would be presented with a simpler question. Compared to static tests that nearly everyone has experienced, with a fixed set of items administered to all examinees, computer-adaptive tests require fewer test items to arrive at equally accurate scores.
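The knowledge-space model quoted above, feasible states as subsets of a finite domain, can be checked programmatically: a knowledge space must contain the empty state and the full domain and be closed under union. The toy domain and states below are illustrative, not from the excerpt.

```python
from itertools import combinations

# Toy domain of three skills and a family of feasible knowledge states.
Q = frozenset({"a", "b", "c"})
states = [frozenset(), frozenset({"a"}), frozenset({"a", "b"}),
          frozenset({"a", "c"}), Q]

def is_knowledge_space(states):
    """True if the family contains the empty state and the full domain
    (the union of all states) and is closed under pairwise union."""
    s = set(states)
    if not s or frozenset() not in s or frozenset().union(*s) not in s:
        return False
    return all(x | y in s for x, y in combinations(s, 2))

print(is_knowledge_space(states))  # True
```

Closure under union encodes the idea that two students' competencies can always be combined into another feasible state; dropping a state such as {"a", "b", "c"} from the family would break the check.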
The basic computer-adaptive testing method is an iterative algorithm with the following steps: (1) the pool of available items is searched for the optimal item, based on the current estimate of the examinee's ability; (2) the chosen item is presented to the examinee, who then answers it correctly or incorrectly; (3) the ability estimate is updated, based on all prior answers; (4) steps 1–3 are repeated until a termination criterion is met. Nothing is known about the examinee prior to the administration of the first item, so the algorithm is generally started by selecting an item of medium, or medium-easy, difficulty as the first item. As a result of adaptive administration, different examinees receive quite different tests. Although examinees are typically administered different tests, their ability scores are comparable to one another (i.e., as if they had received the same test, as is common Document 4::: The macroscopic scale is the length scale on which objects or phenomena are large enough to be visible with the naked eye, without magnifying optical instruments. It is the opposite of microscopic. Overview When applied to physical phenomena and bodies, the macroscopic scale describes things as a person can directly perceive them, without the aid of magnifying devices. This is in contrast to observations (microscopy) or theories (microphysics, statistical physics) of objects of geometric lengths smaller than perhaps some hundreds of micrometers. A macroscopic view of a ball is just that: a ball. A microscopic view could reveal a thick round skin seemingly composed entirely of puckered cracks and fissures (as viewed through a microscope) or, further down in scale, a collection of molecules in a roughly spherical shape (as viewed through an electron microscope). An example of a physical theory that takes a deliberately macroscopic viewpoint is thermodynamics. An example of a topic that extends from macroscopic to microscopic viewpoints is histology.
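The iterative CAT loop described in the excerpt can be sketched in a few lines. This is a deliberately simplified toy: a real CAT updates the ability estimate with IRT maximum-likelihood methods, whereas here the estimate is just nudged by a shrinking fixed step.

```python
def cat_session(item_difficulties, answers_correctly, n_items=5):
    """Toy sketch of the adaptive loop: pick the unused item whose
    difficulty is closest to the current ability estimate, observe the
    response, nudge the estimate, repeat."""
    theta = 0.0   # start at medium ability, as the excerpt suggests
    used = set()
    for step in range(n_items):
        # (1) search the pool for the optimal (closest-difficulty) item
        item = min((i for i in range(len(item_difficulties)) if i not in used),
                   key=lambda i: abs(item_difficulties[i] - theta))
        used.add(item)
        # (2) administer it; (3) update the ability estimate
        correct = answers_correctly(item_difficulties[item])
        step_size = 1.0 / (step + 1)   # shrinking steps -> convergence
        theta += step_size if correct else -step_size
    return theta

# Simulated examinee who answers correctly whenever difficulty <= 0.8
pool = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]
print(round(cat_session(pool, lambda d: d <= 0.8), 2))
```

Even this crude update homes in near the examinee's true threshold, illustrating why adaptive tests need fewer items than fixed-form tests for comparable precision.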
Not quite by the distinction between macroscopic and microscopic, classical and quantum mechanics are theories that are distinguished in a subtly different way. At first glance one might think of them as differing simply in the size of objects that they describe, classical objects being considered far larger as to mass and geometrical size than quantal objects, for example a football versus a fine particle of dust. More refined consideration distinguishes classical and quantum mechanics on the basis that classical mechanics fails to recognize that matter and energy cannot be divided into infinitesimally small parcels, so that ultimately fine division reveals irreducibly granular features. The criterion of fineness is whether or not the interactions are described in terms of Planck's constant. Roughly speaking, classical mechanics considers particles in mathematically idealized terms even as fine as geometrical points wi The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. How much larger can the most powerful light microscopes make an image? A. 2000 times B. 100 times C. 500 times D. 10 times Answer:
ai2_arc-767
multiple_choice
The weight of heavy machinery compacts soil, especially when it is wet. Why do farmers avoid driving their machinery across wet ground?
[ "Compacted soil will absorb too much water.", "Plants cannot grow when soil is compacted.", "Minerals are destroyed when soil is compacted.", "Compacted soil increases soil acidity." ]
B
Relevant Documents: Document 0::: The Géotechnique lecture is a biennial lecture on the topic of soil mechanics, organised by the British Geotechnical Association named after its major scientific journal Géotechnique. This should not be confused with the annual BGA Rankine Lecture. List of Géotechnique Lecturers See also Named lectures Rankine Lecture Terzaghi Lecture External links ICE Géotechnique journal British Geotechnical Association Document 1::: Tilth is a physical condition of soil, especially in relation to its suitability for planting or growing a crop. Factors that determine tilth include the formation and stability of aggregated soil particles, moisture content, degree of aeration, soil biota, rate of water infiltration and drainage. Tilth can change rapidly, depending on environmental factors such as changes in moisture, tillage and soil amendments. The objective of tillage (mechanical manipulation of the soil) is to improve tilth, thereby increasing crop production; in the long term, however, conventional tillage, especially plowing, often has the opposite effect, causing the soil carbon sponge to oxidize, break down and become compacted. Soil with good tilth is spongy with large pore spaces for air infiltration and water movement. Roots only grow where the soil tilth allows for adequate levels of soil oxygen. Such soil also holds a reasonable supply of water and nutrients. Tillage, organic matter amendments, fertilization and irrigation can each improve tilth, but when used excessively, can have the opposite effect. Crop rotation and cover crops can rebuild the soil carbon sponge and positively impact tilth. A combined approach can produce the greatest improvement. Aggregation Good tilth shares a balanced relation between soil-aggregate tensile strength and friability, in which it has a stable mixture of aggregate soil particles that can be readily broken up by shallow non-abrasive tilling.
A high tensile strength will result in large cemented clods of compacted soil with low friability. Proper management of agricultural soils can positively impact soil aggregation and improve tilth quality. Aggregation is positively associated with tilth. With finer-textured soils, aggregates may in turn be made up of smaller aggregates. Aggregation implies substantial pores between individual aggregates. Aggregation is important in the subsoil, the layer below tillage. Such aggregates involve larger (2- to 6 Document 2::: Gumbo soil is a mixture which often has some small amounts of sand and/or organic material, but is typically defined by the overwhelming presence of very fine particles of clay. Although gumbo soils are exceptional at water retention, they can be difficult to farm, as precipitation will turn gumbo into a unique muddy mess that is challenging to work using large commercial farming equipment. Avoiding tillage of this type of soil through no-till farming appears strongly correlated with higher yields, as compared to more traditional tilling practices. Document 3::: Critical state soil mechanics is the area of soil mechanics that encompasses the conceptual models that represent the mechanical behavior of saturated remolded soils based on the Critical State concept. Formulation The Critical State concept is an idealization of the observed behavior of saturated remoulded clays in triaxial compression tests, and it is assumed to apply to undisturbed soils. It states that soils and other granular materials, if continuously distorted (sheared) until they flow as a frictional fluid, will come into a well-defined critical state. At the onset of the critical state, shear distortions occur without any further changes in mean effective stress , deviatoric stress (or yield stress, , in uniaxial tension according to the von Mises yielding criterion), or specific volume : where, However, for triaxial conditions . 
Thus, all critical states, for a given soil, form a unique line called the Critical State Line (CSL) defined by the following equations in the (p′, v, q) space: q = Mp′ and v = Γ − λ ln(p′), where M, Γ, and λ are soil constants. The first equation determines the magnitude of the deviatoric stress q needed to keep the soil flowing continuously as the product of a frictional constant M and the mean effective stress p′. The second equation states that the specific volume occupied by unit volume of flowing particles will decrease as the logarithm of the mean effective stress increases. History In an attempt to advance soil testing techniques, Kenneth Harry Roscoe of Cambridge University, in the late forties and early fifties, developed a simple shear apparatus in which his successive students attempted to study the changes in conditions in the shear zone both in sand and in clay soils. In 1958 a study of the yielding of soil based on some Cambridge data of the simple shear apparatus tests, and on much more extensive data of triaxial tests at Imperial College London from research led by Professor Sir Alec Skempton at Imperial College, led to the publication of th Document 4::: Soil classification deals with the systematic categorization of soils based on distinguishing characteristics as well as criteria that dictate choices in use. Overview Soil classification is a dynamic subject, from the structure of the system, to the definitions of classes, to the application in the field. Soil classification can be approached from the perspective of soil as a material and soil as a resource. Inscriptions at the temple of Horus at Edfu outline a soil classification used by Tanen to determine what kind of temple to build at which site. Ancient Greek scholars produced a number of classifications based on several different qualities of the soil. Engineering Geotechnical engineers classify soils according to their engineering properties as they relate to use for foundation support or building material.
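The two Critical State Line relations described in the excerpt, conventionally written q = Mp′ (deviatoric stress) and v = Γ − λ ln(p′) (specific volume), can be evaluated directly. The soil constants below are illustrative values of typical magnitude for a clay, not values from the text.

```python
import math

def csl_deviatoric_stress(p_eff, m):
    """CSL stress part: q = M * p' (M a frictional constant)."""
    return m * p_eff

def csl_specific_volume(p_eff, gamma, lam):
    """CSL volume part: v = Gamma - lambda * ln(p')."""
    return gamma - lam * math.log(p_eff)

# Illustrative constants and a mean effective stress of 100 kPa
M, GAMMA, LAM = 0.9, 3.0, 0.16
p = 100.0
print(csl_deviatoric_stress(p, M))                    # 90.0
print(round(csl_specific_volume(p, GAMMA, LAM), 3))   # 2.263
```

The second function makes the excerpt's statement concrete: because λ > 0, specific volume falls linearly with the logarithm of mean effective stress along the CSL.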
Modern engineering classification systems are designed to allow an easy transition from field observations to basic predictions of soil engineering properties and behaviors. The most common engineering classification system for soils in North America is the Unified Soil Classification System (USCS). The USCS has three major classification groups: (1) coarse-grained soils (e.g. sands and gravels); (2) fine-grained soils (e.g. silts and clays); and (3) highly organic soils (referred to as "peat"). The USCS further subdivides the three major soil classes for clarification. It distinguishes sands from gravels by grain size, classifying some as "well-graded" and the rest as "poorly-graded". Silts and clays are distinguished by the soils' Atterberg limits, and thus the soils are separated into "high-plasticity" and "low-plasticity" soils. Moderately organic soils are considered subdivisions of silts and clays and are distinguished from inorganic soils by changes in their plasticity properties (and Atterberg limits) on drying. The European soil classification system (ISO 14688) is very similar, differing primarily in coding and in adding an "intermediate-p The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The weight of heavy machinery compacts soil, especially when it is wet. Why do farmers avoid driving their machinery across wet ground? A. Compacted soil will absorb too much water. B. Plants cannot grow when soil is compacted. C. Minerals are destroyed when soil is compacted. D. Compacted soil increases soil acidity. Answer:
sciq-5781
multiple_choice
The global biosphere includes all areas of what?
[ "life", "geography", "study", "science" ]
A
Relevant Documents: Document 0::: Earth system science (ESS) is the application of systems science to the Earth. In particular, it considers interactions and 'feedbacks', through material and energy fluxes, between the Earth's sub-systems' cycles, processes and "spheres"—atmosphere, hydrosphere, cryosphere, geosphere, pedosphere, lithosphere, biosphere, and even the magnetosphere—as well as the impact of human societies on these components. At its broadest scale, Earth system science brings together researchers across both the natural and social sciences, from fields including ecology, economics, geography, geology, glaciology, meteorology, oceanography, climatology, paleontology, sociology, and space science. Like the broader subject of systems science, Earth system science assumes a holistic view of the dynamic interaction between the Earth's spheres and their many constituent subsystems fluxes and processes, the resulting spatial organization and time evolution of these systems, and their variability, stability and instability. Subsets of Earth System science include systems geology and systems ecology, and many aspects of Earth System science are fundamental to the subjects of physical geography and climate science. Definition The Science Education Resource Center, Carleton College, offers the following description: "Earth System science embraces chemistry, physics, biology, mathematics and applied sciences in transcending disciplinary boundaries to treat the Earth as an integrated system. It seeks a deeper understanding of the physical, chemical, biological and human interactions that determine the past, current and future states of the Earth. Earth System science provides a physical basis for understanding the world in which we live and upon which humankind seeks to achieve sustainability".
Earth System science has articulated four overarching, definitive and critically important features of the Earth System, which include: Variability: Many of the Earth System's natural 'modes' and variab Document 1::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. 
The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New Document 2::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. 
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 3::: A biophysical environment is a biotic and abiotic surrounding of an organism or population, and consequently includes the factors that have an influence in their survival, development, and evolution. A biophysical environment can vary in scale from microscopic to global in extent. It can also be subdivided according to its attributes. Examples include the marine environment, the atmospheric environment and the terrestrial environment. The number of biophysical environments is countless, given that each living organism has its own environment. The term environment can refer to a singular global environment in relation to humanity, or a local biophysical environment, e.g. the UK's Environment Agency. Life-environment interaction All life that has survived must have adapted to the conditions of its environment. Temperature, light, humidity, soil nutrients, etc., all influence the species within an environment. However, life in turn modifies, in various forms, its conditions. Some long-term modifications along the history of the planet have been significant, such as the incorporation of oxygen to the atmosphere. This process consisted of the breakdown of carbon dioxide by anaerobic microorganisms that used the carbon in their metabolism and released the oxygen to the atmosphere. This led to the existence of oxygen-based plant and animal life, the great oxygenation event. Related studies Environmental science is the study of the interactions within the biophysical environment. 
Part of this scientific discipline is the investigation of the effect of human activity on the environment. Ecology, a sub-discipline of biology and a part of environmental sciences, is often mistaken as a study of human-induced effects on the environment. Environmental studies is a broader academic discipline that is the systematic study of the interaction of humans with their environment. It is a broad field of study that includes: The natural environment Built environments Social envi Document 4::: A biome () is a biogeographical unit consisting of a biological community that has formed in response to the physical environment in which they are found and a shared regional climate. Biomes may span more than one continent. Biome is a broader term than habitat and can comprise a variety of habitats. While a biome can cover small areas, a microbiome is a mix of organisms that coexist in a defined space on a much smaller scale. For example, the human microbiome is the collection of bacteria, viruses, and other microorganisms that are present on or in a human body. A biota is the total collection of organisms of a geographic region or a time period, from local geographic scales and instantaneous temporal scales all the way up to whole-planet and whole-timescale spatiotemporal scales. The biotas of the Earth make up the biosphere. Etymology The term was suggested in 1916 by Clements, originally as a synonym for biotic community of Möbius (1877). Later, it gained its current definition, based on earlier concepts of phytophysiognomy, formation and vegetation (used in opposition to flora), with the inclusion of the animal element and the exclusion of the taxonomic element of species composition. In 1935, Tansley added the climatic and soil aspects to the idea, calling it ecosystem. The International Biological Program (1964–74) projects popularized the concept of biome. However, in some contexts, the term biome is used in a different manner. 
In German literature, particularly in the Walter terminology, the term is used similarly as biotope (a concrete geographical unit), while the biome definition used in this article is used as an international, non-regional, terminology—irrespectively of the continent in which an area is present, it takes the same biome name—and corresponds to his "zonobiome", "orobiome" and "pedobiome" (biomes determined by climate zone, altitude or soil). In Brazilian literature, the term "biome" is sometimes used as synonym of biogeographic pr The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. The global biosphere includes all areas of what? A. life B. geography C. study D. science Answer:
sciq-8737
multiple_choice
What happens when water is removed from the endospore?
[ "its metabolism halts", "hibernation", "suspension", "reversal" ]
A
Relevant Documents: Document 0::: Drinking is the act of ingesting water or other liquids into the body through the mouth, proboscis, or elsewhere. Humans drink by swallowing, completed by peristalsis in the esophagus. The physiological processes of drinking vary widely among other animals. Most animals drink water to maintain bodily hydration, although many can survive on the water gained from their food. Water is required for many physiological processes. Both inadequate and (less commonly) excessive water intake are associated with health problems. Methods of drinking In humans When a liquid enters a human mouth, the swallowing process is completed by peristalsis which delivers the liquid through the esophagus to the stomach; much of the activity is abetted by gravity. The liquid may be poured from the hands or drinkware may be used as vessels. Drinking can also be performed by acts of inhalation, typically when imbibing hot liquids or drinking from a spoon. Infants employ a method of suction wherein the lips are pressed tight around a source, as in breastfeeding: a combination of breath and tongue movement creates a vacuum which draws in liquid. In other land mammals By necessity, terrestrial animals in captivity become accustomed to drinking water, but most free-roaming animals stay hydrated through the fluids and moisture in fresh food, and learn to actively seek foods with high fluid content. When conditions impel them to drink from bodies of water, the methods and motions differ greatly among species. Cats, canines, and ruminants all lower the neck and lap in water with their powerful tongues. Cats and canines lap up water with the tongue in a spoon-like shape. Canines lap water by scooping it into their mouth with a tongue which has taken the shape of a ladle. 
However, with cats, only the tip of their tongue (which is smooth) touches the water, and then the cat quickly pulls its tongue back into its mouth which soon closes; this results in a column of liquid being pulled into the ca Document 1::: In ecology, pressure-volume curves describe the relationship between total water potential (Ψt) and relative water content (R) of living organisms. These values are widely used in research on plant-water relations, and provide valuable information on the turgor, osmotic and elastic properties of plant tissues. According to the Boyle–van 't Hoff relation, the product of osmotic potential and volume of solution should be a constant for any given amount of osmotically active solutes in an ideal osmotic system: ΨπV = a constant, where Ψπ is the osmotic potential and V is the volume of solution. This can then be manipulated to a linear relation which describes the ideal situation: 1/Ψπ = V / (a constant). Document 2::: Tissue hydration is the process of absorbing and retaining water in biological tissues. Plants Land plants maintain adequate tissue hydration by means of an outer waterproof layer. In soft or green tissues, this is usually a waxy cuticle over the outer epidermis. In older, woody tissues, waterproofing chemicals are present in the secondary cell wall that limit or inhibit the flow of water. Vascular plants also possess an internal vascular system that distributes fluids throughout the plant. Some xerophytes, such as cacti and other desert plants, have mucilage in their tissues. This is a sticky substance that holds water within the plant, reducing the rate of dehydration. Some seeds and spores remain dormant until adequate moisture is present, at which time the seed or spore begins to germinate. Animals Animals maintain adequate tissue hydration by means of (1) an outer skin, shell, or cuticle; (2) a fluid-filled coelom cavity; and (3) a circulatory system. 
Hydration of fat free tissues, ratio of total body water to fat free body mass, is stable at 0.73 in mammals. In humans, a significant drop in tissue hydration can lead to the medical condition of dehydration. This may result from loss of water itself, loss of electrolytes, or a loss of blood plasma. Administration of hydrational fluids as part of sound dehydration management is necessary to avoid severe complications, and in some cases, death. Some invertebrates are able to survive extreme desiccation of their tissues by entering a state of cryptobiosis. See also Osmoregulation Document 3::: Hydraulic redistribution is a passive mechanism where water is transported from moist to dry soils via subterranean networks. It occurs in vascular plants that commonly have roots in both wet and dry soils, especially plants with both taproots that grow vertically down to the water table, and lateral roots that sit close to the surface. In the late 1980s, there was a movement to understand the full extent of these subterranean networks. Since then it was found that vascular plants are assisted by fungal networks which grow on the root system to promote water redistribution. Process Hot, dry periods, when the surface soil dries out to the extent that the lateral roots exude whatever water they contain, will result in the death of such lateral roots unless the water is replaced. Similarly, under extremely wet conditions when lateral roots are inundated by flood waters, oxygen deprivation will also lead to root peril. In plants that exhibit hydraulic redistribution, there are xylem pathways from the taproots to the laterals, such that the absence or abundance of water at the laterals creates a pressure potential analogous to that of transpirational pull. In drought conditions, ground water is drawn up through the taproot to the laterals and exuded into the surface soil, replenishing that which was lost. 
Under flooding conditions, plant roots perform a similar function in the opposite direction. Though often referred to as hydraulic lift, movement of water by the plant roots has been shown to occur in any direction. This phenomenon has been documented in over sixty plant species spanning a variety of plant types (from herbs and grasses to shrubs and trees) and over a range of environmental conditions (from the Kalahari Desert to the Amazon Rainforest). Causes The movement of this water can be explained by a water transport theory throughout a plant. This well-established water transport theory is called the cohesion-tension theory. In brief, it explains the movement Document 4::: Osmoregulation is the active regulation of the osmotic pressure of an organism's body fluids, detected by osmoreceptors, to maintain the homeostasis of the organism's water content; that is, it maintains the fluid balance and the concentration of electrolytes (salts in solution which in this case is represented by body fluid) to keep the body fluids from becoming too diluted or concentrated. Osmotic pressure is a measure of the tendency of water to move into one solution from another by osmosis. The higher the osmotic pressure of a solution, the more water tends to move into it. Pressure must be exerted on the hypertonic side of a selectively permeable membrane to prevent diffusion of water by osmosis from the side containing pure water. Although there may be hourly and daily variations in osmotic balance, an animal is generally in an osmotic steady state over the long term. Organisms in aquatic and terrestrial environments must maintain the right concentration of solutes and amount of water in their body fluids; this involves excretion (getting rid of metabolic nitrogen wastes and other substances such as hormones that would be toxic if allowed to accumulate in the blood) through organs such as the skin and the kidneys. 
Regulators and conformers Two major types of osmoregulation are osmoconformers and osmoregulators. Osmoconformers match their body osmolarity to their environment actively or passively. Most marine invertebrates are osmoconformers, although their ionic composition may be different from that of seawater. In a strictly osmoregulating animal, the amounts of internal salt and water are held relatively constant in the face of environmental changes. It requires that intake and outflow of water and salts be equal over an extended period of time. Organisms that maintain an internal osmolarity different from the medium in which they are immersed have been termed osmoregulators. They tightly regulate their body osmolarity, maintaining constant internal c The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What happens when water is removed from the endospore? A. its metabolism halts B. hibernation C. suspension D. reversal Answer:
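The Boyle–van 't Hoff relation quoted in the pressure-volume-curve excerpt above states that, in an ideal osmotic system, the product of osmotic potential and solution volume is constant. A minimal sketch of that inverse relationship follows; the reference values (-1.0 MPa at relative volume 1.0) are hypothetical, chosen only for illustration:

```python
# Ideal osmotic system (Boyle-van 't Hoff relation): psi_pi * V = constant.
# Hypothetical reference state: osmotic potential -1.0 MPa at relative volume 1.0.
PSI_REF, V_REF = -1.0, 1.0
K = PSI_REF * V_REF  # the constant for this amount of solutes

def osmotic_potential(volume):
    """Osmotic potential (MPa) at a given relative solution volume."""
    return K / volume

# Losing half the water makes the osmotic potential twice as negative:
print(osmotic_potential(0.5))  # -2.0
```

The sign convention here follows plant-water-relations usage, where osmotic potential is negative and becomes more negative as solutes concentrate.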
sciq-6794
multiple_choice
When is a moving car said to be in dynamic equilibrium?
[ "when accelerating", "at rest", "zero net force", "at homeostasis" ]
C
Relavent Documents: Document 0::: Vehicle dynamics is the study of vehicle motion, e.g., how a vehicle's forward movement changes in response to driver inputs, propulsion system outputs, ambient conditions, air/surface/water conditions, etc. Vehicle dynamics is a part of engineering primarily based on classical mechanics. It may be applied for motorized vehicles (such as automobiles), bicycles and motorcycles, aircraft, and watercraft. Factors affecting vehicle dynamics The aspects of a vehicle's design which affect the dynamics can be grouped into drivetrain and braking, suspension and steering, distribution of mass, aerodynamics and tires. Drivetrain and braking Automobile layout (i.e. location of engine and driven wheels) Powertrain Braking system Suspension and steering Some attributes relate to the geometry of the suspension, steering and chassis. These include: Ackermann steering geometry Axle track Camber angle Caster angle Ride height Roll center Scrub radius Steering ratio Toe Wheel alignment Wheelbase Distribution of mass Some attributes or aspects of vehicle dynamics are purely due to mass and its distribution. These include: Center of mass Moment of inertia Roll moment Sprung mass Unsprung mass Weight distribution Aerodynamics Some attributes or aspects of vehicle dynamics are purely aerodynamic. These include: Automobile drag coefficient Automotive aerodynamics Center of pressure Downforce Ground effect in cars Tires Some attributes or aspects of vehicle dynamics can be attributed directly to the tires. These include: Camber thrust Circle of forces Contact patch Cornering force Ground pressure Pacejka's Magic Formula Pneumatic trail Radial Force Variation Relaxation length Rolling resistance Self aligning torque Skid Slip angle Slip (vehicle dynamics) Spinout Steering ratio Tire load sensitivity Vehicle behaviours Some attributes or aspects of vehicle dynamics are purely dynamic. 
These include: Body flex Body roll Bump Steer Bu Document 1::: Dynamic balance is the branch of mechanics that is concerned with the effects of forces on the motion of a body or system of bodies, especially of forces that do not originate within the system itself, which is also called kinetics. Dynamic balance is the ability of an object to balance while in motion or switching between positions. Document 2::: In classical mechanics, a particle is in mechanical equilibrium if the net force on that particle is zero. By extension, a physical system made up of many parts is in mechanical equilibrium if the net force on each of its individual parts is zero. In addition to defining mechanical equilibrium in terms of force, there are many alternative definitions for mechanical equilibrium which are all mathematically equivalent. In terms of momentum, a system is in equilibrium if the momentum of its parts is all constant. In terms of velocity, the system is in equilibrium if velocity is constant. In a rotational mechanical equilibrium the angular momentum of the object is conserved and the net torque is zero. More generally in conservative systems, equilibrium is established at a point in configuration space where the gradient of the potential energy with respect to the generalized coordinates is zero. If a particle in equilibrium has zero velocity, that particle is in static equilibrium. Since all particles in equilibrium have constant velocity, it is always possible to find an inertial reference frame in which the particle is stationary with respect to the frame. Stability An important property of systems at mechanical equilibrium is their stability. Potential energy stability test If we have a function which describes the system's potential energy, we can determine the system's equilibria using calculus. A system is in mechanical equilibrium at the critical points of the function describing the system's potential energy. 
We can locate these points using the fact that the derivative of the function is zero at these points. To determine whether or not the system is stable or unstable, we apply the second derivative test. With dV/dx = 0 denoting the static equation of motion of a system with a single degree of freedom, we can perform the following calculations: Second derivative < 0 (d²V/dx² < 0): The potential energy is at a local maximum, which means that the system is in an unstable equilibri Document 3::: Free drift mode refers to the state of motion of an object in orbit whereby constant attitude is not maintained. When attitude is lost, the object is said to be in free drift, thereby relying on its own inertia to avoid attitude drift. This mode is often engaged purposefully as it can be useful when modifying, upgrading, or repairing an object in space, such as the International Space Station. Additionally, it allows work on areas near the thrusters on the ISS that are generally used to maintain attitude. While in free drift it is not possible to fully use the solar arrays on the ISS. This can cause a drop in power generation, requiring the conservation of energy. This may affect many systems that otherwise require a lot of energy. The amount of time that an object such as the ISS can remain safely in free-drift varies depending on moment of inertia, perturbation torques, tidal gradients, etc. The ISS itself generally can last about 45 minutes in this mode. Notes Document 4::: Initial stability or primary stability is the resistance of a boat to small changes in the difference between the vertical forces applied on its two sides. The study of initial stability and secondary stability are part of naval architecture as applied to small watercraft (as distinct from the study of ship stability concerning large ships). 
Determination The Initial stability is determined by the angle of tilting on each side of the boat as its center of gravity (CG) moves sideways as a result of the passengers or cargo moving laterally or as a response to an external force (e.g., a wave). The wider the boat and the further its volume is distributed away from its center line (CL), the greater the initial stability. Examples Wide mono-hull small boats such as the johnboat have a great deal of initial stability and allow the occupants to stand upright to engage in fishing activities, and so do narrower small boats such as W-kayaks that feature a twin hull. Very narrow mono-hull boats such as canoes and kayaks have little initial stability, but twin-hull W-kayaks are considerably more stable due to the fact that their buoyancy is distributed at a greater distance from their center line and therefore acts more effectively to reduce tilting. For purposes of stability, it is advantageous to keep the centre of gravity as low as possible in small boats, so occupants are generally seated. Flatwater rowing shells, which have length-to-beam ratios of up to 30:1, are inherently unstable. Compared to secondary stability After approximately 10 degrees of lateral tilt, hull shape gains importance, and secondary stability becomes the dominant consideration in boat stability. Other types of ship stability Secondary stability Tertiary stability: For kayak rolling, tertiary stability, or the stability of an upside-down kayak, is also important (lower tertiary stability makes rolling up easier) See also Ship stability Kayak#Types of stability Limit of positive stabi The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. When is a moving car said to be in dynamic equilibrium? A. when accelerating B. at rest C. zero net force D. at homeostasis Answer:
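The potential-energy stability test described in the mechanical-equilibrium excerpt above can be sketched numerically: find a critical point of the potential V, then check the sign of the second derivative there. The double-well potential V(x) = x^4 - x^2 and the finite-difference step are hypothetical choices for illustration, not taken from the excerpt:

```python
def classify_equilibrium(V, x0, h=1e-5):
    """Second derivative test for potential V at a critical point x0 (dV/dx = 0)."""
    d2 = (V(x0 + h) - 2 * V(x0) + V(x0 - h)) / h**2  # central-difference V''(x0)
    if d2 > 0:
        return "stable"        # local minimum of the potential energy
    if d2 < 0:
        return "unstable"      # local maximum of the potential energy
    return "inconclusive"      # test fails; a higher-order test is needed

# Hypothetical double-well potential: critical points at x = 0 and x = +/- 1/sqrt(2).
V = lambda x: x**4 - x**2
print(classify_equilibrium(V, 0.0))         # unstable (local maximum)
print(classify_equilibrium(V, 0.5 ** 0.5))  # stable (local minimum)
```

With an analytic potential one would differentiate symbolically instead; the finite-difference version only needs the potential as a black-box function.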
sciq-1708
multiple_choice
Most diseases caused by bacteria can be cured by which medicines?
[ "hydroxides", "antioxidants", "antibiotics", "inhibitors" ]
C
Relevant Documents: Document 0::: 1972 – amoxicillin 1972 – cefradine 1972 – minocycline 1972 – pristinamycin 1973 – fosfomycin 1974 – talampicillin 1975 – tobramycin 1975 – bacampicillin 1975 – ticarcillin 1976 – amikacin 1977 – azlocillin 1977 – cefadroxil 1977 – cefamandole 1977 – cefoxitin 1977 – c Document 1::: Antimicrobial resistance (AMR) occurs when microbes evolve mechanisms that protect them from the effects of antimicrobials (drugs used to treat infections). All classes of microbes can evolve resistance where the drugs are no longer effective. Fungi evolve antifungal resistance. Viruses evolve antiviral resistance. Protozoa evolve antiprotozoal resistance, and bacteria evolve antibiotic resistance. Together all of these come under the umbrella of antimicrobial resistance. Microbes resistant to multiple antimicrobials are called multidrug resistant (MDR) and are sometimes referred to as superbugs. Although antimicrobial resistance is a naturally occurring process, it is often the result of improper usage of the drugs and management of the infections. Antibiotic resistance is a major subset of AMR that applies specifically to bacteria that become resistant to antibiotics. Resistance in bacteria can arise naturally by genetic mutation, or by one species acquiring resistance from another. Resistance can appear spontaneously because of random mutations, but also arises through spreading of resistant genes through horizontal gene transfer. However, extended use of antibiotics appears to encourage selection for mutations which can render antibiotics ineffective. Antifungal resistance is a subset of AMR that specifically applies to fungi that have become resistant to antifungals. Resistance to antifungals can arise naturally, for example by genetic mutation or through aneuploidy. Extended use of antifungals leads to development of antifungal resistance through various mechanisms. 
Clinical conditions due to infections caused by microbes containing AMR cause millions of deaths each year. In 2019 there were around 1.27 million deaths globally caused by bacterial AMR. Infections caused by resistant microbes are more difficult to treat, requiring higher doses of antimicrobial drugs, more expensive antibiotics, or alternative medications which may prove more toxic. These appr Document 2::: MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States. Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students, the remainder require a subscription. the service is suspended with the message to: "Please check back with us in 2017". External links MicrobeLibrary Microbiology Document 3::: Antimicrobials destroy bacteria, viruses, fungi, algae, and other microbes. The cells of bacteria (prokaryotes), such as salmonella, differ from those of higher-level organisms (eukaryotes), such as fish. Antibiotics are chemicals designed to either kill or inhibit the growth of pathogenic bacteria while exploiting the differences between prokaryotes and eukaryotes in order to make them relatively harmless in higher-level organisms. Antibiotics are constructed to act in one of three ways: by disrupting cell membranes of bacteria (rendering them unable to regulate themselves), by impeding DNA or protein synthesis, or by hampering the activity of certain enzymes unique to bacteria. Antibiotics are used in aquaculture to treat diseases caused by bacteria. Sometimes the antibiotics are used to treat diseases, but more commonly antibiotics are used to prevent diseases by treating the water or fish before disease occurs. 
While this prophylactic method of preventing disease is profitable because it prevents loss and allows fish to grow more quickly, there are several downsides. The overuse of antibiotics can create antibiotic-resistant bacteria. Antibiotic-resistant bacteria can spontaneously arise when selective pressure to survive results in changes to the DNA sequence of a bacterium allowing that bacterium to survive antibiotic treatments. Because some of the same antibiotics are used to treat fish that are used to treat human disease, pathogenic bacteria causing human disease can also become resistant to antibiotics as a result of treatment of fish with antibiotics. For this reason, the overuse of antibiotics in treatment of fish aquaculture (among other agricultural uses) could create public health issues. Overview The issue has two sides. In some countries, clean water supplies for aquaculture are extremely limited. Untreated animal manure and human waste are used as feed in shrimp farms and tilapia farms in China and Thailand, in addition to the collection Document 4::: Pharmaceutical microbiology is an applied branch of microbiology. It involves the study of microorganisms associated with the manufacture of pharmaceuticals e.g. minimizing the number of microorganisms in a process environment, excluding microorganisms and microbial byproducts like exotoxin and endotoxin from water and other starting materials, and ensuring the finished pharmaceutical product is sterile. Other aspects of pharmaceutical microbiology include the research and development of anti-infective agents, the use of microorganisms to detect mutagenic and carcinogenic activity in prospective drugs, and the use of microorganisms in the manufacture of pharmaceutical products like insulin and human growth hormone. Drug safety Drug safety is a major focus of pharmaceutical microbiology. 
Pathogenic bacteria, fungi (yeasts and moulds) and toxins produced by microorganisms are all possible contaminants of medicines, although stringent, regulated processes are in place to ensure the risk is minimal. Antimicrobial activity and disinfection Another major focus of pharmaceutical microbiology is to determine how a product will react in cases of contamination. For example: You have a bottle of cough medicine. Imagine you take the lid off, pour yourself a dose and forget to replace the lid. You come back to take your next dose and discover that you did indeed leave the lid off for a few hours. What happens if a microorganism "fell in" whilst the lid was off? There are tests that look at that. The product is "challenged" with a known amount of specific microorganisms, such as E. coli and C. albicans, and the anti-microbial activity monitored. Pharmaceutical microbiology is additionally involved with the validation of disinfectants, either according to U.S. AOAC or European CEN standards, to evaluate the efficacy of disinfectants in suspension, on surfaces, and through field trials. Field trials help to establish the frequency of the application of detergents and disi The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Most diseases caused by bacteria can be cured by which medicines? A. hydroxides B. antioxidants C. antibiotics D. inhibitors Answer:
sciq-707
multiple_choice
What is a common age-related bone disease in which bone density and strength is decreased?
[ "fibrosis", "lupus", "arthritis", "osteoporosis" ]
D
Relevant Documents: Document 0::: Senile osteoporosis has been recently recognized as a geriatric syndrome with a particular pathophysiology. There are different classifications of osteoporosis: primary, in which bone loss is a result of aging, and secondary, in which bone loss occurs from various clinical and lifestyle factors. Primary, or involuntary osteoporosis, can further be classified into Type I or Type II. Type I refers to postmenopausal osteoporosis and is caused by the deficiency of estrogen. Senile osteoporosis is categorized as an involuntary, Type II, primary osteoporosis, which affects both men and women over the age of 70 years. It is accompanied by vitamin D deficiency, the body's failure to absorb calcium, and increased parathyroid hormone. Research over the years has shown that senile osteoporosis is the product of a skeleton in an advanced stage of life and can be caused by a calcium deficiency. However, physicians are also coming to the conclusion that multiple mechanisms in the development stages of the disease interact together resulting in an osteoporotic bone, regardless of age. Still, elderly people make up the fastest growing population in the world. As bone mass declines with age, the risk of fractures increases. Annual incidence of osteoporotic fractures is more than 1.5 million in the US, and notably 20% of people die during the first year after a hip fracture. It costs the US health system around $17 billion annually, with the cost projected to reach $50 billion by 2040. These costs represent a higher burden compared to other disease states, such as breast cancer, stroke, diabetes, or chronic lung disease. Although there are cost-effective and well-tolerated treatments, only 23% of diagnosed women over 67 have received either bone mineral density (BMD) tests or a prescription for treatment after fracture. 
The clinical and economic burdens indicate there should be more effort in assessment of risk, prevention, and early intervention when it comes to osteoporo Document 1::: FRAX (Fracture Risk Assessment Tool) is a diagnostic tool used to evaluate the 10-year probability of bone fracture risk. It was developed by the University of Sheffield. FRAX integrates clinical risk factors and bone mineral density at the femoral neck to calculate the 10-year probability of hip fracture and the 10-year probability of a major osteoporotic fracture (clinical spine, forearm, hip or shoulder fracture). The models used to develop the FRAX diagnostic tool were derived from studying patient populations in North America, Europe, Latin America, Asia and Australia. Components The parameters included in a FRAX assessment are: Country Age Sex Weight Height Previous fracture Hip fracture in the subject's mother or father Smoking Glucocorticoid treatment Rheumatoid arthritis Disease strongly associated with osteoporosis Alcohol intake of 3 or more standard drinks per day Bone mineral density (BMD) of the femoral neck Trabecular bone score (optional) Availability and usage FRAX is freely accessible online, and commercially available as a desktop application, in paper-form as a FRAX Pad, as an iPhone application, and as an Android application. The tool is compatible with 58 models for 53 countries, and is available in 28 languages. FRAX is incorporated into many national guidelines around the world, including those of Belgium, Canada, Japan, Netherlands, Poland, Sweden, Switzerland, UK (NOGG), and US (NOF). FRAX assessments are intended to provide guidance for determining access to treatment in healthcare systems. Adjustments Glucocorticoid use is included FRAX as a dichotomous variable, whereas the increased risk for fractures seen with glucocorticoid use is dependent on glucocorticoid dose and duration of use. Several methods have been proposed how to adjust FRAX accordingly. 
Though known to be a risk factor for fractures, Type 2 Diabetes is not included as such in FRAX. Some clinicians choose rheumatoid arthritis as an equivalent risk factor instead. Document 2::: The Winquist and Hansen classification is a system of categorizing femoral shaft fractures based upon the degree of comminution. Classification Document 3::: Arthritis of the knee is typically a particularly debilitating form of arthritis. The knee may become affected by almost any form of arthritis. The word arthritis refers to inflammation of the joints. Types of arthritis include those related to wear and tear of cartilage, such as osteoarthritis, to those associated with inflammation resulting from an overactive immune system (such as rheumatoid arthritis). Causes It is not always certain why arthritis of the knee develops. The knee may become affected by almost any form of arthritis, including those related to mechanical damage of the structures of the knee (osteoarthritis, and post-traumatic arthritis), various autoimmune forms of arthritis (including; rheumatoid arthritis, juvenile arthritis, and SLE-related arthritis, psoriatic arthritis, and ankylosing spondylitis), arthritis due to infectious causes (including Lyme disease-related arthritis), gouty arthritis, or reactive arthritis. Osteoarthritis of the knee The knee is one of the joints most commonly affected by osteoarthritis. Cartilage in the knee may begin to break down after sustained stress, leaving the bones of the knee rubbing against each other and resulting in osteoarthritis. Nearly a third of US citizens are affected by osteoarthritis of the knee by age 70. Obesity is a known and very significant risk factor for the development of osteoarthritis. Risk increases proportionally to body weight. Obesity contributes to OA development, not only by increasing the mechanical stress exerted upon the knees when standing, but also leads to increased production of compounds that may cause joint inflammation. 
Parity is associated with an increased risk of knee OA and likelihood of knee replacement. The risk increases in proportion to the number of children the woman has birthed. This may be due to weight gain after pregnancy, or increased body weight and consequent joint stress during pregnancy. Flat feet are a significant risk factor for the development Document 4::: Osteosclerosis is a disorder that is characterized by abnormal hardening of bone and an elevation in bone density. It may predominantly affect the medullary portion and/or cortex of bone. Plain radiographs are a valuable tool for detecting and classifying osteosclerotic disorders. It can manifest in localized or generalized osteosclerosis. Localized osteosclerosis can be caused by Legg–Calvé–Perthes disease, sickle-cell disease and osteoarthritis among others. Osteosclerosis can be classified in accordance with the causative factor into acquired and hereditary. Types Acquired osteosclerosis Osteogenic bone metastasis caused by carcinoma of prostate and breast Paget's disease of bone Myelofibrosis (primary disorder or secondary to intoxication or malignancy) Osteosclerosing types of chronic osteomyelitis Hypervitaminosis D hyperparathyroidism Schnitzler syndrome Mastocytosis Skeletal fluorosis Monoclonal IgM Kappa cryoglobulinemia Hepatitis C. Hereditary osteosclerosis Malignant infantile osteopetrosis Neuropathic infantile osteopetrosis Infantile osteopetrosis with renal tubular acidosis Infantile osteopetrosis with immunodeficiency IO with leukocyte adhesion deficiency syndrome (LAD-III) Intermediate osteopetrosis Autosomal dominant osteopetrosis (Albers-Schonberg) Pyknodysostosis (osteopetrosis acro-osteolytica) Osteopoikilosis (Buschke–Ollendorff syndrome) Osteopathia striata with cranial sclerosis Mixed sclerosing bone dysplasia Progressive diaphyseal dysplasia (Camurati–Engelmann disease) SOST-related sclerosing bone dysplasias Diagnosis Osteosclerosis can be detected with a simple radiography. 
There are white portions of the bone which appear due to the increased number of bone trabeculae. Animals In the animal kingdom, there also exists a non-pathological form of osteosclerosis, resulting in unusually solid bone structure with little to no marrow. It is often seen in aquatic vertebrates, especially those living in shallow waters The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is a common age-related bone disease in which bone density and strength is decreased? A. fibrosis B. lupus C. arthritis D. osteoporosis Answer:
scienceQA-11697
multiple_choice
What do these two changes have in common? making jam photosynthesis
[ "Both are chemical changes.", "Both are caused by cooling.", "Both are only physical changes.", "Both are caused by heating." ]
A
Step 1: Think about each change. Making jam is a chemical change. It involves mixing fruit, sugar, and a substance called pectin. When these ingredients are mixed and cooked, the chemical bonds in their molecules are broken. The atoms then link together to form different molecules that make up the jam. Photosynthesis is a chemical change. Plants make sugar using carbon dioxide, water, and energy from sunlight. Step 2: Look at each answer choice. Both are only physical changes. Both changes are chemical changes. They are not physical changes. Both are chemical changes. Both changes are chemical changes. The type of matter before and after each change is different. Both are caused by heating. The reaction that makes jam is caused by heating. But photosynthesis is not. Both are caused by cooling. Neither change is caused by cooling.
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds. Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate. A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density. An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge. Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change. Examples Heating and cooling Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. 
Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation. Magnetism Ferro-magnetic materials can become magnetic. The process is reve Document 2::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. 
Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes Document 3::: Ecophysiology (from Greek , oikos, "house(hold)"; , physis, "nature, origin"; and , -logia), environmental physiology or physiological ecology is a biological discipline that studies the response of an organism's physiology to environmental conditions. It is closely related to comparative physiology and evolutionary physiology. Ernst Haeckel's coinage bionomy is sometimes employed as a synonym. Plants Plant ecophysiology is concerned largely with two topics: mechanisms (how plants sense and respond to environmental change) and scaling or integration (how the responses to highly variable conditions—for example, gradients from full sunlight to 95% shade within tree canopies—are coordinated with one another), and how their collective effect on plant growth and gas exchange can be understood on this basis. In many cases, animals are able to escape unfavourable and changing environmental factors such as heat, cold, drought or floods, while plants are unable to move away and therefore must endure the adverse conditions or perish (animals go places, plants grow places). Plants are therefore phenotypically plastic and have an impressive array of genes that aid in acclimating to changing conditions. It is hypothesized that this large number of genes can be partly explained by plant species' need to live in a wider range of conditions. Light Light is the food of plants, i.e. the form of energy that plants use to build themselves and reproduce. The organs harvesting light in plants are leaves and the process through which light is converted into biomass is photosynthesis. The response of photosynthesis to light is called light response curve of net photosynthesis (PI curve). 
The shape is typically described by a non-rectangular hyperbola. Three quantities of the light response curve are particularly useful in characterising a plant's response to light intensities. The inclined asymptote has a positive slope representing the efficiency of light use, and is called quantum Document 4::: Analysis (: analyses) is the process of breaking a complex topic or substance into smaller parts in order to gain a better understanding of it. The technique has been applied in the study of mathematics and logic since before Aristotle (384–322 B.C.), though analysis as a formal concept is a relatively recent development. The word comes from the Ancient Greek (analysis, "a breaking-up" or "an untying;" from ana- "up, throughout" and lysis "a loosening"). From it also comes the word's plural, analyses. As a formal concept, the method has variously been ascribed to Alhazen, René Descartes (Discourse on the Method), and Galileo Galilei. It has also been ascribed to Isaac Newton, in the form of a practical method of physical discovery (which he did not name). The converse of analysis is synthesis: putting the pieces back together again in a new or different whole. Applications Science The field of chemistry uses analysis in three ways: to identify the components of a particular chemical compound (qualitative analysis), to identify the proportions of components in a mixture (quantitative analysis), and to break down chemical processes and examine chemical reactions between elements of matter. For an example of its use, analysis of the concentration of elements is important in managing a nuclear reactor, so nuclear scientists will analyze neutron activation to develop discrete measurements within vast samples. A matrix can have a considerable effect on the way a chemical analysis is conducted and the quality of its results. Analysis can be done manually or with a device. 
Types of Analysis: A) Qualitative Analysis: It is concerned with which components are in a given sample or compound. Example: Precipitation reaction B) Quantitative Analysis: It is to determine the quantity of individual component present in a given sample or compound. Example: To find concentration by uv-spectrophotometer. Isotopes Chemists can use isotope analysis to assist analysts with i The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What do these two changes have in common? making jam photosynthesis A. Both are chemical changes. B. Both are caused by cooling. C. Both are only physical changes. D. Both are caused by heating. Answer:
sciq-9933
multiple_choice
Whisk ferns have yellow sporangia and no what?
[ "leaves", "roots", "stems", "flowers" ]
A
Relevant Documents: Document 0::: The following outline is provided as an overview of and topical guide to botany: Botany – biological discipline which involves the study of plants. Core concepts of botany Bud Cell wall Chlorophyll Chloroplast Flora Flower Fruit Forest Leaf Meristem Photosynthesis Plant Plant cell Pollen Seed Seedling Spore Tree Vine Wood Subdisciplines of botany Branches of botany Agronomy Bryology (mosses and liverworts) Dendrology (woody plants) Ethnobotany Lichenology (lichens) Mycology (fungi) Paleobotany Palynology (spores and pollen) Phycology (algae) Phytosociology Plant anatomy Plant ecology Plant evolution Plant morphology Plant pathology Plant physiology Plant taxonomy Pteridology (ferns) History of botany History of botany History of plant systematics Kinds of plants Major plant groups Algae Cyanobacteria Brown algae Charophyta Chlorophyta Desmid Diatom Red algae Green algae Bryophytes Anthocerotophyta (hornworts) Bryophyta (mosses) Marchantiophyta (liverworts) Pteridophytes Lycopodiophyta (club mosses) Pteridophyta (ferns & horsetails) Rhyniophyta (early plants) Gymnosperms Pteridospermatophyta (seed "ferns") Cycadophyta Ginkgophyta Gnetophyta Pinophyta (conifers) Angiosperms Dicotyledon Asteraceae (sunflower family) Cactaceae (cactus family) Fabaceae (legume family) Lamiaceae (mint family) Rosaceae (rose family) Monocotyledon Araceae (arum family) Arecaceae (palm family) Iridaceae (iris family) Orchidaceae (orchid family) Poaceae (grass family) Some well-known plants List of culinary fruits List of edible seeds List of culinary herbs and spices List of culinary nuts List of vegetables List of woods General plant species concepts Plant taxonomy Cultivated plant taxonomy List of systems of plant taxonomy Clades Monophyletic Polyphyletic Speciation Isolating mechanisms Concept of species Species problem Notable botanists In alphabetical order by surname: Aristotle Arthur C Document 1::: Edible plant stems are one part of plants that are
eaten by humans. Most plants are made up of stems, roots, leaves, flowers, and produce fruits containing seeds. Humans most commonly eat the seeds (e.g. maize, wheat), fruit (e.g. tomato, avocado, banana), flowers (e.g. broccoli), leaves (e.g. lettuce, spinach, and cabbage), roots (e.g. carrots, beets), and stems (e.g. asparagus) of many plants. There are also a few edible petioles (also known as leaf stems) such as celery or rhubarb. Plant stems have a variety of functions. Stems support the entire plant and have buds, leaves, flowers, and fruits. Stems are also a vital connection between leaves and roots. They conduct water and mineral nutrients through xylem tissue from roots upward, and organic compounds and some mineral nutrients through phloem tissue in any direction within the plant. Apical meristems, located at the shoot tip and axillary buds on the stem, allow plants to increase in length, surface, and mass. In some plants, such as cactus, stems are specialized for photosynthesis and water storage. Modified stems Typical stems are located above ground, but there are modified stems that can be found either above or below ground. Modified stems located above ground are phylloids, stolons, runners, or spurs. Modified stems located below ground are corms, rhizomes, and tubers. Detailed description of edible plant stems Asparagus The edible portion is the rapidly emerging stems that arise from the crowns in the Bamboo The edible portion is the young shoot (culm). Birch Trunk sap is drunk as a tonic or rendered into birch syrup, vinegar, beer, soft drinks, and other foods. Broccoli The edible portion is the peduncle stem tissue, flower buds, and some small leaves. Cauliflower The edible portion is proliferated peduncle and flower tissue. Cinnamon Many favor the unique sweet flavor of the inner bark of cinnamon, and it is commonly used as a spice. Fig The edible portion is stem tissue.
The Document 2::: The British Pteridological Society is for fern enthusiasts of the British Isles, and was founded in England in 1891. The origins and early history of the BPS at the time of "Pteridomania" is described in the book The Victorian Fern Craze. The BPS celebrated its centenary in 1991; amongst other things, it was marked by the publication of the book, A World of Ferns. The British Pteridological Society is a registered charity: No. 1092399. The BPS has as its Patron the Prince of Wales. Publications The British Pteridological Society publishes a number of works, which promote pteridology: The Fern Gazette The Pteridologist The Bulletin Presidents of the Society John A. Wilson (1831-1914) was elected Chairman of the Society at the first meeting in 1891; subsequently Dr. F.W. Stansfield was invited to become the first President of the Society. He took office in 1892. Document 3::: Microspores are land plant spores that develop into male gametophytes, whereas megaspores develop into female gametophytes. The male gametophyte gives rise to sperm cells, which are used for fertilization of an egg cell to form a zygote. Megaspores are structures that are part of the alternation of generations in many seedless vascular cryptogams, all gymnosperms and all angiosperms. Plants with heterosporous life cycles using microspores and megaspores arose independently in several plant groups during the Devonian period. Microspores are haploid, and are produced from diploid microsporocytes by meiosis. Morphology The microspore has three different types of wall layers. The outer layer is called the perispore, the next is the exospore, and the inner layer is the endospore. The perispore is the thickest of the three layers while the exospore and endospore are relatively equal in width. 
Seedless vascular plants In heterosporous seedless vascular plants, modified leaves called microsporophylls bear microsporangia containing many microsporocytes that undergo meiosis, each producing four microspores. Each microspore may develop into a male gametophyte consisting of a somewhat spherical antheridium within the microspore wall. Either 128 or 256 sperm cells with flagella are produced in each antheridium. The only heterosporous ferns are aquatic or semi-aquatic, including the genera Marsilea, Regnellidium, Pilularia, Salvinia, and Azolla. Heterospory also occurs in the lycopods in the spikemoss genus Selaginella and in the quillwort genus Isoëtes. Types of seedless vascular plants: Water ferns Spikemosses Quillworts Gymnosperms In seed plants the microspores develop into pollen grains each containing a reduced, multicellular male gametophyte. The megaspores, in turn, develop into reduced female gametophytes that produce egg cells that, once fertilized, develop into seeds. Pollen cones or microstrobili usually develop toward the tips of the lower branches in cluste Document 4::: Non-vascular plants are plants without a vascular system consisting of xylem and phloem. Instead, they may possess simpler tissues that have specialized functions for the internal transport of water. Non-vascular plants include two distantly related groups: Bryophytes, an informal group that taxonomists treat as three separate land-plant divisions, namely: Bryophyta (mosses), Marchantiophyta (liverworts), and Anthocerotophyta (hornworts). In all bryophytes, the primary plants are the haploid gametophytes, with the only diploid portion being the attached sporophyte, consisting of a stalk and sporangium. Because these plants lack lignified water-conducting tissues, they cannot become as tall as most vascular plants. Algae, especially green algae. The algae consist of several unrelated groups. 
Only the groups included in the Viridiplantae are still considered relatives of land plants. These groups are sometimes called "lower plants", referring to their status as the earliest plant groups to evolve, but the usage is imprecise since both groups are polyphyletic and may be used to include vascular cryptogams, such as the ferns and fern allies that reproduce using spores. Non-vascular plants are often among the first species to move into new and inhospitable territories, along with prokaryotes and protists, and thus function as pioneer species. Non-vascular plants do not have a wide variety of specialized tissue types. Mosses and leafy liverworts have structures called phyllids that resemble leaves, but only consist of single sheets of cells with no internal air spaces, no cuticle or stomata, and no xylem or phloem. Consequently, phyllids are unable to control the rate of water loss from their tissues and are said to be poikilohydric. Some liverworts, such as Marchantia, have a cuticle, and the sporophytes of mosses have both cuticles and stomata, which were important in the evolution of land plants. All land plants have a life cycle with an alternation of generatio The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Whisk ferns have yellow sporangia and no what? A. leaves B. roots C. stems D. flowers Answer:
sciq-9984
multiple_choice
Translation is the second part of the central dogma of what?
[ "molecular gastronomy", "relativity", "string theory", "molecular biology" ]
D
Relevant Documents: Document 0::: Math in Moscow (MiM) is a one-semester study abroad program for North American and European undergraduates held at the Independent University of Moscow (IUM) in Moscow, Russia. The program consists mainly of math courses that are taught in English. The program was first offered in 2001, and since 2008 has been run jointly by the Independent University of Moscow, Moscow Center for Continuous Mathematical Education, and the Higher School of Economics (HSE). The program has hosted over 200 participants, including students from Harvard, Princeton, MIT, Harvey Mudd, Berkeley, Cornell, Yale, Wesleyan, McGill, Toronto, and Montreal. Features The MiM semester lasts fifteen weeks with fourteen weeks of teaching and one week of exams. Math courses are lectured by professors of the Independent University of Moscow and the Math Department of National Research University Higher School of Economics. The cultural elements of the program include organized trips to Saint Petersburg and to the Golden Ring towns of Vladimir and Suzdal. Students live in the dormitory of the Higher School of Economics. Each semester the American Mathematical Society offers up to five "Math in Moscow" scholarships provided by the National Science Foundation to US undergraduates, and the Canadian Mathematical Society offers one or two NSERC scholarships to Canadian students. The program is often reviewed favorably by North American students and their departments. Curriculum The primary curriculum is entirely mathematical, drawing from every major field of mathematics. All courses are taught jointly with the Higher School of Economics, and are often attended by students from the HSE master's program. Likewise, Math in Moscow participants may attend open lectures and seminars at the Higher School of Economics.
The Math in Moscow courses are formally divided into three groups according to the expected prerequisites, however admitted students may choose to attend whichever and as many courses as they Document 1::: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. 
The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New Document 2::: Translational research (also called translation research, translational science, or, when the context is clear, simply translation) is research aimed at translating (converting) results in basic research into results that directly benefit humans. The term is used in science and technology, especially in biology and medical science. As such, translational research forms a subset of applied research. The term has been used most commonly in life-sciences and biotechnology but applies across the spectrum of science and humanities. In the context of biomedicine, translational research is also known as bench to bedside. In the field of education, it is defined as research which translates concepts to classroom practice. Critics of translational medical research (to the exclusion of more basic research) point to examples of important drugs that arose from fortuitous discoveries in the course of basic research such as penicillin and benzodiazepines. Other problems have stemmed from the widespread irreproducibility thought to exist in translational research literature. Although translational research is relatively new, there are now several major research centers focused on it. In the U.S., the National Institutes of Health has implemented a major national initiative to leverage existing academic health center infrastructure through the Clinical and Translational Science Awards. Furthermore, some universities acknowledge translational research as its own field to study for a PhD or graduate certificate in. 
Definitions Translational research is aimed at solving particular problems; the term has been used most commonly in life-sciences and biotechnology but applies across the spectrum of science and humanities. In the field of education, it is defined for school-based education by the Education Futures Collaboration (www.meshguides.org) as research which translates concepts to classroom practice. Examples of translational research are commonly found in education subject Document 3::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. 
An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well. Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 4::: Biology by Team (in German: Biologie im Team) is the first Austrian biology contest for upper secondary schools. Students at upper secondary schools who are especially interested in biology can deepen their knowledge and broaden their competence in experimental biology within the framework of this contest. Each year, a team of teachers chooses modules of key themes on which students work in the form of a voluntary exercise. The evaluation focuses in particular on the practical work, and, since the school year 2004/05, also on teamwork. In April, a two-day closing competition takes place, in which six groups of students from participating schools are given various problems to solve. A jury (persons from the science and corporate communities) evaluates the results and how they are presented. The concept was developed by a team of teachers in co-operation with the AHS (Academic Secondary Schools) - Department of the Pedagogical Institute in Carinthia. Since 2008 it is situated at the Science department of the University College of Teacher Training Carinthia. The first contest in the school year 2002/03 took place under the motto: Hell is loose in the ground under us.
Other themes included Beautiful but dangerous, www-worldwide water 1 and 2, Expedition forest, Relationship boxes, Mole's view, Biological timetravel, Biology at the University, Ecce Homo, Biodiversity, Death in tin cans, Sex sells, Without a trace, Biologists see more, Quo vadis biology?, Biology without limits?, Diversity instead of simplicity, Grid square, Diversity instead of simplicity 0.2, www-worldwide water 3. The theme for the year 2023/24 is I hear something you don't see. Till now the following schools were participating: BG/BRG Mössingerstraße Klagenfurt Ingeborg-Bachmann-Gymnasium, Klagenfurt BG/BRG St. Martinerstraße Villach BG/BRG Peraustraße Villach International school Carinthia, Velden Österreichisches Gymnasium Prag Europagymnasium Klagenfurt BRG Viktring Klagenfurt BORG Wo The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. Translation is the second part of the central dogma of what? A. molecular gastronomy B. relativity C. string theory D. molecular biology Answer:
sciq-7789
multiple_choice
A tropism where light is the stimulus is known as what?
[ "Geotropism", "Thermotropism", "Atropism", "phototropism" ]
D
Relevant Documents: Document 0::: Biometeorology is the interdisciplinary field of science that studies the interactions between the biosphere and the Earth's atmosphere on time scales of the order of seasons or shorter (in contrast with bioclimatology). Examples of relevant processes Weather events influence biological processes on short time scales. For instance, as the Sun rises above the horizon in the morning, light levels become sufficient for the process of photosynthesis to take place in plant leaves. Later on, during the day, air temperature and humidity may induce the partial or total closure of the stomata, a typical response of many plants to limit the loss of water through transpiration. More generally, the daily evolution of meteorological variables controls the circadian rhythm of plants and animals alike. Living organisms, for their part, can collectively affect weather patterns. The rate of evapotranspiration of forests, or of any large vegetated area for that matter, contributes to the release of water vapor in the atmosphere. This local, relatively fast and continuous process may contribute significantly to the persistence of precipitation in a given area. As another example, the wilting of plants results in definite changes in leaf angle distribution and therefore modifies the rates of reflection, transmission and absorption of solar light in these plants. That, in turn, changes the albedo of the ecosystem as well as the relative importance of the sensible and latent heat fluxes from the surface to the atmosphere. For an example in oceanography, consider the release of dimethyl sulfide by biological activity in sea water and its impact on atmospheric aerosols. Human biometeorology The methods and measurements traditionally used in biometeorology are not different when applied to study the interactions between human bodies and the atmosphere, but some aspects or applications may have been explored more extensively.
For instance, wind chill has been investigated to determine th Document 1::: A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field. Overview The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate's degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion. With over 50% of America's college-bound students starting their college career at public or private two-year schools, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies. Example programs The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University. A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools.
Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include Document 2::: In visual physiology, adaptation is the ability of the retina of the eye to adjust to various levels of light. Natural night vision, or scotopic vision, is the ability to see under low-light conditions. In humans, rod cells are exclusively responsible for night vision, as cone cells are only able to function at higher illumination levels. Night vision is of lower quality than day vision because it is limited in resolution and colors cannot be discerned; only shades of gray are seen. In order for humans to transition from day to night vision, they must undergo a dark adaptation period of up to two hours in which each eye adjusts from a high to a low luminescence "setting", increasing sensitivity hugely, by many orders of magnitude. This adaptation period is different between rod and cone cells and results from the regeneration of photopigments to increase retinal sensitivity. Light adaptation, in contrast, works very quickly, within seconds. Efficiency The human eye can function from very dark to very bright levels of light; its sensing capabilities reach across nine orders of magnitude. This means that the brightest and the darkest light signal that the eye can sense are a factor of roughly 1,000,000,000 apart. However, at any given moment, the eye can only sense a contrast ratio of 1,000. What enables the wider reach is that the eye adapts its definition of what is black. The eye takes approximately 20–30 minutes to fully adapt from bright sunlight to complete darkness and becomes 10,000 to 1,000,000 times more sensitive than at full daylight. In this process, the eye's perception of color changes as well (this is called the Purkinje effect).
However, it takes approximately five minutes for the eye to adapt from darkness to bright sunlight. This is because cone cells gain sensitivity during the first five minutes in the dark, but rod cells take over after five or more minutes. Cone cells are able to regain maximum retinal sensitivity in 9 Document 3::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered.
The goal of knowledge space theory is to provide a language by which exams can communicate what the student can do and what the student is ready to learn. Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge is then a subset of that set; the set of
They manifested in different colors and shapes. How often each type was seen varied across astronauts' experiences, as evident in a survey of 59 astronauts. Colors On lunar missions, astronauts almost always reported that the flashes were white, with one exception where the astronaut observed "blue with a white cast, like a blue diamond." On other space missions, astronauts reported seeing other colors such as yellow and The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. A tropism where light is the stimulus is known as what? A. geotropism B. thermotropism C. atropism D. phototropism Answer:
sciq-1279
multiple_choice
In animals, what process occurs only in germ cells, which are in the ovaries or testes?
[ "meiosis", "electrolysis", "reproduction", "mitosis" ]
A
Relevant Documents: Document 0::: Spermatogenesis is the process by which haploid spermatozoa develop from germ cells in the seminiferous tubules of the testis. This process starts with the mitotic division of the stem cells located close to the basement membrane of the tubules. These cells are called spermatogonial stem cells. The mitotic division of these produces two types of cells. Type A cells replenish the stem cells, and type B cells differentiate into primary spermatocytes. The primary spermatocyte divides meiotically (Meiosis I) into two secondary spermatocytes; each secondary spermatocyte divides into two equal haploid spermatids by Meiosis II. The spermatids are transformed into spermatozoa (sperm) by the process of spermiogenesis. These develop into mature spermatozoa, also known as sperm cells. Thus, the primary spermatocyte gives rise to two cells, the secondary spermatocytes, and the two secondary spermatocytes by their subdivision produce four spermatozoa and four haploid cells. Spermatozoa are the mature male gametes in many sexually reproducing organisms. Thus, spermatogenesis is the male version of gametogenesis, of which the female equivalent is oogenesis. In mammals it occurs in the seminiferous tubules of the male testes in a stepwise fashion. Spermatogenesis is highly dependent upon optimal conditions for the process to occur correctly, and is essential for sexual reproduction. DNA methylation and histone modification have been implicated in the regulation of this process. It starts during puberty and usually continues uninterrupted until death, although a slight decrease can be discerned in the quantity of produced sperm with increasing age (see Male infertility). Spermatogenesis starts in the bottom part of the seminiferous tubules; progressively, cells move deeper into the tubules and along them until mature spermatozoa reach the lumen, where they are deposited.
The division happens asynchronously; if the tube is cut transversally one could observe different Document 1::: The germ cell nest (germ-line cyst) forms in the ovaries during their development. The nest consists of multiple interconnected oogonia formed by incomplete cell division. The interconnected oogonia are surrounded by somatic cells called granulosa cells. Later on in development, the germ cell nests break down through invasion of granulosa cells. The result is individual oogonia surrounded by a single layer of granulosa cells. There is also a comparable germ cell nest structure in the developing spermatogonia, with interconnected intracellular cytoplasmic bridges. Formation of germ cell nests Prior to meiosis, primordial germ cells (PGCs) migrate to the gonads and mitotically divide along the genital ridge in clusters or nests of cells referred to as germline cysts or germ cell nests. The understanding of germ cell nest formation is limited. However, invertebrate models, especially Drosophila, have provided insight into the mechanisms surrounding formation. In females, it is suggested that cysts form from dividing progenitor cells. During this cyst formation, 4 rounds of division with incomplete cytokinesis occur, resulting in cystocytes that are joined by intercellular bridges, also known as ring canals. Rodent PGCs migrate to the gonads and mitotically divide at embryonic day (E) 10.5. It is at this stage they switch from complete to incomplete cytokinesis during the mitotic cycle from E10.5-E14.5. Germ cell nests emerge following consecutive divisions of progenitor cells resulting from cleavage furrows arresting and forming intercellular bridges. The intercellular bridges are crucial in maintaining effective communication. They ensure meiosis begins immediately after the mitotic cyst formation cycle is complete. In females, mitosis will end at E14.5 and meiosis will commence.
However, it is possible that germ cells may travel to the gonads and cluster together forming nests after their arrival or form through cellular aggregation. Function Most of our understan Document 2::: Oogenesis, ovogenesis, or oögenesis is the differentiation of the ovum (egg cell) into a cell competent to further develop when fertilized. It is developed from the primary oocyte by maturation. Oogenesis is initiated in the embryonic stage. Oogenesis in non-human mammals In mammals, the first part of oogenesis starts in the germinal epithelium, which gives rise to the development of ovarian follicles, the functional unit of the ovary. Oogenesis consists of several sub-processes: oocytogenesis, ootidogenesis, and finally maturation to form an ovum (oogenesis proper). Folliculogenesis is a separate sub-process that accompanies and supports all three oogenetic sub-processes. Oogonium —(Oocytogenesis)—> Primary Oocyte —(Meiosis I)—> First Polar body (Discarded afterward) + Secondary oocyte —(Meiosis II)—> Second Polar Body (Discarded afterward) + Ovum Oocyte meiosis, important to all animal life cycles yet unlike all other instances of animal cell division, occurs completely without the aid of spindle-coordinating centrosomes. The creation of oogonia The creation of oogonia traditionally doesn't belong to oogenesis proper, but, instead, to the common process of gametogenesis, which, in the female human, begins with the processes of folliculogenesis, oocytogenesis, and ootidogenesis. Oogonia enter meiosis during embryonic development, becoming oocytes. Meiosis begins with DNA replication and meiotic crossing over. It then stops in early prophase. Maintenance of meiotic arrest Mammalian oocytes are maintained in meiotic prophase arrest for a very long time—months in mice, years in humans. Initially the arrest is due to lack of sufficient cell cycle proteins to allow meiotic progression.
However, as the oocyte grows, these proteins are synthesized, and meiotic arrest becomes dependent on cyclic AMP. The cyclic AMP is generated by the oocyte by adenylyl cyclase in the oocyte membrane. The adenylyl cyclase is kept active by a constitutively active G-protein-coupled Document 3::: Reproductive biology includes both sexual and asexual reproduction. Reproductive biology includes a wide number of fields: Reproductive systems Endocrinology Sexual development (Puberty) Sexual maturity Reproduction Fertility Human reproductive biology Endocrinology Human reproductive biology is primarily controlled through hormones, which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands, and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands. Reproductive systems Internal and external organs are included in the reproductive system. There are two reproductive systems including the male and female, which contain different organs from one another. These systems work together in order to produce offspring. Female reproductive system The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth. These structures include: Ovaries Oviducts Uterus Vagina Mammary Glands Estrogen is one of the sexual reproductive hormones that aid in the sexual reproductive system of the female. Male reproductive system The male reproductive system includes testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia. Testosterone, an androgen, although present in both males and females, is relatively more abundant in males. 
Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system. However, the enzyme aromatase is present in testes and capable of synthesizing estrogens from androgens. Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract. Animal Reproductive Biology Animal reproduction oc Document 4::: In cellular biology, a somatic cell, or vegetal cell, is any biological cell forming the body of a multicellular organism other than a gamete, germ cell, gametocyte or undifferentiated stem cell. Somatic cells compose the body of an organism and divide through the process of binary fission and mitotic division. In contrast, gametes are cells that fuse during sexual reproduction and germ cells are cells that give rise to gametes. Stem cells also can divide through mitosis, but are different from somatic cells in that they differentiate into diverse specialized cell types. In mammals, somatic cells make up all the internal organs, skin, bones, blood and connective tissue, while mammalian germ cells give rise to spermatozoa and ova which fuse during fertilization to produce a cell called a zygote, which divides and differentiates into the cells of an embryo. There are approximately 220 types of somatic cell in the human body. Theoretically, these cells are not germ cells (the source of gametes); they transmit their mutations to their cellular descendants (if they have any), but not to the organism's descendants. However, in sponges, non-differentiated somatic cells form the germ line and, in Cnidaria, differentiated somatic cells are the source of the germline. Mitotic cell division is only seen in diploid somatic cells. Only some cells like germ cells take part in reproduction. Evolution As multicellularity is theorized to have evolved many times, so too did sterile somatic cells.
The evolution of an immortal germline producing specialized somatic cells involved the emergence of mortality, and can be viewed in its simplest version in volvocine algae. Those species with a separation between sterile somatic cells and a germline are called Weismannists. Weismannist development is relatively rare (e.g., vertebrates, arthropods, Volvox), as many species have the capacity for somatic embryogenesis (e.g., land plants, most algae, and numerous invertebrates). Genetics and chrom The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. In animals, what process occurs only in germ cells, which are in the ovaries or testes? A. meiosis B. electrolysis C. reproduction D. mitosis Answer:
sciq-1135
multiple_choice
In how many states does water exist on Earth?
[ "three matter states", "one matter state", "eight matter states", "two matter states" ]
A
Relevant Documents: Document 0::: Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas. Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below: During adiabatic expansion of an ideal gas, its temperature (a) increases, (b) decreases, (c) stays the same, (d) impossible to tell/need more information. The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in Document 1::: In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH. Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid. Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model. Motivation Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate what the student can do and what the student is ready to learn.
Model structure Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge is then a subset of that set; the set of Document 2::: GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam with no computer-based version. ETS administered this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required this exam score as a part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions within the biochemistry subject test. Scores were scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores were 760 (corresponding to the 99th percentile) and 320 (1st percentile) respectively. The mean score for all test takers from July 2009 to July 2012 was 526 with a standard deviation of 95. After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test had been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17. Content specification Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum.
A sampling of test item content is given below: Biochemistry (36%) A Chemical and Physical Foundations Thermodynamics and kinetics Redox states Water, pH, acid-base reactions and buffers Solutions and equilibria Solute-solvent interactions Chemical interactions and bonding Chemical reaction mechanisms B Structural Biology: Structure, Assembly, Organization and Dynamics Small molecules Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) Supramolecular complexes (e.g. Document 3::: The Force Concept Inventory is a test, developed by Hestenes, Halloun, Wells, and Swackhamer (1985), measuring mastery of concepts commonly taught in a first semester of physics. It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered. The 1995 version has 30 five-way multiple choice questions. Example question (question 4): Gender differences The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher. Document 4::: Single Best Answer (SBA or One Best Answer) is a written examination form of multiple choice questions used extensively in medical education.
Structure A single question is posed, typically with five alternative answers, from which the candidate must choose the best answer. This method avoids the problems of past examinations of a similar form described as Single Correct Answer. The older form can produce confusion where more than one of the possible answers has some validity. The newer form makes it explicit that more than one answer may have elements that are correct, but that one answer will be superior. Prior to the widespread introduction of SBAs into medical education, the typical form of examination consisted of true-false multiple choice questions. During the 2000s, however, educators found SBAs to be superior. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. In how many states does water exist on Earth? A. three matter states B. one matter state C. eight matter states D. two matter states Answer: