id (stringlengths 6–15) | question_type (stringclasses 1 value) | question (stringlengths 15–683) | choices (listlengths 4) | answer (stringclasses 5 values) | explanation (stringclasses 481 values) | prompt (stringlengths 1.75k–10.9k)
---|---|---|---|---|---|---
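The schema above describes one row per question with pipe-separated fields. The following sketch is illustrative only — the field names and constraints come from the header, but the validator and the record literal are hypothetical, not part of any published loader:

```python
# Minimal sketch: validate one record against the schema described above.
# Field names follow the table header; the record literal is illustrative.

def validate_record(rec: dict) -> None:
    assert 6 <= len(rec["id"]) <= 15                    # stringlengths 6-15
    assert rec["question_type"] == "multiple_choice"    # stringclasses 1 value
    assert 15 <= len(rec["question"]) <= 683            # stringlengths 15-683
    assert len(rec["choices"]) == 4                     # listlengths 4
    assert rec["answer"] in {"A", "B", "C", "D", "E"}   # stringclasses 5 values

record = {
    "id": "scienceQA-532",
    "question_type": "multiple_choice",
    "question": "Select the plant.",
    "choices": [
        "Wombats eat plants.",
        "Wolves eat animals.",
        "Dahlias can grow colorful flowers.",
        "Dung beetles walk and run.",
    ],
    "answer": "C",
}
validate_record(record)  # raises AssertionError on a malformed row
```

The answer letter indexes into `choices` (A → 0, B → 1, …), which is how the key can be resolved back to the answer text.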
scienceQA-532
|
multiple_choice
|
Select the plant.
|
[
"Wombats eat plants.",
"Wolves eat animals.",
"Dahlias can grow colorful flowers.",
"Dung beetles walk and run."
] |
C
|
A wombat is an animal. It eats plants.
Wombats have strong claws. They use their claws to dig tunnels called burrows.
A dung beetle is an animal. It walks and runs.
Dung beetles eat animal waste, which is called dung. They roll the dung into balls to store for later.
A dahlia is a plant. It can grow colorful flowers.
Dahlia plants grow in the wild in Central America. But people grow dahlias in gardens all over the world!
A wolf is an animal. It eats other animals.
Wolves often live in family groups. A wolf family group is called a pack.
|
Relevant Documents:
Document 0:::
What a Plant Knows is a popular science book by Daniel Chamovitz, originally published in 2012, discussing the sensory system of plants. A revised edition was published in 2017.
Release details / Editions / Publication
Hardcover edition, 2012
Paperback version, 2013
Revised edition, 2017
What a Plant Knows has been translated and published in a number of languages.
Document 1:::
Tolerance is the ability of plants to mitigate the negative fitness effects caused by herbivory. It is one of the general plant defense strategies against herbivores, the other being resistance, which is the ability of plants to prevent damage (Strauss and Agrawal 1999). Plant defense strategies play important roles in the survival of plants as they are fed upon by many different types of herbivores, especially insects, which may impose negative fitness effects (Strauss and Zangerl 2002). Damage can occur in almost any part of the plant, including the roots, stems, leaves, flowers and seeds (Strauss and Zangerl 2002). In response to herbivory, plants have evolved a wide variety of defense mechanisms and, although relatively less studied than resistance strategies, tolerance traits play a major role in plant defense (Strauss and Zangerl 2002, Rosenthal and Kotanen 1995).
Traits that confer tolerance are controlled genetically and therefore are heritable traits under selection (Strauss and Agrawal 1999). Many factors intrinsic to the plants, such as growth rate, storage capacity, photosynthetic rates and nutrient allocation and uptake, can affect the extent to which plants can tolerate damage (Rosenthal and Kotanen 1994). Extrinsic factors such as soil nutrition, carbon dioxide levels, light levels, water availability and competition also have an effect on tolerance (Rosenthal and Kotanen 1994).
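Tolerance is often quantified as the slope of the reaction norm of fitness plotted against damage level: the shallower the decline, the more tolerant the plant (cf. Strauss and Agrawal 1999). The sketch below uses entirely made-up data to illustrate the comparison:

```python
import statistics

# Hypothetical data: proportion of leaf area removed vs. seed set.
damage = [0.0, 0.1, 0.2, 0.4]
fitness_tolerant = [100, 98, 97, 95]    # shallow decline: high tolerance
fitness_intolerant = [100, 80, 62, 30]  # steep decline: low tolerance

def slope(x, y):
    """Least-squares slope of y regressed on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
           sum((xi - mx) ** 2 for xi in x)

print(round(slope(damage, fitness_tolerant), 1))    # -12.0
print(round(slope(damage, fitness_intolerant), 1))  # -173.7
```

A slope near zero would indicate complete compensation; a steeply negative slope indicates low tolerance.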
History of the study of plant tolerance
Studies of tolerance to herbivory have historically been the focus of agricultural scientists (Painter 1958; Bardner and Fletcher 1974). Tolerance was initially classified as a form of resistance (Painter 1958). Agricultural studies on tolerance, however, are mainly concerned with the compensatory effect on the plants' yield rather than their fitness, since it is of economic interest to reduce crop losses due to herbivory by pests (Trumble 1993; Bardner and Fletcher 1974). One surprising discovery made about plant tolerance was th
Document 2:::
Xenohormesis is a hypothesis positing that certain molecules, such as plant polyphenols that signal stress in the plants, can confer benefits on another organism (a heterotroph) that consumes them. In simpler terms, xenohormesis is interspecies hormesis. The expected benefits include improved lifespan and fitness, achieved by activating the animal's cellular stress response.
Responding to such molecules may be evolutionarily useful, as they give cues about the state of the environment. If the plants an animal is eating have increased polyphenol content, the plants are under stress, which may signal coming famine. Using these chemical cues, heterotrophs could preemptively prepare and defend themselves before conditions worsen. A possible example is resveratrol, famously found in red wine, which modulates over two dozen receptors and enzymes in mammals.
Xenohormesis could also explain several phenomena on the ethnopharmaceutical (traditional medicine) side of things, such as the case of cinnamon, which several studies have shown to help treat type 2 diabetes but which has not been confirmed in meta-analyses. The discrepancy could arise because the cinnamon used in one study differed from that used in another in its xenohormetic properties.
There are several explanations as to why this works. First and foremost, it could be a coincidence, especially in cases where partially toxic products cause a positive stress in the organism. The second is that it is a shared evolutionary attribute, as both animals and plants share a huge amount of homology between their pathways. The third is that there is evolutionary pressure to evolve to respond better to the molecules; this last explanation is proposed mainly by Howitz and his team.
There might also be the problem that our focus on maximizing crop output is losing many of the xenohormetic advantages. Although ideal conditions cause a plant to increase its crop output, it can also be argued that the plant is losing stress and therefore the hormesis. The honeybee colony colla
Document 3:::
Plants For A Future (PFAF) is an online not-for-profit resource for those interested in edible and useful plants, with a focus on temperate regions. Its name is wordplay on the phrase "plans for a future", and the organization's emphasis is on perennial plants.
PFAF is a registered educational charity with the following objectives:
The website contains an online database of over 8000 plants: 7000 that can be grown in temperate regions including in the UK, and 1000 plants for tropical situations.
The database was originally set up by Ken Fern to include 1,500 plants which he had grown on his 28-acre research site in the South West of England.
Since 2008, the database has been maintained by the database administrator employed by the Plants For A Future Charity.
The organization participates in public discussion by publishing books. Members have participated in various conferences and are also participants in the International Permaculture Research Project.
Publications
Fern, Ken. Plants for a Future: Edible and Useful Plants for a Healthier World. Hampshire: Permanent Publications, 1997. .
Edible Plants: An inspirational guide to choosing and growing unusual edible plants. 2012
Woodland Gardening: Designing a low-maintenance, sustainable edible woodland garden. 2013.
Edible Trees: A practical and inspirational guide from Plants For A Future on how to grow and harvest trees with edible and other useful produce. 2013.
Plantes Comestibles: Le guide pour vous inspirer à choisir et cultiver des plantes comestibles hors du commun. 2014.
Edible Perennials: 50 Top perennial plants from Plants For A Future. 2015.
Edible Shrubs: 70+ Top Shrubs from Plants For A Future
Plants for Your Food Forest: 500 Plants for Temperate Food Forests and Permaculture Gardens. 2021.
See also
Forest gardening
Postcode Plants Database
Document 4:::
In the words of Brahma, the Manu classifies plants as
(1) Osadhi – plants bearing abundant flowers and fruits, but withering away after fructification,
(2) Vanaspati – plants bearing fruits without evident flowers,
(3) Vrksa – trees bearing both flowers and fruits,
(4) Guccha – bushy herbs,
(5) Gulma – succulent shrubs,
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Select the plant.
A. Wombats eat plants.
B. Wolves eat animals.
C. Dahlias can grow colorful flowers.
D. Dung beetles walk and run.
Answer:
|
sciq-17
|
multiple_choice
|
What occurs when the immune system attacks a harmless substance that enters the body from the outside?
|
[
"nausea",
"allergy",
"panic attack",
"plague"
] |
B
|
Relevant Documents:
Document 0:::
Immunopathology is a branch of medicine that deals with immune responses associated with disease. It includes the study of the pathology of an organism, organ system, or disease with respect to the immune system, immunity, and immune responses. In biology, it refers to damage caused to an organism by its own immune response, as a result of an infection. It could be due to mismatch between pathogen and host species, and often occurs when an animal pathogen infects a human (e.g. avian flu leads to a cytokine storm which contributes to the increased mortality rate).
Types of Immunity
In all vertebrates, there are two different kinds of immunity: innate and adaptive immunity. Innate immunity is used to fight off non-changing antigens and is therefore considered nonspecific. It is usually a more immediate response than the adaptive immune system, typically responding within minutes to hours. It is composed of physical blockades such as the skin, but also contains nonspecific immune cells such as dendritic cells, macrophages, and basophils. The second form of immunity is adaptive immunity. This form of immunity requires recognition of the foreign antigen before a response is produced. Once the antigen is recognized, a specific response is produced in order to destroy that specific antigen. Because of its tailored response, adaptive immunity is considered to be specific immunity. A key part of adaptive immunity that separates it from innate immunity is the use of memory to combat the antigen in the future. When the antigen is first introduced, the organism does not have any receptors for it, so it must generate them on first exposure. The immune system then builds a memory of that antigen, which enables it to recognize the antigen more quickly in the future and to combat it more quickly and efficiently. The more the system is exposed to the antigen, the quicker it builds up its responsiveness. Nested within Adaptive immu
Document 1:::
An immune response is a physiological reaction which occurs within an organism in the context of inflammation for the purpose of defending against exogenous factors. These include a wide variety of different toxins, viruses, intra- and extracellular bacteria, protozoa, helminths, and fungi which could cause serious problems to the health of the host organism if not cleared from the body.
In addition, there are other forms of immune response. For example, harmless exogenous factors (such as pollen and food components) can trigger allergy; latex and metals are also known allergens.
A transplanted tissue (for example, blood) or organ can cause graft-versus-host disease. A type of immune reactivity known as Rh disease can be observed in pregnant women. These special forms of immune response are classified as hypersensitivity. Another special form of immune response is antitumor immunity.
In general, there are two branches of the immune response, the innate and the adaptive, which work together to protect against pathogens. Both branches engage humoral and cellular components.
The innate branch—the body's first reaction to an invader—is known to be a non-specific and quick response to any sort of pathogen. Components of the innate immune response include physical barriers like the skin and mucous membranes, immune cells such as neutrophils, macrophages, and monocytes, and soluble factors including cytokines and complement. On the other hand, the adaptive branch is the body's immune response tailored against specific antigens, and thus it takes longer to activate the components involved. The adaptive branch includes cells such as dendritic cells, T cells, and B cells, as well as antibodies—also known as immunoglobulins—which directly interact with antigen and are a very important component of a strong response against an invader.
The first contact that an organism has with a particular antigen will result in the production of effector T and B cells which are act
Document 2:::
The following outline is provided as an overview of and topical guide to immunology:
Immunology – study of all aspects of the immune system in all organisms. It deals with the physiological functioning of the immune system in states of both health and disease; malfunctions of the immune system in immunological disorders (autoimmune diseases, hypersensitivities, immune deficiency, transplant rejection); the physical, chemical and physiological characteristics of the components of the immune system in vitro, in situ, and in vivo.
Essence of immunology
Immunology
Branch of Biomedical science
Immune system
Immunity
Branches of immunology:
1. General Immunology
2. Basic Immunology
3. Advanced Immunology
4. Medical Immunology
5. Pharmaceutical Immunology
6. Clinical Immunology
7. Environmental Immunology
8. Cellular and Molecular Immunology
9. Food and Agricultural Immunology
Classical immunology
Clinical immunology
Computational immunology
Diagnostic immunology
Evolutionary immunology
Systems immunology
Immunomics
Immunoproteomics
Immunophysics
Immunochemistry
Ecoimmunology
Immunopathology
Nutritional immunology
Psychoneuroimmunology
Reproductive immunology
Circadian immunology
Immunotoxicology
Palaeoimmunology
Tissue-based immunology
Testicular immunology - Testes
Immunodermatology - Skin
Intravascular immunology - Blood
Osteoimmunology - Bone
Mucosal immunology - Mucosal surfaces
Respiratory tract antimicrobial defense system - Respiratory tract
Neuroimmunology - Neuroimmune system in the Central nervous system
Ocularimmunology - Ocular immune system in the Eye
Cancer immunology/Immunooncology - Tumors
History of immunology
History of immunology
Timeline of immunology
General immunological concepts
Immunity:
Immunity against:
Pathogens
Pathogenic bacteria
Viruses
Fungi
Protozoa
Parasites
Tumors
Allergens
Self-proteins
Autoimmunity
Alloimmunity
Cross-reactivity
Tolerance
Central tolerance
Peripheral tolerance
Document 3:::
The adaptive immune system, also known as the acquired immune system, or specific immune system is a subsystem of the immune system that is composed of specialized, systemic cells and processes that eliminate pathogens or prevent their growth. The acquired immune system is one of the two main immunity strategies found in vertebrates (the other being the innate immune system).
Like the innate system, the adaptive immune system includes both humoral immunity components and cell-mediated immunity components and destroys invading pathogens. Unlike the innate immune system, which is pre-programmed to react to common broad categories of pathogen, the adaptive immune system is highly specific to each particular pathogen the body has encountered.
Adaptive immunity creates immunological memory after an initial response to a specific pathogen, and leads to an enhanced response to future encounters with that pathogen. Antibodies are a critical part of the adaptive immune system. Adaptive immunity can provide long-lasting protection, sometimes for the person's entire lifetime. For example, someone who recovers from measles is now protected against measles for their lifetime; in other cases it does not provide lifetime protection, as with chickenpox. This process of adaptive immunity is the basis of vaccination.
The cells that carry out the adaptive immune response are white blood cells known as lymphocytes. B cells and T cells, two different types of lymphocytes, carry out the main activities: antibody responses, and cell-mediated immune response. In antibody responses, B cells are activated to secrete antibodies, which are proteins also known as immunoglobulins. Antibodies travel through the bloodstream and bind to the foreign antigen causing it to inactivate, which does not allow the antigen to bind to the host. Antigens are any substances that elicit the adaptive immune response. Sometimes the adaptive system is unable to distinguish harmful from harmless foreign molecule
Document 4:::
Biological response modifiers (BRMs) are substances that modify immune responses. They can be both endogenous (produced naturally within the body) and exogenous (as pharmaceutical drugs), and they can either enhance an immune response or suppress it. Some of these substances arouse the body's response to an infection, and others can keep the response from becoming excessive. Thus they serve as immunomodulators in immunotherapy (therapy that makes use of immune responses), which can be helpful in treating cancer (where targeted therapy often relies on the immune system being used to attack cancer cells) and in treating autoimmune diseases (in which the immune system attacks the self), such as some kinds of arthritis and dermatitis. Most BRMs are biopharmaceuticals (biologics), including monoclonal antibodies, interleukin 2, interferons, and various types of colony-stimulating factors (e.g., CSF, GM-CSF, G-CSF). "Immunotherapy makes use of BRMs to enhance the activity of the immune system to increase the body's natural defense mechanisms against cancer", whereas BRMs for rheumatoid arthritis aim to reduce inflammation.
Some of the effects of BRMs include nausea and vomiting, diarrhea, loss of appetite, fever and chills, muscle aches, weakness, skin rash, an increased tendency to bleed, or swelling. For example, patients with systemic lupus erythematosus (SLE) who are treated with standard of care, including biologic response modifiers, experience a higher risk of mortality and opportunistic infection compared to the general population.
Abciximab
Mechanism of action: A monoclonal antibody that binds to the glycoprotein receptor IIb/IIIa on activated platelets, preventing aggregation.
Clinical use: Acute coronary syndromes, percutaneous transluminal coronary angioplasty.
Toxicity: Bleeding, thrombocytopenia.
Anakinra (Kineret)
Mechanism of action: A recombinant version of the Interleukin 1 receptor antagonist.
Clinical use: Rheumatoid arthritis.
Toxicity: Allergic re
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What occurs when the immune system attacks a harmless substance that enters the body from the outside?
A. nausea
B. allergy
C. panic attack
D. plague
Answer:
|
|
sciq-11637
|
multiple_choice
|
Biochemical compounds that include sugars, starches, and cellulose are examples of what?
|
[
"proteins",
"carbohydrates",
"lipids",
"electrolytes"
] |
B
|
Relevant Documents:
Document 0:::
This is a list of articles that describe particular biomolecules or types of biomolecules.
A
For substances with an A- or α- prefix such as α-amylase, please see the parent page (in this case Amylase).
A23187 (Calcimycin, Calcium Ionophore)
Abamectine
Abietic acid
Acetic acid
Acetylcholine
Actin
Actinomycin D
Adenine
Adenosine
Adenosine diphosphate (ADP)
Adenosine monophosphate (AMP)
Adenosine triphosphate (ATP)
Adenylate cyclase
Adiponectin
Adonitol
Adrenaline, epinephrine
Adrenocorticotropic hormone (ACTH)
Aequorin
Aflatoxin
Agar
Alamethicin
Alanine
Albumins
Aldosterone
Aleurone
Alpha-amanitin
Alpha-MSH (Melanocyte-stimulating hormone)
Allantoin
Allethrin
α-Amanitin, see Alpha-amanitin
Amino acid
Amylase (also see α-amylase)
Anabolic steroid
Anandamide (ANA)
Androgen
Anethole
Angiotensinogen
Anisomycin
Antidiuretic hormone (ADH)
Anti-Müllerian hormone (AMH)
Arabinose
Arginine
Argonaute
Ascomycin
Ascorbic acid (vitamin C)
Asparagine
Aspartic acid
Asymmetric dimethylarginine
ATP synthase
Atrial-natriuretic peptide (ANP)
Auxin
Avidin
Azadirachtin A – C35H44O16
B
Bacteriocin
Beauvericin
beta-Hydroxy beta-methylbutyric acid
beta-Hydroxybutyric acid
Bicuculline
Bilirubin
Biopolymer
Biotin (Vitamin H)
Brefeldin A
Brassinolide
Brucine
Butyric acid
C
Document 1:::
The metabolome refers to the complete set of small-molecule chemicals found within a biological sample. The biological sample can be a cell, a cellular organelle, an organ, a tissue, a tissue extract, a biofluid or an entire organism. The small molecule chemicals found in a given metabolome may include both endogenous metabolites that are naturally produced by an organism (such as amino acids, organic acids, nucleic acids, fatty acids, amines, sugars, vitamins, co-factors, pigments, antibiotics, etc.) as well as exogenous chemicals (such as drugs, environmental contaminants, food additives, toxins and other xenobiotics) that are not naturally produced by an organism.
In other words, there is both an endogenous metabolome and an exogenous metabolome. The endogenous metabolome can be further subdivided to include a "primary" and a "secondary" metabolome (particularly when referring to plant or microbial metabolomes). A primary metabolite is directly involved in the normal growth, development, and reproduction. A secondary metabolite is not directly involved in those processes, but usually has important ecological function. Secondary metabolites may include pigments, antibiotics or waste products derived from partially metabolized xenobiotics. The study of the metabolome is called metabolomics.
Origins
The word metabolome appears to be a blending of the words "metabolite" and "chromosome". It was constructed to imply that metabolites are indirectly encoded by genes or act on genes and gene products. The term "metabolome" was first used in 1998 and was likely coined to match with existing biological terms referring to the complete set of genes (the genome), the complete set of proteins (the proteome) and the complete set of transcripts (the transcriptome). The first book on metabolomics was published in 2003. The first journal dedicated to metabolomics (titled simply "Metabolomics") was launched in 2005 and is currently edited by Prof. Roy Goodacre. Some of the m
Document 2:::
Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs, as well as organism structure and function. Biochemistry is closely related to molecular biology, which is the study of the molecular mechanisms of biological phenomena.
Much of biochemistry deals with the structures, bonding, functions, and interactions of biological macromolecules, such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers, with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles a
Document 3:::
Structure and nomenclature
Carbohydrates are generally divided into monosaccharides, oligosaccharides, and polysaccharides depending on the number of sugar subunits. Maltose, with two sugar units, is a disaccharide, which falls under oligosaccharides. Glucose is a hexose: a monosaccharide containing six carbon atoms. The two glucose units are in the pyranose form and are joined by an O-glycosidic bond, with the first carbon (C1) of the first glucose linked to the fourth carbon (C4) of the second glucose, indicated as (1→4). The link is characterized as α because the glycosidic bond to the anomeric carbon (C1) is in the opposite plane from the substituent in the same ring (C6 of the first glucose). If the glycosidic bond to the anomeric carbon (C1) were in the same plane as the substituent, it would be classified as a β(1→4) bond, and the resulting molecule would be cellobiose. The anomeric carbon (C1) of the second glucose molecule, which is not involved in a glycosidic bond, could be either an α- or β-anomer depending on the bond direction of the attached hydroxyl group relative to the substituent of the same ring, resulting in either α-
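The glycosidic bond described above forms by condensation: two glucose units join with the elimination of one water molecule, which fixes the molecular formula of the disaccharide. A quick element-counting check (the helper and constants are illustrative):

```python
from collections import Counter

GLUCOSE = Counter({"C": 6, "H": 12, "O": 6})  # C6H12O6
WATER = Counter({"H": 2, "O": 1})             # H2O

# Glycosidic bond formation is a condensation reaction:
# two monosaccharides join, eliminating one water molecule.
maltose = GLUCOSE + GLUCOSE
maltose.subtract(WATER)

print(dict(maltose))  # {'C': 12, 'H': 22, 'O': 11}, i.e. C12H22O11
```

The same arithmetic gives C12H22O11 for cellobiose as well, since the α vs. β linkage changes geometry, not composition.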
Document 4:::
Carbohydrate Structure Database (CSDB) is a free curated database and service platform in glycoinformatics, launched in 2005 by a group of Russian scientists from N.D. Zelinsky Institute of Organic Chemistry, Russian Academy of Sciences. CSDB stores published structural, taxonomical, bibliographic and NMR-spectroscopic data on natural carbohydrates and carbohydrate-related molecules.
Overview
The main data stored in CSDB are carbohydrate structures of bacterial, fungal, and plant origin. Each structure is assigned to an organism and is provided with the link(s) to the corresponding scientific publication(s), in which it was described. Apart from structural data, CSDB also stores NMR spectra, information on methods used to decipher a particular structure, and some other data.
CSDB provides access to several carbohydrate-related research tools:
Simulation of 1D and 2D NMR spectra of carbohydrates (GODDESS: glycan-oriented database-driven empirical spectrum simulation).
Automated NMR-based structure elucidation (GRASS: generation, ranking and assignment of saccharide structures).
Statistical analysis of structural feature distribution in glycomes of living organisms
Generation of optimized atomic coordinates for an arbitrary saccharide and subdatabase of conformation maps.
Taxon clustering based on similarities of glycomes (carbohydrate-based tree of life)
Glycosyltransferase subdatabase (GT-explorer)
History and funding
Until 2015, Bacterial Carbohydrate Structure Database (BCSDB) and Plant&Fungal Carbohydrate Structure Database (PFCSDB) databases existed in parallel. In 2015, they were joined into the single Carbohydrate Structure Database (CSDB). The development and maintenance of CSDB have been funded by International Science and Technology Center (2005-2007), Russian Federation President grant program (2005-2006), Russian Foundation for Basic Research (2005-2007,2012-2014,2015-2017,2018-2020), Deutsches Krebsforschungszentrum (short-term in 2006-2010),
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Biochemical compounds that include sugars, starches, and cellulose are examples of what?
A. proteins
B. carbohydrates
C. lipids
D. electrolytes
Answer:
|
|
sciq-7373
|
multiple_choice
|
What are electrons lost during the formation of ions called?
|
[
"catalysts",
"oxides",
"cations",
"isotopes"
] |
C
|
Relevant Documents:
Document 0:::
An ion () is an atom or molecule with a net electrical charge. The charge of an electron is considered to be negative by convention and this charge is equal and opposite to the charge of a proton, which is considered to be positive by convention. The net charge of an ion is not zero because its total number of electrons is unequal to its total number of protons.
A cation is a positively charged ion with fewer electrons than protons while an anion is a negatively charged ion with more electrons than protons. Opposite electric charges are pulled towards one another by electrostatic force, so cations and anions attract each other and readily form ionic compounds.
Ions consisting of only a single atom are termed atomic or monatomic ions, while two or more atoms form molecular ions or polyatomic ions. In the case of physical ionization in a fluid (gas or liquid), "ion pairs" are created by spontaneous molecule collisions, where each generated pair consists of a free electron and a positive ion. Ions are also created by chemical interactions, such as the dissolution of a salt in liquids, or by other means, such as passing a direct current through a conducting solution, dissolving an anode via ionization.
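The definitions above reduce to simple arithmetic: a species' net charge is its proton count minus its electron count, and the sign of that difference determines whether it is a cation, an anion, or neutral. A minimal sketch (the element examples are hard-coded for illustration):

```python
def classify(protons: int, electrons: int) -> str:
    """Classify a species by net charge (protons minus electrons)."""
    charge = protons - electrons
    if charge > 0:
        return "cation"   # fewer electrons than protons
    if charge < 0:
        return "anion"    # more electrons than protons
    return "neutral"

# Na+ : sodium (11 protons) that has lost one electron
print(classify(11, 10))  # cation
# Cl- : chlorine (17 protons) that has gained one electron
print(classify(17, 18))  # anion
# O   : a neutral oxygen atom
print(classify(8, 8))    # neutral
```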
History of discovery
The word ion was coined from the Greek neuter present participle of ienai, meaning "to go". A cation is something that moves down (Greek kato, meaning "down") and an anion is something that moves up (Greek ano, meaning "up"). They are so called because ions move toward the electrode of opposite charge. This term was introduced (after a suggestion by the English polymath William Whewell) by English physicist and chemist Michael Faraday in 1834 for the then-unknown species that goes from one electrode to the other through an aqueous medium. Faraday did not know the nature of these species, but he knew that metals dissolved into and entered a solution at one electrode and new metal came forth from a solution at the other electrode, so some kind of
Document 1:::
In physics, a charge carrier is a particle or quasiparticle that is free to move, carrying an electric charge, especially the particles that carry electric charges in electrical conductors. Examples are electrons, ions and holes. The term is used most commonly in solid state physics. In a conducting medium, an electric field can exert force on these free particles, causing a net motion of the particles through the medium; this is what constitutes an electric current.
The electron and the proton are the elementary charge carriers, each carrying one elementary charge (e), of the same magnitude and opposite sign.
In conductors
In conducting media, particles serve to carry charge:
In many metals, the charge carriers are electrons. One or two of the valence electrons from each atom are able to move about freely within the crystal structure of the metal. The free electrons are referred to as conduction electrons, and the cloud of free electrons is called a Fermi gas. Many metals have electron and hole bands. In some, the majority carriers are holes.
In electrolytes, such as salt water, the charge carriers are ions, which are atoms or molecules that have gained or lost electrons so they are electrically charged. Atoms that have gained electrons so they are negatively charged are called anions, atoms that have lost electrons so they are positively charged are called cations. Cations and anions of the dissociated liquid also serve as charge carriers in melted ionic solids (see e.g. the Hall–Héroult process for an example of electrolysis of a melted ionic solid). Proton conductors are electrolytic conductors employing positive hydrogen ions as carriers.
In a plasma, an electrically charged gas which is found in electric arcs through air, neon signs, and the sun and stars, the electrons and cations of ionized gas act as charge carriers.
In a vacuum, free electrons can act as charge carriers. In the electronic component known as the vacuum tube (also called valve), the mobil
Document 2:::
Secondary electrons are electrons generated as ionization products. They are called 'secondary' because they are generated by other radiation (the primary radiation). This radiation can be in the form of ions, electrons, or photons with sufficiently high energy, i.e. exceeding the ionization potential. Photoelectrons can be considered an example of secondary electrons where the primary radiation are photons; in some discussions photoelectrons with higher energy (>50 eV) are still considered "primary" while the electrons freed by the photoelectrons are "secondary".
Applications
Secondary electrons are also the main means of viewing images in the scanning electron microscope (SEM). The range of secondary electrons depends on the energy. Plotting the inelastic mean free path as a function of energy often shows characteristics of the "universal curve" familiar to electron spectroscopists and surface analysts. This distance is on the order of a few nanometers in metals and tens of nanometers in insulators. This small distance allows such fine resolution to be achieved in the SEM.
For SiO2, for a primary electron energy of 100 eV, the secondary electron range is up to 20 nm from the point of incidence.
See also
Delta ray
Everhart-Thornley detector
Document 3:::
Ionization (or ionisation) is the process by which an atom or a molecule acquires a negative or positive charge by gaining or losing electrons, often in conjunction with other chemical changes. The resulting electrically charged atom or molecule is called an ion. Ionization can result from the loss of an electron after collisions with subatomic particles, collisions with other atoms, molecules and ions, or through the interaction with electromagnetic radiation. Heterolytic bond cleavage and heterolytic substitution reactions can result in the formation of ion pairs. Ionization can occur through radioactive decay by the internal conversion process, in which an excited nucleus transfers its energy to one of the inner-shell electrons causing it to be ejected.
Uses
Everyday examples of gas ionization are such as within a fluorescent lamp or other electrical discharge lamps. It is also used in radiation detectors such as the Geiger-Müller counter or the ionization chamber. The ionization process is widely used in a variety of equipment in fundamental science (e.g., mass spectrometry) and in industry (e.g., radiation therapy). It is also widely used for air purification, though studies have shown harmful effects of this application.
Production of ions
Negatively charged ions are produced when a free electron collides with an atom and is subsequently trapped inside the electric potential barrier, releasing any excess energy. The process is known as electron capture ionization.
Positively charged ions are produced by transferring an amount of energy to a bound electron in a collision with charged particles (e.g. ions, electrons or positrons) or with photons. The threshold amount of the required energy is known as ionization potential. The study of such collisions is of fundamental importance with regard to the few-body problem, which is one of the major unsolved problems in physics. Kinematically complete experiments, i.e. experiments in which the complete momentum vect
Document 4:::
In physics and chemistry, ionization energy (IE) (American English spelling), ionisation energy (British English spelling) is the minimum energy required to remove the most loosely bound electron of an isolated gaseous atom, positive ion, or molecule. The first ionization energy is quantitatively expressed as
X(g) + energy ⟶ X+(g) + e−
where X is any atom or molecule, X+ is the resultant ion when the original atom was stripped of a single electron, and e− is the removed electron. Ionization energy is positive for neutral atoms, meaning that the ionization is an endothermic process. Roughly speaking, the closer the outermost electrons are to the nucleus of the atom, the higher the atom's ionization energy.
In physics, ionization energy is usually expressed in electronvolts (eV) or joules (J). In chemistry, it is expressed as the energy to ionize a mole of atoms or molecules, usually as kilojoules per mole (kJ/mol) or kilocalories per mole (kcal/mol).
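The per-atom (eV) and per-mole (kJ/mol) conventions differ only by a constant factor, e·N_A, built from the exact SI values of the elementary charge and the Avogadro constant. A minimal conversion sketch:

```python
# Convert an ionization energy between the physics unit (eV per atom)
# and the chemistry unit (kJ per mole). The factor e * N_A follows
# directly from the exact SI (2019) constants below.

E_CHARGE = 1.602176634e-19   # C (exact)
AVOGADRO = 6.02214076e23     # 1/mol (exact)
EV_TO_KJ_PER_MOL = E_CHARGE * AVOGADRO / 1000   # ~96.485

def ev_to_kj_per_mol(ev):
    return ev * EV_TO_KJ_PER_MOL

# Hydrogen's first ionization energy is about 13.6 eV:
print(f"{ev_to_kj_per_mol(13.598):.0f} kJ/mol")  # ~1312 kJ/mol
```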
Comparison of ionization energies of atoms in the periodic table reveals two periodic trends which follow the rules of Coulombic attraction:
Ionization energy generally increases from left to right within a given period (that is, row).
Ionization energy generally decreases from top to bottom in a given group (that is, column).
The latter trend results from the outer electron shell being progressively farther from the nucleus, with the addition of one inner shell per row as one moves down the column.
The nth ionization energy refers to the amount of energy required to remove the most loosely bound electron from the species having a positive charge of (n − 1). For example, the first three ionization energies are defined as follows:
1st ionization energy is the energy that enables the reaction X ⟶ X+ + e−
2nd ionization energy is the energy that enables the reaction X+ ⟶ X2+ + e−
3rd ionization energy is the energy that enables the reaction X2+ ⟶ X3+ + e−
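Because the nth ionization energies are defined for successive removals, the energy to fully strip an atom is simply their sum. A small sketch using approximate literature values for lithium (in eV), included only as an illustration:

```python
# Total energy to fully ionize an atom is the sum of its successive
# ionization energies. Values below are approximate literature figures
# for lithium, in eV (1st, 2nd, 3rd ionization energies).

LITHIUM_IE_EV = [5.39, 75.64, 122.45]

def total_ionization_energy(successive_ies):
    """Energy for X -> X^(n+) + n e^-, given the first n ionization energies."""
    return sum(successive_ies)

print(f"Li -> Li3+ costs about {total_ionization_energy(LITHIUM_IE_EV):.2f} eV")
```

Note how steeply the values rise: removing the second electron from Li+ breaks into a filled inner shell, costing far more than the first removal.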
The most notable influences that determine ionization ener
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are electrons lost during the formation of ions called?
A. catalysts
B. oxides
C. cations
D. isotopes
Answer:
|
|
sciq-1825
|
multiple_choice
|
Potassium is a soft, silvery metal that ignites explosively in what?
|
[
"air",
"acid",
"cold",
"water"
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
A nonmetal is a chemical element that mostly lacks metallic properties. Seventeen elements are generally considered nonmetals, though some authors recognize more or fewer depending on the properties considered most representative of metallic or nonmetallic character. Some borderline elements further complicate the situation.
Nonmetals tend to have low density and high electronegativity (the ability of an atom in a molecule to attract electrons to itself). They range from colorless gases like hydrogen to shiny solids like the graphite form of carbon. Nonmetals are often poor conductors of heat and electricity, and when solid tend to be brittle or crumbly. In contrast, metals are good conductors and most are pliable. While compounds of metals tend to be basic, those of nonmetals tend to be acidic.
The two lightest nonmetals, hydrogen and helium, together make up about 98% of the observable ordinary matter in the universe by mass. Five nonmetallic elements—hydrogen, carbon, nitrogen, oxygen, and silicon—make up the overwhelming majority of the Earth's crust, atmosphere, oceans and biosphere.
The distinct properties of nonmetallic elements allow for specific uses that metals often cannot achieve. Elements like hydrogen, oxygen, carbon, and nitrogen are essential building blocks for life itself. Moreover, nonmetallic elements are integral to industries such as electronics, energy storage, agriculture, and chemical production.
Most nonmetallic elements were not identified until the 18th and 19th centuries. While a distinction between metals and other minerals had existed since antiquity, a basic classification of chemical elements as metallic or nonmetallic emerged only in the late 18th century. Since then nigh on two dozen properties have been suggested as single criteria for distinguishing nonmetals from metals.
Definition and applicable elements
Properties mentioned hereafter refer to the elements in their most stable forms in ambient conditions unless otherwise
Document 2:::
Potassium nitrite (distinct from potassium nitrate) is the inorganic compound with the chemical formula KNO2. It is an ionic salt of potassium ions K+ and nitrite ions NO2−, which forms a white or slightly yellow, hygroscopic crystalline powder that is soluble in water.
It is a strong oxidizer and may accelerate the combustion of other materials. Like other nitrite salts such as sodium nitrite, potassium nitrite is toxic if swallowed, and laboratory tests suggest that it may be mutagenic or teratogenic. Gloves and safety glasses are usually used when handling potassium nitrite.
Discovery
Nitrite is present at trace levels in soil, natural waters, plant and animal tissues, and fertilizer. The pure form of nitrite was first made by the Swedish chemist Carl Wilhelm Scheele working in the laboratory of his pharmacy in the market town of Köping. He heated potassium nitrate at red heat for half an hour and obtained what he recognized as a new “salt.” The two compounds (potassium nitrate and nitrite) were characterized by Péligot and the reaction was established as:
2 KNO3 → 2 KNO2 + O2 (on heating)
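The stoichiometry of this thermal decomposition (two moles of KNO3 releasing one mole of O2) can be turned into a quick mass calculation. The sketch below uses rounded standard atomic weights and is purely illustrative:

```python
# Mass of O2 released when potassium nitrate decomposes via
# 2 KNO3 -> 2 KNO2 + O2. Molar masses are rounded standard values (g/mol).

M_K, M_N, M_O = 39.10, 14.01, 16.00
M_KNO3 = M_K + M_N + 3 * M_O   # ~101.11 g/mol
M_O2 = 2 * M_O                 # 32.00 g/mol

def o2_mass_from_kno3(mass_kno3_g):
    """2 mol KNO3 yield 1 mol O2."""
    mol_kno3 = mass_kno3_g / M_KNO3
    return (mol_kno3 / 2) * M_O2

print(f"{o2_mass_from_kno3(100.0):.2f} g O2 per 100 g KNO3")
```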
Production
Potassium nitrite can be obtained by the reduction of potassium nitrate. The production of potassium nitrite by absorption of nitrogen oxides in potassium hydroxide or potassium carbonate is not employed on a large scale because of the high price of these alkalies. Furthermore, the fact that potassium nitrite is highly soluble in water makes the solid difficult to recover.
Reactions
The mixing of cyanamide and KNO2 produces changes from white solids to yellow liquid and then to orange solid, forming cyanogen and ammonia gases. No external energy is used and the reactions are carried out with a small amount of O2.
Potassium nitrite forms potassium nitrate when heated in the presence of oxygen from 550 °C to 790 °C. The rate of reaction increases with temperature, but the extent of reaction decreases. At 550 °C and 600 °C the reaction is continuous and eventually goes to
Document 3:::
Major innovations in materials technology
BC
28,000 BC – People wear beads, bracelets, and pendants
14,500 BC – First pottery, made by the Jōmon people of Japan.
6th millennium BC – Copper metallurgy is invented and copper is used for ornamentation (see Pločnik article)
2nd millennium BC – Bronze is used for weapons and armor
16th century BC – The Hittites develop crude iron metallurgy
13th century BC – Invention of steel when iron and charcoal are combined properly
10th century BC – Glass production begins in ancient Near East
1st millennium BC – Pewter beginning to be used in China and Egypt
1000 BC – The Phoenicians introduce dyes made from the purple murex.
3rd century BC – Wootz steel, the first crucible steel, is invented in ancient India
50s BC – Glassblowing techniques flourish in Phoenicia
20s BC – Roman architect Vitruvius describes low-water-content method for mixing concrete
1st millennium
3rd century – Cast iron widely used in Han Dynasty China
300 – Greek alchemist Zosimos, summarizing the work of Egyptian alchemists, describes arsenic and lead acetate
4th century – Iron pillar of Delhi is the oldest surviving example of corrosion-resistant steel
8th century – Porcelain is invented in Tang Dynasty China
8th century – Tin-glazing of ceramics invented by Muslim chemists and potters in Basra, Iraq
9th century – Stonepaste ceramics invented in Iraq
900 – First systematic classification of chemical substances appears in the works attributed to Jābir ibn Ḥayyān (Latin: Geber) and in those of the Persian alchemist and physician Abū Bakr al-Rāzī ( 865–925, Latin: Rhazes)
900 – Synthesis of ammonium chloride from organic substances described in the works attributed to Jābir ibn Ḥayyān (Latin: Geber)
900 – Abū Bakr al-Rāzī describes the preparation of plaster of Paris and metallic antimony
9th century – Lustreware appears in Mesopotamia
2nd millennium
1000 – Gunpowder is developed in China
1340 – In Liège, Belgium, the first blast furnaces for the production
Document 4:::
Potassium nitrate is a chemical compound with a sharp, salty, bitter taste and the chemical formula KNO3. It is an ionic salt of potassium ions K+ and nitrate ions NO3−, and is therefore an alkali metal nitrate. It occurs in nature as a mineral, niter (or nitre in the UK). It is a source of nitrogen, and nitrogen was named after niter. Potassium nitrate is one of several nitrogen-containing compounds collectively referred to as saltpeter (or saltpetre in the UK).
Major uses of potassium nitrate are in fertilizers, tree stump removal, rocket propellants and fireworks. It is one of the major constituents of gunpowder (black powder). In processed meats, potassium nitrate reacts with hemoglobin and myoglobin generating a red color.
Etymology
Potash, or potassium nitrate, because of its early and global use and production, has many names. The chemical potassium was first isolated by the chemist Sir Humphry Davy, from pot ash. This refers to an early method of extracting various potassium salts: by placing in an iron pot, the ash of burnt wood or tree leaves, adding water, heating, and evaporating the solution.
As for nitrate, Hebrew and Egyptian words for it had the consonants n-t-r, indicating likely cognation in the Greek nitron, which was Latinised to nitrum or nitrium. Thence Old French had niter and Middle English nitre. By the 15th century, Europeans referred to it as saltpetre, specifically Indian saltpetre (sodium nitrate is chile saltpetre) and later as nitrate of potash, as the chemistry of the compound was more fully understood.
The Arabs called it "Chinese snow" ( ). It was called "Chinese salt" by the Iranians/Persians or "salt from Chinese salt marshes" ( ).
Historical production
From mineral sources
In Ancient India, saltpeter manufacturers formed the Nuniya & Labana caste. Saltpeter finds mention in Kautilya's Arthashastra (compiled 300BC - 300AD), which mentions using its poisonous smoke as a weapon of war, although its use for propulsion did not appea
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Potassium is a soft, silvery metal that ignites explosively in what?
A. air
B. acid
C. cold
D. water
Answer:
|
|
sciq-9867
|
multiple_choice
|
What part of an egg contains the genetic material ?
|
[
"the nucleus",
"here.the nucleus",
"the fetus",
"the sperm"
] |
A
|
Relevant Documents:
Document 0:::
A pronucleus (plural: pronuclei) denotes the nucleus found in either a sperm or egg cell during the process of fertilization. The sperm cell undergoes a transformation into a pronucleus after entering the egg cell but prior to the fusion of the genetic material of both the sperm and egg. In contrast, the egg cell possesses a pronucleus once it becomes haploid, not upon the arrival of the sperm cell. Haploid cells, such as sperm and egg cells in humans, carry half the number of chromosomes present in somatic cells, with 23 chromosomes compared to the 46 found in somatic cells. It is noteworthy that the male and female pronuclei do not physically merge, although their genetic material does. Instead, their membranes dissolve, eliminating any barriers between the male and female chromosomes, facilitating the combination of their chromosomes into a single diploid nucleus in the resulting embryo, which contains a complete set of 46 chromosomes.
The presence of two pronuclei serves as the initial indication of successful fertilization, often observed around 18 hours after insemination, or intracytoplasmic sperm injection (ICSI) during in vitro fertilization. At this stage, the zygote is termed a two-pronuclear zygote (2PN). Two-pronuclear zygotes transitioning through 1PN or 3PN states tend to yield poorer-quality embryos compared to those maintaining 2PN status throughout development, and this distinction may hold significance in the selection of embryos during in vitro fertilization (IVF) procedures.
History
The pronucleus was discovered the 1870s microscopically using staining techniques combined with microscopes with improved magnification levels. The pronucleus was originally found during the first studies on meiosis. Edouard Van Beneden published a paper in 1875 in which he first mentions the pronucleus by studying the eggs of rabbits and bats. He stated that the two pronuclei form together in the center of the cell to form the embryonic nucleus. Van Beneden also found t
Document 1:::
A conceptus (from Latin: concipere, to conceive) is an embryo and its appendages (adnexa), the associated membranes, placenta, and umbilical cord; the products of conception or, more broadly, "the product of conception at any point between fertilization and birth." The conceptus includes all structures that develop from the zygote, both embryonic and extraembryonic. It includes the embryo as well as the embryonic part of the placenta and its associated membranes: amnion, chorion (gestational sac), and yolk sac.
Document 2:::
The germinal epithelium is the epithelial layer of the seminiferous tubules of the testicles. It is also known as the wall of the seminiferous tubules. The cells in the epithelium are connected via tight junctions.
There are two types of cells in the germinal epithelium. The large Sertoli cells (which are not dividing) function as supportive cells to the developing sperm. The second cell type are the cells belonging to the spermatogenic cell lineage. These develop to eventually become sperm cells (spermatozoon). Typically, the spermatogenic cells will make four to eight layers in the germinal epithelium.
Document 3:::
Extranuclear inheritance or cytoplasmic inheritance is the transmission of genes that occur outside the nucleus. It is found in most eukaryotes and is commonly known to occur in cytoplasmic organelles such as mitochondria and chloroplasts or from cellular parasites like viruses or bacteria.
Organelles
Mitochondria are organelles which function to transform energy as a result of cellular respiration. Chloroplasts are organelles which function to produce sugars via photosynthesis in plants and algae. The genes located in mitochondria and chloroplasts are very important for proper cellular function. The mitochondrial DNA and other extranuclear types of DNA replicate independently of the DNA located in the nucleus, which is typically arranged in chromosomes that only replicate one time preceding cellular division. The extranuclear genomes of mitochondria and chloroplasts however replicate independently of cell division. They replicate in response to a cell's increasing energy needs which adjust during that cell's lifespan. Since they replicate independently, genomic recombination of these genomes is rarely found in offspring, contrary to nuclear genomes in which recombination is common.
Mitochondrial diseases are inherited from the mother, not from the father. Mitochondria with their mitochondrial DNA are already present in the egg cell before it gets fertilized by a sperm. In many cases of fertilization, the head of the sperm enters the egg cell, leaving its middle part, with its mitochondria, behind. The mitochondrial DNA of the sperm often remains outside the zygote and gets excluded from inheritance.
Parasites
Extranuclear transmission of viral genomes and symbiotic bacteria is also possible. An example of viral genome transmission is perinatal transmission. This occurs from mother to fetus during the perinatal period, which begins before birth and ends about 1 month after birth. During this time viral material may be passed from mother to child in the bloodst
Document 4:::
Oocyte selection is a procedure that is performed prior to in vitro fertilization, in order to use oocytes with maximal chances of resulting in pregnancy. In contrast, embryo selection takes place after fertilization.
Techniques
Chromosomal evaluation may be performed. Embryos from rescued in vitro-matured metaphase II (IVM-MII) oocytes show significantly higher fertilization rates and more blastomeres per embryo compared with those from arrested metaphase I (MI) oocytes (58.5% vs. 43.9% and 5.7 vs. 5.0, respectively).
Also, morphological features of the oocyte that can be obtained by standard light or polarized light microscopy. However, there is no clear tendency in recent publications to a general increase in predictive value of morphological features. Suggested techniques include zona pellucida imaging, which can detect differences in birefringence between eggs, which is a predictor of compaction, blastulation and pregnancy.
Potentially, polar body biopsy may be used for molecular analysis, and can be used for preimplantation genetic screening.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What part of an egg contains the genetic material ?
A. the nucleus
B. here.the nucleus
C. the fetus
D. the sperm
Answer:
|
|
sciq-5295
|
multiple_choice
|
The scientific practice of classifying organisms is also known as what?
|
[
"terminology",
"taxodermy",
"taxonomy",
"methodology"
] |
C
|
Relevant Documents:
Document 0:::
Vertebrate zoology is the biological discipline that consists of the study of vertebrate animals, i.e., animals with a backbone, such as fish, amphibians, reptiles, birds and mammals. Many natural history museums have departments named Vertebrate Zoology. In some cases whole museums bear this name, e.g. the Museum of Vertebrate Zoology at the University of California, Berkeley.
Subdivisions
This subdivision of zoology has many further subdivisions, including:
Ichthyology - the study of fishes.
Mammalogy - the study of mammals.
Chiropterology - the study of bats.
Primatology - the study of primates.
Ornithology - the study of birds.
Herpetology - the study of reptiles.
Batrachology - the study of amphibians.
These divisions are sometimes further divided into more specific specialties.
Document 1:::
Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology.
Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago.
Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad
Document 2:::
Form classification is the classification of organisms based on their morphology, which does not necessarily reflect their biological relationships. Form classification, generally restricted to palaeontology, reflects uncertainty; the goal of science is to move "form taxa" to biological taxa whose affinity is known.
Form taxonomy is restricted to fossils that preserve too few characters for a conclusive taxonomic definition or assessment of their biological affinity, but whose study is made easier if a binomial name is available by which to identify them. The term "form classification" is preferred to "form taxonomy"; taxonomy suggests that the classification implies a biological affinity, whereas form classification is about giving a name to a group of morphologically-similar organisms that may not be related.
A "parataxon" (not to be confused with parataxonomy), or "sciotaxon" (Gr. "shadow taxon"), is a classification based on incomplete data: for instance, the larval stage of an organism that cannot be matched up with an adult. It reflects a paucity of data that makes biological classification impossible. A sciotaxon is defined as a taxon thought to be equivalent to a true taxon (orthotaxon), but whose identity cannot be established because the two candidate taxa are preserved in different ways and thus cannot be compared directly.
Examples
In zoology
Form taxa are groupings that are based on common overall forms. Early attempts at classification of labyrinthodonts was based on skull shape (the heavily armoured skulls often being the only preserved part). The amount of convergent evolution in the many groups lead to a number of polyphyletic taxa. Such groups are united by a common mode of life, often one that is generalist, in consequence acquiring generally similar body shapes by convergent evolution. Ediacaran biota — whether they are the precursors of the Cambrian explosion of the fossil record, or are unrelated to any modern phylum — can currently on
Document 3:::
Animals are multicellular eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, are able to move, reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million in total. Animals range in size from 8.5 millionths of a metre to long and have complex interactions with each other and their environments, forming intricate food webs. The study of animals is called zoology.
Animals may be listed or indexed by many criteria, including taxonomy, status as endangered species, their geographical location, and their portrayal and/or naming in human culture.
By common name
List of animal names (male, female, young, and group)
By aspect
List of common household pests
List of animal sounds
List of animals by number of neurons
By domestication
List of domesticated animals
By eating behaviour
List of herbivorous animals
List of omnivores
List of carnivores
By endangered status
IUCN Red List endangered species (Animalia)
United States Fish and Wildlife Service list of endangered species
By extinction
List of extinct animals
List of extinct birds
List of extinct mammals
List of extinct cetaceans
List of extinct butterflies
By region
Lists of amphibians by region
Lists of birds by region
Lists of mammals by region
Lists of reptiles by region
By individual (real or fictional)
Real
Lists of snakes
List of individual cats
List of oldest cats
List of giant squids
List of individual elephants
List of historical horses
List of leading Thoroughbred racehorses
List of individual apes
List of individual bears
List of giant pandas
List of individual birds
List of individual bovines
List of individual cetaceans
List of individual dogs
List of oldest dogs
List of individual monkeys
List of individual pigs
List of w
Document 4:::
The following outline is provided as an overview of and topical guide to zoology:
Zoology – study of animals. Zoology, or "animal biology", is the branch of biology that relates to the animal kingdom, including the identification, structure, embryology, evolution, classification, habits, and distribution of all animals, both living and extinct, and how they interact with their ecosystems. The term is derived from Ancient Greek word ζῷον (zōon), i.e. "animal" and λόγος, (logos), i.e. "knowledge, study". To study the variety of animals that exist (or have existed), see list of animals by common name and lists of animals.
Essence of zoology
Animal
Fauna
Branches of zoology
Branches by group studied
Arthropodology - study of arthropods as a whole
Carcinology - the study of crustaceans
Myriapodology - study of milli- and centipedes
Arachnology - study of spiders and related animals such as scorpions, pseudoscorpions, and harvestmen, collectively called arachnids
Acarology - study of mites and ticks
Entomology - study of insects
Coleopterology - study of beetles
Lepidopterology - study of butterflies
Melittology - study of bees
Myrmecology - study of ants
Orthopterology - study of grasshoppers
Herpetology - study of amphibians and reptiles
Batrachology - study of amphibians including frogs and toads, salamanders, newts, and caecilians
Cheloniology - study of turtles and tortoises
Saurology - study of lizards
Serpentology - study of snakes
Ichthyology - study of fish
Malacology - study of mollusks
Conchology - study of shells
Teuthology - study of cephalopods
Mammalogy - study of mammals
Cetology - study of cetaceans
Primatology - study of primates
Ornithology - study of birds
Parasitology - study of parasites, their hosts, and the relationship between them
Helminthology - study of parasitic worms (helminths)
Planktology - study of plankton, various small drifting plants, animals and microorganisms that inhabit bodies of water
Protozoology
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The scientific practice of classifying organisms is also known as what?
A. terminology
B. taxodermy
C. taxonomy
D. methodology
Answer:
|
|
sciq-1768
|
multiple_choice
|
What are chemical messengers that control sexual development and reproduction?
|
[
"sex hormones",
"proteins",
"lipids",
"neurotransmitters"
] |
A
|
Relevant Documents:
Document 0:::
Reproductive biology includes both sexual and asexual reproduction.
Reproductive biology includes a wide number of fields:
Reproductive systems
Endocrinology
Sexual development (Puberty)
Sexual maturity
Reproduction
Fertility
Human reproductive biology
Endocrinology
Human reproductive biology is primarily controlled through hormones, which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands, and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands.
Reproductive systems
The reproductive system includes internal and external organs. There are two reproductive systems, the male and the female, which contain different organs from one another. These systems work together to produce offspring.
Female reproductive system
The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth.
These structures include:
Ovaries
Oviducts
Uterus
Vagina
Mammary Glands
Estrogen is one of the sex hormones that regulate the female reproductive system.
Male reproductive system
The male reproductive system includes testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia.
Testosterone, an androgen, although present in both males and females, is relatively more abundant in males. Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system. However, the enzyme aromatase is present in testes and capable of synthesizing estrogens from androgens. Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract.
Animal Reproductive Biology
Animal reproduction oc
Document 1:::
Prenatal Testosterone Transfer (also known as prenatal androgen transfer or prenatal hormone transfer) refers to the phenomenon in which testosterone synthesized by a developing male fetus transfers to one or more developing fetuses within the womb and influences development. This typically results in the partial masculinization of specific aspects of female behavior, cognition, and morphology, though some studies have found that testosterone transfer can cause an exaggerated masculinization in males. There is strong evidence supporting the occurrence of prenatal testosterone transfer in rodents and other litter-bearing species, such as pigs. When it comes to humans, studies comparing dizygotic opposite-sex and same-sex twins suggest the phenomenon may occur, though the results of these studies are often inconsistent.
Mechanisms of transfer
Testosterone is a steroid hormone; therefore it has the ability to diffuse through the amniotic fluid between fetuses. In addition, hormones can transfer among fetuses through the mother's bloodstream.
Consequences of testosterone transfer
During prenatal development, testosterone exposure is directly responsible for masculinizing the genitals and brain structures. This exposure leads to an increase in male-typical behavior.
Animal studies
Most animal studies are performed on rats or mice. In these studies, the amount of testosterone each individual fetus is exposed to depends on its intrauterine position (IUP). Each gestating fetus not at either end of the uterine horn is surrounded by either two males (2M), two females (0M), or one female and one male (1M). Development of the fetus varies widely according to its IUP.
Mice
In mice, prenatal testosterone transfer causes higher blood concentrations of testosterone in 2M females when compared to 1M or 0M females. This has a variety of consequences on later female behavior, physiology, and morphology.
Below is a table comparing physiological, morphological, and behavioral diffe
Document 2:::
The Vandenbergh effect is a phenomenon reported by J.G. Vandenbergh et al. in 1975, in which an early induction of the first estrous cycle in prepubertal female mice occurs as a result of exposure to the pheromone-laden urine of a sexually mature (dominant) male mouse.
Physiologically, the exposure to male urine induces the release of GnRH, which provokes the first estrus. The Vandenbergh effect has also been seen with exposure to adult female mice. When an immature female mouse is exposed to the urine of a mature female mouse, estrus is delayed in the prepubertal female. In this situation, GnRH is inhibited, which delays puberty in the juvenile female mouse.
The Vandenbergh effect is caused by pheromones found in a male's urine. The male does not have to be present for this effect to take place; the urine alone is sufficient. These pheromones are detected by the vomeronasal organ in the septum of the female's nose. This occurs because the female body will only take the step to begin puberty if there are available mates around. She will not waste energy on puberty if there is no possibility of finding a mate.
In addition to GnRH, exogenous estradiol has recently been implicated as having a role in the Vandenbergh effect. Utilizing tritium-labeled estradiol implanted in male mice, researchers have been able to trace the pathways the estradiol takes once transmitted to a female. The estradiol was found in a multitude of regions within the females and appeared to enter their circulation nasally and through the skin. Their findings suggested that some aspects of the Vandenbergh effect as well as the Bruce effect may be related to exogenous estradiol from males.
Additional studies have looked into the validity of estradiol's role in the Vandenbergh effect by means of exogenous estradiol placed in castrated rats. Castrated males were injected with either a control (oil) or estradiol in the oil vehicle. As expected, urinary androgens in the castrated males were below no
Document 3:::
Sex is influenced by water pollutants encountered in everyday life. The water sources involved range from a simple drinking fountain to the entirety of the oceans, and the pollutants within them range from endocrine disruptor chemicals (EDCs) in birth control to bisphenol A (BPA). Chemical pollutants that alter sex characteristics have been found in growing prevalence in the world's circulating waters. These pollutants affect not only humans but also the animals in contact with them.
Endocrine disruptor chemicals
Endocrine disruptor chemicals (EDCs) are chemicals that directly influence sex hormones. They are so named because they act as anti-estrogens and anti-androgens. By inhibiting the function of these hormones they decrease fertility, and an imbalance of such hormones has been shown to cause feminizing effects in males. This is not only a human issue but has become increasingly noticeable in fish populations worldwide. Scientists believe these chemicals in the water supply lead to increasing feminizing effects in male fish. Estrogens accumulate in body fat and tissue, and because of the food chain, artificial estrogens/EDCs bioaccumulate as they rise up its levels.
EDCs are present in the environment, whether naturally or artificially. Although the EDCs from birth control are obviously causing a great effect on the humans, it turns out that, in the United States, the estrogens given to livestock are even more prevalent.
Pollutants and their source of origin
Pharmaceuticals
Sex-altering pollutants come from many sources. One source that is becoming more visible is water pollution through pharmaceuticals. Pharmaceutical products may contain microscopic pollutants that imitate the chemical structure of hormones found in living organisms. These compounds are called Endocrine Disrupting Chemicals. They usual
Document 4:::
The following is a list of hormones found in Homo sapiens. Spelling is not uniform for many hormones. For example, current North American and international usage uses estrogen and gonadotropin, while British usage retains the Greek digraph in oestrogen and favours the earlier spelling gonadotrophin.
Hormone listing
Steroid
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are chemical messengers that control sexual development and reproduction?
A. sex hormones
B. proteins
C. lipids
D. neurotransmitters
Answer:
|
|
sciq-3469
|
multiple_choice
|
Ticks spread bacteria that cause what condition?
|
[
"Rabies",
"lyme disease",
"Dengue fever",
"Malaria"
] |
B
|
Relevant Documents:
Document 0:::
Disease is a decrease in the performance of an individual's normal functions and can be caused by many factors, not only infectious agents. A wildlife disease is a disease in which at least one host is a wildlife species. In many cases, wildlife hosts can act as a reservoir of diseases that spill over into domestic animals, people and other species. Many relationships must be considered when discussing wildlife disease; these are represented through the epidemiological triad model, which describes the relationship between a pathogen, a host and the environment. There are many routes by which a pathogen can infect a susceptible host, and once infected, that host has the potential to infect other hosts. Environmental factors affect pathogen persistence and spread through host movement and interactions with other species. An example to apply to the ecological triad is Lyme disease, where changes in the environment have changed the distribution of Lyme disease and its vector, the Ixodes tick.
Wildlife Disease Management
The challenges associated with wildlife disease management include environmental factors, the free movement of wildlife, and the effects of anthropogenic factors. Anthropogenic factors have driven significant changes in ecosystems and species distribution globally. These changes can be caused by the introduction of invasive species, habitat loss and fragmentation, and overall changes in the function of ecosystems. Because of the significant human-driven changes in the environment, there is a need for wildlife management, which manages the interactions between domestic animals, humans, and wildlife.
Wildlife species are freely moving within different areas, and come into contact with domestic animals, humans, and even invade new areas. These interactions can allow for disease transmission, and disease spillover into new populations. Disease spillover can beco
Document 1:::
Infectious diseases or ID, also known as infectiology, is a medical specialty dealing with the diagnosis and treatment of infections. An infectious diseases specialist's practice consists of managing nosocomial (healthcare-acquired) infections or community-acquired infections. An ID specialist investigates the cause of a disease to determine which bacteria, viruses, parasites, or fungi cause it. Once the pathogen is known, an ID specialist can then run various tests to determine the best antimicrobial drug to kill the pathogen and treat the disease. While infectious diseases have always been around, the infectious disease specialty did not exist until the late 1900s, after scientists and physicians in the 19th century paved the way with research on the sources of infectious disease and the development of vaccines.
Scope
Infectious diseases specialists typically serve as consultants to other physicians in cases of complex infections, and often manage patients with HIV/AIDS and other forms of immunodeficiency. Although many common infections are treated by physicians without formal expertise in infectious diseases, specialists may be consulted for cases where an infection is difficult to diagnose or manage. They may also be asked to help determine the cause of a fever of unknown origin.
Specialists in infectious diseases can practice both in hospitals (inpatient) and clinics (outpatient). In hospitals, specialists in infectious diseases help ensure the timely diagnosis and treatment of acute infections by recommending the appropriate diagnostic tests to identify the source of the infection and by recommending appropriate management such as prescribing antibiotics to treat bacterial infections. For certain types of infections, involvement of specialists in infectious diseases may improve patient outcomes. In clinics, specialists in infectious diseases can provide long-term care to patients with chronic infections such as HIV/AIDS.
History
Inf
Document 2:::
Horizontal transmission is the transmission of organisms between biotic and/or abiotic members of an ecosystem that are not in a parent-progeny relationship. This concept has been generalized to include transmissions of infectious agents, symbionts, and cultural traits between humans.
Because the evolutionary fate of the agent is not tied to the reproductive success of the host, horizontal transmission tends to favor the evolution of virulence. It is therefore a critical concept for evolutionary medicine.
Biological
Pathogen transmission
In biological, but not cultural, transmissions the carriers (also known as vectors) may include other species. The two main biological modes of transmission are anterior station and posterior station. In anterior station, transmission occurs via the bite of an infected organism (the vector), as in malaria, dengue fever, and bubonic plague. Posterior station is transmission via contact with infected feces. Examples are rickettsia-driven diseases (like typhus), which are contracted when a body louse's fecal material is scratched into the bloodstream. The vector is not necessarily another species, however; for example, a dog infected with rabies may infect another dog via anterior station transmission. Moreover, there are other modes of biological transmission, such as generalized bleeding in Ebola.
Symbiont transmission
Symbiosis describes a relationship in which at least two organisms are intimately integrated, such that one organism acts as a host and the other as the symbiont. Symbionts may be obligate (requiring the host for survival) or facultative (able to survive independently of the host). Symbionts can follow vertical, horizontal, or a mixed mode of transmission to their host. Horizontal, or lateral, transmission describes the acquisition of a facultative symbiont from the environment or from a nearby host.
The life cycle of the host includes both symbiotic and aposymbiotic phases. The aposymbiotic p
Document 3:::
Globalization, the flow of information, goods, capital, and people across political and geographic boundaries, allows infectious diseases to rapidly spread around the world, while also allowing the alleviation of factors such as hunger and poverty, which are key determinants of global health. The spread of diseases across wide geographic scales has increased through history. Early diseases that spread from Asia to Europe were bubonic plague, influenza of various types, and similar infectious diseases.
In the current era of globalization, the world is more interdependent than at any other time. Efficient and inexpensive transportation has left few places inaccessible, and increased global trade in agricultural products has brought more and more people into contact with animal diseases that have subsequently jumped species barriers (see zoonosis).
Globalization intensified during the Age of Exploration, but trading routes had long been established between Asia and Europe, along which diseases were also transmitted. An increase in travel has helped spread diseases to natives of lands who had not previously been exposed. When a native population is infected with a new disease, where they have not developed antibodies through generations of previous exposure, the new disease tends to run rampant within the population.
Etiology, the modern branch of science that deals with the causes of infectious disease, recognizes five major modes of disease transmission: airborne, waterborne, bloodborne, by direct contact, and through vector (insects or other creatures that carry germs from one species to another). As humans began traveling over seas and across lands which were previously isolated, research suggests that diseases have been spread by all five transmission modes.
Travel patterns and globalization
The Age of Exploration generally refers to the period between the 15th and 17th centuries. During this time, technological advances in shipbuilding and navigation made it e
Document 4:::
Host factor (sometimes known as risk factor) is a medical term referring to the traits of an individual person or animal that affect susceptibility to disease, especially in comparison to other individuals. The term arose in the context of infectious disease research, in contrast to "organism factors", such as the virulence and infectivity of a microbe. Host factors that may vary in a population and affect disease susceptibility can be innate or acquired.
Some examples:
general health
psychological characteristics and attitude
nutritional state
social ties
previous exposure to the organism or related antigens
haplotype or other specific genetic differences of immune function
substance abuse
race
The term is now used in oncology and many other medical contexts related to individual differences of disease vulnerability.
See also
Vulnerability index
Epidemiology
Immunology
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Ticks spread bacteria that cause what condition?
A. Rabies
B. lyme disease
C. Dengue fever
D. Malaria
Answer:
|
|
ai2_arc-246
|
multiple_choice
|
When it rains, some animals will ___.
|
[
"hibernate for the season",
"migrate to warmer climates",
"change their body covering",
"move to seek shelter"
] |
D
|
Relevant Documents:
Document 0:::
Animal science is described as "studying the biology of animals that are under the control of humankind". It can also be described as the production and management of farm animals. Historically, the degree was called animal husbandry and the animals studied were livestock species, like cattle, sheep, pigs, poultry, and horses. Today, courses available look at a broader area, including companion animals, like dogs and cats, and many exotic species. Degrees in Animal Science are offered at a number of colleges and universities. Animal science degrees are often offered at land-grant universities, which will often have on-campus farms to give students hands-on experience with livestock animals.
Education
Professional education in animal science prepares students for careers in areas such as animal breeding, food and fiber production, nutrition, animal agribusiness, animal behavior, and welfare. Courses in a typical Animal Science program may include genetics, microbiology, animal behavior, nutrition, physiology, and reproduction. Courses in support areas, such as genetics, soils, agricultural economics and marketing, legal aspects, and the environment also are offered.
Bachelor degree
At many universities, a Bachelor of Science (BS) degree in Animal Science allows emphasis in certain areas. Typical areas are species-specific or career-specific. Species-specific areas of emphasis prepare students for a career in dairy management, beef management, swine management, sheep or small ruminant management, poultry production, or the horse industry. Other career-specific areas of study include pre-veterinary medicine studies, livestock business and marketing, animal welfare and behavior, animal nutrition science, animal reproduction science, or genetics. Youth programs are also an important part of animal science programs.
Pre-veterinary emphasis
Many schools that offer a degree option in Animal Science also offer a pre-veterinary emphasis such as Iowa State University, th
Document 1:::
Animal migration is the relatively long-distance movement of individual animals, usually on a seasonal basis. It is the most common form of migration in ecology. It is found in all major animal groups, including birds, mammals, fish, reptiles, amphibians, insects, and crustaceans. The cause of migration may be local climate, local availability of food, the season of the year or for mating.
To be counted as a true migration, and not just a local dispersal or irruption, the movement of the animals should be an annual or seasonal occurrence, or a major habitat change as part of their life. An annual event could include Northern Hemisphere birds migrating south for the winter, or wildebeest migrating annually for seasonal grazing. A major habitat change could include young Atlantic salmon or sea lamprey leaving the river of their birth when they have reached a few inches in size. Some traditional forms of human migration fit this pattern.
Migrations can be studied using traditional identification tags such as bird rings, or tracked directly with electronic tracking devices.
Before animal migration was understood, folklore explanations were formulated for the appearance and disappearance of some species, such as that barnacle geese grew from goose barnacles.
Overview
Concepts
Migration can take very different forms in different species, and has a variety of causes.
As such, there is no simple accepted definition of migration. One of the most commonly used definitions, proposed by the zoologist J. S. Kennedy is
Migration encompasses four related concepts: persistent straight movement; relocation of an individual on a greater scale (in both space and time) than its normal daily activities; seasonal to-and-fro movement of a population between two areas; and movement leading to the redistribution of individuals within a population. Migration can be either obligate, meaning individuals must migrate, or facultative, meaning individuals can "choose" to migrate or not. Wi
Document 2:::
In nature and human societies, many phenomena have causal relationships where one phenomenon A (a cause) impacts another phenomenon B (an effect). Establishing causal relationships is the aim of many scientific studies across fields ranging from biology and physics to social sciences and economics. It is also a subject of accident analysis, and can be considered a prerequisite for effective policy making.
To describe causal relationships between phenomena, non-quantitative visual notations are common, such as arrows, e.g. in the nitrogen cycle or many chemistry and mathematics textbooks. Mathematical conventions are also used, such as plotting an independent variable on a horizontal axis and a dependent variable on a vertical axis, or the notation y = f(x) to denote that a quantity "y" is a dependent variable which is a function of an independent variable "x". Causal relationships are also described using quantitative mathematical expressions.
The following examples illustrate various types of causal relationships. These are followed by different notations used to represent causal relationships.
Examples
What follows does not necessarily assume the convention whereby x denotes an independent variable and y = f(x) denotes a function of the independent variable x. Instead, x and y denote two quantities with an a priori unknown causal relationship, which can be related by a mathematical expression.
Ecosystem example: correlation without causation
Imagine the number of days of weather below zero degrees Celsius, x, causes ice to form on a lake, y, and it causes bears to go into hibernation, z. Even though y does not cause z and vice-versa, one can write an equation relating y and z. This equation may be used to successfully calculate the number of hibernating bears z, given the surface area y of the lake covered by ice. However, melting the ice in a region of the lake by pouring salt onto it will not cause bears to come out of hibernation. Nor will waking the bears by physically disturbing the
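The lake example can be simulated in a few lines: ice cover and hibernating bears correlate strongly because both are driven by cold days, without either causing the other. All numbers below are invented for illustration.

```python
import random

random.seed(0)

# Hypothetical data: cold days drive both ice cover and hibernation.
cold_days = [random.randint(0, 120) for _ in range(200)]
ice_area = [0.5 * d + random.gauss(0, 3) for d in cold_days]  # caused by cold
bears = [0.1 * d + random.gauss(0, 1) for d in cold_days]     # also caused by cold

def corr(a, b):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# ice_area and bears are strongly correlated even though neither causes the other.
print(round(corr(ice_area, bears), 2))
```

Melting the ice (changing `ice_area` directly) would not change `bears`, because the equation relating them only holds while both are driven by the common cause.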
Document 3:::
Escape response, escape reaction, or escape behavior is a mechanism by which animals avoid potential predation. It consists of a rapid sequence of movements, or lack of movement, that position the animal in such a way that allows it to hide, freeze, or flee from the supposed predator. Often, an animal's escape response is representative of an instinctual defensive mechanism, though there is evidence that these escape responses may be learned or influenced by experience.
The classical escape response follows this generalized, conceptual timeline: threat detection, escape initiation, escape execution, and escape termination or conclusion. Threat detection alerts an animal to a potential predator or otherwise dangerous stimulus, which provokes escape initiation through neural reflexes or more coordinated cognitive processes. Escape execution refers to the movement or series of movements that will hide the animal from the threat or will allow the animal to flee. Once the animal has effectively avoided the predator or threat, the escape response is terminated. Upon completion of the escape behavior or response, the animal may integrate the experience with its memory, allowing it to learn and adapt its escape response.
Escape responses are anti-predator behaviour that can vary from species to species. The behaviors themselves differ depending upon the species, but may include camouflaging techniques, freezing, or some form of fleeing (jumping, flying, withdrawal, etc.). In fact, variation between individuals is linked to increased survival. In addition, it is not merely increased speed that contributes to the success of the escape response; other factors, including reaction time and the individual's context can play a role. The individual escape response of a particular animal can vary based on an animal's previous experiences and its current state.
Evolutionary importance
The ability to perform an effective escape maneuver directly affects the fitness of the
Document 4:::
Communication occurs when an animal produces a signal and uses it to influence the behaviour of another animal. A signal can be any behavioural, structural or physiological trait that has evolved specifically to carry information about the sender and/or the external environment and to stimulate the sensory system of the receiver to change their behaviour. A signal is different from a cue in that cues are informational traits that have not been selected for communication purposes. For example, if an alerted bird gives a warning call to a predator and causes the predator to give up the hunt, the bird is using the sound as a signal to communicate its awareness to the predator. On the other hand, if a rat forages in the leaves and makes a sound that attracts a predator, the sound itself is a cue and the interaction is not considered a communication attempt.
Air and water have different physical properties which lead to different velocity and clarity of the signal transmission process during communication. This means that common understanding of communication mechanisms and structures of terrestrial animals cannot be applied to aquatic animals. For example, a horse can sniff the air to detect pheromones but a fish which is surrounded by water will need a different method to detect chemicals.
Aquatic animals can communicate through various signal modalities including visual, auditory, tactile, chemical and electrical signals. Communication using any of these forms requires specialised signal producing and detecting organs. Thus, the structure, distribution and mechanism of these sensory systems vary amongst different classes and species of aquatic animals and they also differ greatly to those of terrestrial animals.
The basic functions of communication in aquatic animals are similar to those of terrestrial animals. In general, communication can be used to facilitate social recognition and aggregation, to locate, attract and evaluate mating partners and to engage in te
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
When it rains, some animals will ___.
A. hibernate for the season
B. migrate to warmer climates
C. change their body covering
D. move to seek shelter
Answer:
|
|
scienceQA-11867
|
multiple_choice
|
What do these two changes have in common?
chicken cooking in an oven
melting glass
|
[
"Both are caused by cooling.",
"Both are only physical changes.",
"Both are caused by heating.",
"Both are chemical changes."
] |
C
|
Step 1: Think about each change.
Cooking chicken is a chemical change. The heat causes the matter in the chicken to change. Cooked chicken and raw chicken are different types of matter.
Melting glass is a change of state. So, it is a physical change. The glass changes from solid to liquid. But a different type of matter is not formed.
Step 2: Look at each answer choice.
Both are only physical changes.
Melting glass is a physical change. But cooking chicken is not.
Both are chemical changes.
Cooking chicken is a chemical change. But melting glass is not.
Both are caused by heating.
Both changes are caused by heating.
Both are caused by cooling.
Neither change is caused by cooling.
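The two-step reasoning above can be sketched as a tiny classifier. The data structure and helper names here are illustrative, not from any real API: a chemical change forms a new type of matter, a physical change does not, and both changes here share heating as the cause.

```python
# Minimal sketch of the two-step reasoning; names and data are illustrative.
changes = {
    "chicken cooking in an oven": {"cause": "heating", "new_matter": True},
    "melting glass": {"cause": "heating", "new_matter": False},
}

def change_type(props):
    # Step 1: a chemical change forms a new type of matter; physical does not.
    return "chemical" if props["new_matter"] else "physical"

# Step 2: check what the two changes have in common.
assert {p["cause"] for p in changes.values()} == {"heating"}

print(change_type(changes["chicken cooking in an oven"]))  # chemical
print(change_type(changes["melting glass"]))               # physical
```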
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but cannot usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms, most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 2:::
Thermofluids is a branch of science and engineering encompassing four intersecting fields:
Heat transfer
Thermodynamics
Fluid mechanics
Combustion
The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids".
Heat transfer
Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer.
Sections include :
Energy transfer by heat, work and mass
Laws of thermodynamics
Entropy
Refrigeration Techniques
Properties and nature of pure substances
Applications
Engineering : Predicting and analysing the performance of machines
Thermodynamics
Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems.
Fluid mechanics
Fluid mechanics is the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance.
Sections include:
Flu
Document 3:::
Perfect thermal contact of the surface of a solid with the environment (convective heat transfer) or another solid occurs when the temperatures of the mating surfaces are equal.
Perfect thermal contact conditions
Perfect thermal contact supposes that on the boundary surface $S$ there holds an equality of the temperatures
$$T_1 = T_2$$
and an equality of heat fluxes
$$\lambda_1 \frac{\partial T_1}{\partial n} = \lambda_2 \frac{\partial T_2}{\partial n},$$
where $T_1, T_2$ are temperatures of the solid and environment (or mating solid), respectively; $\lambda_1, \lambda_2$ are thermal conductivity coefficients of the solid and mating laminar layer (or solid), respectively; $n$ is normal to the surface $S$.
If there is a heat source on the boundary surface $S$, e.g. caused by sliding friction, the latter equality transforms in the following manner:
$$\lambda_1 \frac{\partial T_1}{\partial n} - \lambda_2 \frac{\partial T_2}{\partial n} = q,$$
where $q$ is heat-generation rate per unit area.
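A consequence of perfect thermal contact for two plane slabs in series is that the interface temperature is fixed by flux continuity. A minimal sketch (hypothetical helper; steady state, no interface heat source assumed):

```python
def interface_temperature(k1, L1, T1, k2, L2, T2):
    """Interface temperature of two slabs in series under perfect thermal
    contact: flux continuity gives k1*(T1 - Ti)/L1 = k2*(Ti - T2)/L2."""
    a, b = k1 / L1, k2 / L2
    return (a * T1 + b * T2) / (a + b)

# Slab 1: k = 1.0 W/(m K), 0.1 m thick, hot face at 100 C.
# Slab 2: k = 0.5 W/(m K), 0.1 m thick, cold face at 0 C.
Ti = interface_temperature(1.0, 0.1, 100.0, 0.5, 0.1, 0.0)

# Flux continuity check on either side of the interface:
flux1 = 1.0 * (100.0 - Ti) / 0.1
flux2 = 0.5 * (Ti - 0.0) / 0.1
print(round(Ti, 2), round(flux1, 1) == round(flux2, 1))
```

The interface settles closer to the cold side's boundary the more conductive the first slab is relative to the second.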
Document 4:::
Heat transfer is a discipline of thermal engineering that concerns the generation, use, conversion, and exchange of thermal energy (heat) between physical systems. Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes. Engineers also consider the transfer of mass of differing chemical species (mass transfer in the form of advection), either cold or hot, to achieve heat transfer. While these mechanisms have distinct characteristics, they often occur simultaneously in the same system.
Heat conduction, also called diffusion, is the direct microscopic exchanges of kinetic energy of particles (such as molecules) or quasiparticles (such as lattice waves) through the boundary between two systems. When an object is at a different temperature from another body or its surroundings, heat flows so that the body and the surroundings reach the same temperature, at which point they are in thermal equilibrium. Such spontaneous heat transfer always occurs from a region of high temperature to another region of lower temperature, as described in the second law of thermodynamics.
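Steady-state conduction through a plane wall is quantified by Fourier's law. A minimal Python sketch (the function name and wall values are illustrative):

```python
def conduction_heat_rate(k, area, T_hot, T_cold, thickness):
    """Steady-state heat rate through a plane wall via Fourier's law:
    Q = k * A * (T_hot - T_cold) / L, in watts."""
    return k * area * (T_hot - T_cold) / thickness

# Brick wall: k = 0.8 W/(m K), 10 m^2 area, 20 C inside, 0 C outside,
# 0.2 m thick. Heat flows from the warm side to the cold side.
print(conduction_heat_rate(0.8, 10.0, 20.0, 0.0, 0.2))  # 800.0 W
```

The positive sign of the result reflects the second-law statement above: heat flows spontaneously from the higher-temperature region to the lower one.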
Heat convection occurs when the bulk flow of a fluid (gas or liquid) carries its heat through the fluid. All convective processes also move heat partly by diffusion, as well. The flow of fluid may be forced by external processes, or sometimes (in gravitational fields) by buoyancy forces caused when thermal energy expands the fluid (for example in a fire plume), thus influencing its own transfer. The latter process is often called "natural convection". The former process is often called "forced convection." In this case, the fluid is forced to flow by use of a pump, fan, or other mechanical means.
Thermal radiation occurs through a vacuum or any transparent medium (solid or fluid or gas). It is the transfer of energy by means of photons or electromagnetic waves governed by the same laws.
Overview
Heat
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do these two changes have in common?
chicken cooking in an oven
melting glass
A. Both are caused by cooling.
B. Both are only physical changes.
C. Both are caused by heating.
D. Both are chemical changes.
Answer:
|
sciq-2740
|
multiple_choice
|
The units of time, day and year are based on what?
|
[
"moon phases",
"gravitational waves",
"motions of the sun",
"motions of earth"
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature: increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
An astronomical constant is any of several physical constants used in astronomy. Formal sets of constants, along with recommended values, have been defined by the International Astronomical Union (IAU) several times: in 1964 and in 1976 (with an update in 1994). In 2009 the IAU adopted a new current set, and recognizing that new observations and techniques continuously provide better values for these constants, they decided to not fix these values, but have the Working Group on Numerical Standards continuously maintain a set of Current Best Estimates. The set of constants is widely reproduced in publications such as the Astronomical Almanac of the United States Naval Observatory and HM Nautical Almanac Office.
Besides the IAU list of units and constants, also the International Earth Rotation and Reference Systems Service defines constants relevant to the orientation and rotation of the Earth, in its technical notes.
The IAU system of constants defines a system of astronomical units for length, mass and time (in fact, several such systems), and also includes constants such as the speed of light and the constant of gravitation which allow transformations between astronomical units and SI units. Slightly different values for the constants are obtained depending on the frame of reference used. Values quoted in barycentric dynamical time (TDB) or equivalent time scales such as the Teph of the Jet Propulsion Laboratory ephemerides represent the mean values that would be measured by an observer on the Earth's surface (strictly, on the surface of the geoid) over a long period of time. The IAU also recommends values in SI units, which are the values which would be measured (in proper length and proper time) by an observer at the barycentre of the Solar System: these are obtained by the following transformations:
Astronomical system of units
The astronomical unit of time is a time interval of one day (D) of 86400 seconds. The astronomical unit of mass is the mass of the
Document 2:::
This is a list of topics that are included in high school physics curricula or textbooks.
Mathematical Background
SI Units
Scalar (physics)
Euclidean vector
Motion graphs and derivatives
Pythagorean theorem
Trigonometry
Motion and forces
Motion
Force
Linear motion
Linear motion
Displacement
Speed
Velocity
Acceleration
Center of mass
Mass
Momentum
Newton's laws of motion
Work (physics)
Free body diagram
Rotational motion
Angular momentum (Introduction)
Angular velocity
Centrifugal force
Centripetal force
Circular motion
Tangential velocity
Torque
Conservation of energy and momentum
Energy
Conservation of energy
Elastic collision
Inelastic collision
Inertia
Moment of inertia
Momentum
Kinetic energy
Potential energy
Rotational energy
Electricity and magnetism
Ampère's circuital law
Capacitor
Coulomb's law
Diode
Direct current
Electric charge
Electric current
Alternating current
Electric field
Electric potential energy
Electron
Faraday's law of induction
Ion
Inductor
Joule heating
Lenz's law
Magnetic field
Ohm's law
Resistor
Transistor
Transformer
Voltage
Heat
Entropy
First law of thermodynamics
Heat
Heat transfer
Second law of thermodynamics
Temperature
Thermal energy
Thermodynamic cycle
Volume (thermodynamics)
Work (thermodynamics)
Waves
Wave
Longitudinal wave
Transverse waves
Transverse wave
Standing Waves
Wavelength
Frequency
Light
Light ray
Speed of light
Sound
Speed of sound
Radio waves
Harmonic oscillator
Hooke's law
Reflection
Refraction
Snell's law
Refractive index
Total internal reflection
Diffraction
Interference (wave propagation)
Polarization (waves)
Vibrating string
Doppler effect
Gravity
Gravitational potential
Newton's law of universal gravitation
Newtonian constant of gravitation
See also
Outline of physics
Physics education
Document 3:::
Chronometry (from Greek χρόνος chronos, "time" and μέτρον metron, "measure") is the science of the measurement of time, or timekeeping. Chronometry provides a standard of measurement for time, and therefore serves as a significant reference for many and various fields of science.
The importance of the accuracy and reliability of measuring time provides a standardized unit for chronometric experiments for the modern world, and more specifically scientific research. Despite the coincidental identicality of worldwide units of time, time produces a measurement of change and is a variable in many experiments. Therefore, time is an essential part of many areas of science.
It should not be confused with chronology, the science of locating events in time, which often relies upon it. Also, of similarity to chronometry is horology, the study of time; however, it is commonly used specifically with reference to the mechanical instruments created to keep time, with examples such as stopwatches, clocks, and hourglasses. Chronometry is utilised in many areas, and its fields are often derived from aspects of other areas in science, for example geochronometry, combining geology and chronometry.
Early records of time keeping are thought to have originated in the Paleolithic era, with etchings to mark the passing of moons in order to measure the year. And then progressed to written versions of calendars, before mechanisms and devices made to track time were invented. Today, the highest level of precision in timekeeping comes with atomic clocks, which are used for the international standard of the second.
Etymology
Chronometry is derived from two root words, chronos and metron (χρόνος and μέτρον in Ancient Greek respectively), with rough meanings of "time" and "measure". The combination of the two is taken to mean time measuring.
In the Ancient Greek lexicon, meanings and translations differ depending on the source. Chronos, used in relation to time when in definite periods, and
Document 4:::
Advanced Placement (AP) Physics C: Mechanics (also known as AP Mechanics) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a one-semester calculus-based university course in mechanics. The content of Physics C: Mechanics overlaps with that of AP Physics 1, but Physics 1 is algebra-based, while Physics C is calculus-based. Physics C: Mechanics may be combined with its electricity and magnetism counterpart to form a year-long course that prepares for both exams.
Course content
Intended to be equivalent to an introductory college course in mechanics for physics or engineering majors, the course modules are:
Kinematics
Newton's laws of motion
Work, energy and power
Systems of particles and linear momentum
Circular motion and rotation
Oscillations and gravitation.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a Calculus I class.
This course is often compared to AP Physics 1: Algebra Based for its similar course material involving kinematics, work, motion, forces, rotation, and oscillations. However, AP Physics 1: Algebra Based lacks concepts found in Calculus I, like derivatives or integrals.
This course may be combined with AP Physics C: Electricity and Magnetism to make a unified Physics C course that prepares for both exams.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Mechanics is separate from the AP examination for AP Physics C: Electricity and Magnetism. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday aftern
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The units of time, day and year are based on what?
A. moon phases
B. gravitational waves
C. motions of the sun
D. motions of earth
Answer:
|
|
sciq-8338
|
multiple_choice
|
What is the term for matter that does not let any light pass through?
|
[
"mirrored",
"dark",
"opaque",
"reflective"
] |
C
|
Relevant Documents:
Document 0:::
Invisibility is the state of an object that cannot be seen. An object in this state is said to be invisible (literally, "not visible"). The phenomenon is studied by physics and perceptual psychology.
Since objects can be seen by light in the visible spectrum from a source reflecting off their surfaces and hitting the viewer's eye, the most natural form of invisibility (whether real or fictional) is an object that neither reflects nor absorbs light (that is, it allows light to pass through it). This is known as transparency, and is seen in many naturally occurring materials (although no naturally occurring material is 100% transparent).
Invisibility perception depends on several optical and visual factors. For example, invisibility depends on the eyes of the observer and/or the instruments used. Thus an object can be classified as "invisible to" a person, animal, instrument, etc. In research on sensorial perception it has been shown that invisibility is perceived in cycles.
Invisibility is often considered to be the supreme form of camouflage, as it does not reveal to the viewer any kind of vital signs, visual effects, or any frequencies of the electromagnetic spectrum detectable to the human eye, instead making use of radio, infrared or ultraviolet wavelengths.
In illusion optics, invisibility is a special case of illusion effects: the illusion of free space.
The term is often used in fantasy and science fiction, where objects cannot be seen by means of magic or hypothetical technology.
Practical efforts
Technology can be used theoretically or practically to render real-world objects invisible.
Making use of a real-time image displayed on a wearable display, it is possible to create a see-through effect. This is known as active camouflage. Though stealth technology is declared to be invisible to radar, all officially disclosed applications of the technology can only reduce the size and/or clarity of the signature detected by radar.
In 2003 the Chilean s
Document 1:::
Total external reflection is a phenomenon traditionally involving X-rays, but in principle any type of electromagnetic or other wave, closely related to total internal reflection.
Total internal reflection describes the fact that radiation (e.g. visible light) can, at certain angles, be totally reflected from an interface between two media of different indices of refraction (see Snell's law). Total internal reflection occurs when the first medium has a larger refractive index than the second medium, for example, light that starts in water and bounces off the water-to-air interface.
Total external reflection is the situation where the light starts in air and vacuum (refractive index 1), and bounces off a material with index of refraction less than 1. For example, in X-rays, the refractive index is frequently slightly less than 1, and therefore total external reflection can happen at a glancing angle. It is called external because the light bounces off the exterior of the material. This makes it possible to focus X-rays.
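The angle condition behind total internal reflection follows from Snell's law: transmission is impossible once sin(θ) exceeds n2/n1. A minimal sketch (hypothetical helper) computing the critical angle:

```python
import math

def critical_angle_deg(n1, n2):
    """Critical angle for total internal reflection when light travels
    from medium n1 into medium n2 (requires n1 > n2): sin(theta_c) = n2/n1."""
    if n1 <= n2:
        raise ValueError("total internal reflection needs n1 > n2")
    return math.degrees(math.asin(n2 / n1))

# Glass (n ~ 1.5) to air (n = 1): beyond about 41.8 degrees of incidence,
# all light is reflected back into the glass.
print(round(critical_angle_deg(1.5, 1.0), 1))  # 41.8
```

For X-ray total external reflection the same relation applies with the roles reversed: since the material's index is slightly below 1, reflection occurs only at glancing incidence.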
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature: increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
A physical property is any property that is measurable, involved in the physical system, intensity on the object's state and behavior. The changes in the physical properties of a system can be used to describe its changes between momentary states. A quantifiable physical property is called physical quantity. Measurable physical quantities are often referred to as observables.
Some physical properties are qualitative, such as shininess, brittleness, etc.; some general qualitative properties admit more specific related quantitative properties, such as in opacity, hardness, ductility, viscosity, etc.
Physical properties are often characterized as intensive and extensive properties. An intensive property does not depend on the size or extent of the system, nor on the amount of matter in the object, while an extensive property shows an additive relationship. These classifications are in general only valid in cases when smaller subdivisions of the sample do not interact in some physical or chemical process when combined.
Properties may also be classified with respect to the directionality of their nature. For example, isotropic properties do not change with the direction of observation, and anisotropic properties do have spatial variance.
It may be difficult to determine whether a given property is a material property or not. Color, for example, can be seen and measured; however, what one perceives as color is really an interpretation of the reflective properties of a surface and the light used to illuminate it. In this sense, many ostensibly physical properties are called supervenient. A supervenient property is one which is actual, but is secondary to some underlying reality. This is similar to the way in which objects are supervenient on atomic structure. A cup might have the physical properties of mass, shape, color, temperature, etc., but these properties are supervenient on the underlying atomic structure, which may in turn be supervenient on an underlying quan
Document 4:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of
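A defining property of a knowledge space is that the family of knowledge states contains the empty set and the full domain and is closed under union. A minimal Python sketch (toy domain and function name are illustrative) checking these properties:

```python
from itertools import combinations

def is_knowledge_space(domain, states):
    """Check the defining properties of a knowledge space: the family of
    knowledge states contains the empty set and the full domain, and is
    closed under union."""
    fam = {frozenset(s) for s in states}
    if frozenset() not in fam or frozenset(domain) not in fam:
        return False
    return all((a | b) in fam for a, b in combinations(fam, 2))

# Toy domain of three skills where skill 'b' has 'a' as a prerequisite,
# so no state contains 'b' without 'a'.
states = [set(), {"a"}, {"c"}, {"a", "b"}, {"a", "c"}, {"a", "b", "c"}]
print(is_knowledge_space({"a", "b", "c"}, states))  # True
```

Dropping, say, {"a", "c"} from the family breaks union-closure (since {"a"} ∪ {"c"} would have no matching state), so the check returns False.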
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for matter that does not let any light pass through?
A. mirrored
B. dark
C. opaque
D. reflective
Answer:
|
|
sciq-154
|
multiple_choice
|
What is the rigid layer that is found outside the cell membrane and surrounds the cell?
|
[
"cell wall",
"cell root",
"cell barrier",
"cell shield"
] |
A
|
Relevant Documents:
Document 0:::
A laminar organization describes the way certain tissues, such as bone membrane, skin, or brain tissues, are arranged in layers.
Types
Embryo
The earliest forms of laminar organization are shown in the diploblastic and triploblastic formation of the germ layers in the embryo. In the first week of human embryogenesis two layers of cells have formed, an external epiblast layer (the primitive ectoderm), and an internal hypoblast layer (primitive endoderm). This gives the early bilaminar disc. In the third week in the stage of gastrulation epiblast cells invaginate to form endoderm, and a third layer of cells known as mesoderm. Cells that remain in the epiblast become ectoderm. This is the trilaminar disc and the epiblast cells have given rise to the three germ layers.
Brain
In the brain a laminar organization is evident in the arrangement of the three meninges, the membranes that cover the brain and spinal cord. These membranes are the dura mater, arachnoid mater, and pia mater. The dura mater has two layers a periosteal layer near to the bone of the skull, and a meningeal layer next to the other meninges.
The cerebral cortex, the outer neural sheet covering the cerebral hemispheres can be described by its laminar organization, due to the arrangement of cortical neurons into six distinct layers.
Eye
The eye in mammals has an extensive laminar organization. There are three main layers – the outer fibrous tunic, the middle uvea, and the inner retina. These layers have sublayers with the retina having ten ranging from the outer choroid to the inner vitreous humor and including the retinal nerve fiber layer.
Skin
The human skin has a dense laminar organization. The outer epidermis has four or five layers.
Document 1:::
The elements composing the layer of rods and cones (Jacob's membrane) in the retina of the eye are of two kinds, rod cells and cone cells, the former being much more numerous than the latter except in the macula lutea.
Jacob's membrane is named after Irish ophthalmologist Arthur Jacob, who was the first to describe this nervous layer of the retina.
Document 2:::
The basal lamina is a layer of extracellular matrix secreted by the epithelial cells, on which the epithelium sits. It is often incorrectly referred to as the basement membrane, though it does constitute a portion of the basement membrane. The basal lamina is visible only with the electron microscope, where it appears as an electron-dense layer that is 20–100 nm thick (with some exceptions that are thicker, such as basal lamina in lung alveoli and renal glomeruli).
Structure
The layers of the basal lamina ("BL") and those of the basement membrane ("BM") are described below:
Anchoring fibrils composed of type VII collagen extend from the basal lamina into the underlying reticular lamina and loop around collagen bundles. Although found beneath all basal laminae, they are especially numerous in stratified squamous cells of the skin.
These layers should not be confused with the lamina propria, which is found outside the basal lamina.
Basement membrane
The basement membrane is visible under light microscopy. Electron microscopy shows that the basement membrane consists of three layers: the lamina lucida (electron-lucent), lamina densa (electron-dense), and lamina fibro-reticularis (electron-lucent).
The lamina densa was formerly called the “basal lamina”. The terms “basal lamina” and “basement membrane” were often used interchangeably, until it was realised that all three layers seen with the electron microscope constituted the single layer seen with the light microscope. This has led to considerable terminological confusion; if used, the term “basal lamina” should be confined to its meaning as lamina densa.
Some theorize that the lamina lucida is an artifact created when preparing the tissue, and that the lamina lucida is therefore equal to the lamina densa in vivo.
The term "basal lamina" is usually used with electron microscopy, while the term "basement membrane" is usually used with light microscopy.
Examples of basement membranes include:
Basilar membrane
Document 3:::
Basal cell may refer to:
the epidermal cell in the stratum basale
the airway basal cell, an epithelial cell in the respiratory epithelium
Document 4:::
External lamina is a structure similar to basal lamina that surrounds the sarcolemma of muscle cells. It is secreted by myocytes and consists primarily of Collagen type IV, laminin and perlecan (heparan sulfate proteoglycan). Nerve cells, including perineurial cells and Schwann cells also have an external lamina-like protective coating.
Adipocytes also have an external lamina.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the rigid layer that is found outside the cell membrane and surrounds the cell?
A. cell wall
B. cell root
C. cell barrier
D. cell shield
Answer:
|
|
sciq-2429
|
multiple_choice
|
The largest phylum in the animal kingdom, arthropod, is primarily comprised of what?
|
[
"insects",
"mammals",
"amphibians",
"reptiles"
] |
A
|
Relevant Documents:
Document 0:::
Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology.
Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago.
Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad
Document 1:::
History of Animals (, Ton peri ta zoia historion, "Inquiries on Animals"; , "History of Animals") is one of the major texts on biology by the ancient Greek philosopher Aristotle, who had studied at Plato's Academy in Athens. It was written in the fourth century BC; Aristotle died in 322 BC.
Generally seen as a pioneering work of zoology, Aristotle frames his text by explaining that he is investigating the what (the existing facts about animals) prior to establishing the why (the causes of these characteristics). The book is thus an attempt to apply philosophy to part of the natural world. Throughout the work, Aristotle seeks to identify differences, both between individuals and between groups. A group is established when it is seen that all members have the same set of distinguishing features; for example, that all birds have feathers, wings, and beaks. This relationship between the birds and their features is recognized as a universal.
The History of Animals contains many accurate eye-witness observations, in particular of the marine biology around the island of Lesbos, such as that the octopus had colour-changing abilities and a sperm-transferring tentacle, that the young of a dogfish grow inside their mother's body, or that the male of a river catfish guards the eggs after the female has left. Some of these were long considered fanciful before being rediscovered in the nineteenth century. Aristotle has been accused of making errors, but some are due to misinterpretation of his text, and others may have been based on genuine observation. He did however make somewhat uncritical use of evidence from other people, such as travellers and beekeepers.
The History of Animals had a powerful influence on zoology for some two thousand years. It continued to be a primary source of knowledge until zoologists in the sixteenth century, such as Conrad Gessner, all influenced by Aristotle, wrote their own studies of the subject.
Context
Aristotle (384–322 BC) studied at Plat
Document 2:::
Animals are multicellular eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, are able to move, reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Over 1.5 million living animal species have been described—of which around 1 million are insects—but it has been estimated there are over 7 million in total. Animals range in size from 8.5 millionths of a metre to long and have complex interactions with each other and their environments, forming intricate food webs. The study of animals is called zoology.
Animals may be listed or indexed by many criteria, including taxonomy, status as endangered species, their geographical location, and their portrayal and/or naming in human culture.
By common name
List of animal names (male, female, young, and group)
By aspect
List of common household pests
List of animal sounds
List of animals by number of neurons
By domestication
List of domesticated animals
By eating behaviour
List of herbivorous animals
List of omnivores
List of carnivores
By endangered status
IUCN Red List endangered species (Animalia)
United States Fish and Wildlife Service list of endangered species
By extinction
List of extinct animals
List of extinct birds
List of extinct mammals
List of extinct cetaceans
List of extinct butterflies
By region
Lists of amphibians by region
Lists of birds by region
Lists of mammals by region
Lists of reptiles by region
By individual (real or fictional)
Real
Lists of snakes
List of individual cats
List of oldest cats
List of giant squids
List of individual elephants
List of historical horses
List of leading Thoroughbred racehorses
List of individual apes
List of individual bears
List of giant pandas
List of individual birds
List of individual bovines
List of individual cetaceans
List of individual dogs
List of oldest dogs
List of individual monkeys
List of individual pigs
List of w
Document 3:::
Taxonomy of commonly fossilised invertebrates is a complex and evolving field that combines both traditional and modern paleozoological terminology. This article aims to provide a comprehensive overview of the various invertebrate taxa that are commonly found in the fossil record, from protists to arthropods. The taxonomy presented here is not intended to be exhaustive but focuses on invertebrates that are either popularly collected as fossils or are extinct. Special notations are used to highlight invertebrate groups that are important as fossils, very abundant in the fossil record, or have a large proportion of extinct species. These notations are explained below for clarity:
[ ! ]: Indicates clades that are important as fossils or very abundant in the fossil record.
[ – ]: Indicates clades that contain a large proportion of extinct species.
[ † ]: Indicates clades that are completely extinct.
The paleobiologic systematics that follow are not intended to be comprehensive, rather encompass invertebrates that (a) are popularly collected as fossils and/or (b) extinct. As a result, some groups of invertebrates are not listed.
If an invertebrate animal is mentioned below using its common (vernacular) name, it is an extant (living) taxon, but if it is cited by its scientific genus, then it is typically an extinct invertebrate known only from the fossil record.
Invertebrate clades that are important fossils (e.g. ostracods, frequently used as index fossils), and/or clades that are very abundant as fossils (e.g. crinoids, easily found in crinoidal limestone), are highlighted with a bracketed exclamation mark [ ! ].
Domain of Eukaryota/Eukarya
Eukaryotes; eukaryotes are cellular organisms bearing a central, organized nucleus with DNA.
most of the species which have been documented by biologists and paleontologists, extinct or extant, are eukaryotic.
includes: a wide variety of single-celled protists; all algae; most plankton; most molds; the green plants; all an
Document 4:::
The following outline is provided as an overview of and topical guide to zoology:
Zoology – study of animals. Zoology, or "animal biology", is the branch of biology that relates to the animal kingdom, including the identification, structure, embryology, evolution, classification, habits, and distribution of all animals, both living and extinct, and how they interact with their ecosystems. The term is derived from the Ancient Greek words ζῷον (zōon), "animal", and λόγος (logos), "knowledge, study". To study the variety of animals that exist (or have existed), see list of animals by common name and lists of animals.
Essence of zoology
Animal
Fauna
Branches of zoology
Branches by group studied
Arthropodology - study of arthropods as a whole
Carcinology - the study of crustaceans
Myriapodology - study of milli- and centipedes
Arachnology - study of spiders and related animals such as scorpions, pseudoscorpions, and harvestmen, collectively called arachnids
Acarology - study of mites and ticks
Entomology - study of insects
Coleopterology - study of beetles
Lepidopterology - study of butterflies
Melittology - study of bees
Myrmecology - study of ants
Orthopterology - study of grasshoppers
Herpetology - study of amphibians and reptiles
Batrachology - study of amphibians including frogs and toads, salamanders, newts, and caecilians
Cheloniology - study of turtles and tortoises
Saurology - study of lizards
Serpentology - study of snakes
Ichthyology - study of fish
Malacology - study of mollusks
Conchology - study of shells
Teuthology - study of cephalopods
Mammalogy - study of mammals
Cetology - study of cetaceans
Primatology - study of primates
Ornithology - study of birds
Parasitology - study of parasites, their hosts, and the relationship between them
Helminthology - study of parasitic worms (helminths)
Planktology - study of plankton, various small drifting plants, animals and microorganisms that inhabit bodies of water
Protozoology
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The largest phylum in the animal kingdom, arthropod, is primarily comprised of what?
A. insects
B. mammals
C. amphibians
D. reptiles
Answer:
|
|
ai2_arc-429
|
multiple_choice
|
Muscle cells have the ability to store and release large amounts of energy. Which body function is best served by this release of energy?
|
[
"exchanging gases",
"moving body parts",
"absorbing nutrients",
"sending nerve impulses"
] |
B
|
Relevant Documents:
Document 0:::
Muscle memory has been used to describe the observation that various muscle-related tasks seem to be easier to perform after previous practice, even if the task has not been performed in a while. It is as if the muscles “remember”. The term could relate to tasks as disparate as playing the clarinet and weight-lifting, i.e., the observation that strength trained athletes experience a rapid return of muscle mass and strength even after long periods of inactivity.
Background
Until recently such effects were attributed solely to motor learning occurring in the central nervous system. Long-term effects of previous training on the muscle fibers themselves, however, have recently also been observed related to strength training.
Until recently it was generally assumed that the effects of exercise on muscle were reversible, and that after a long period of de-training the muscle fibers returned to their previous state. For strength training this view was recently challenged using imaging techniques revealing specific long-lasting structural changes in muscle fibers after a strength-training episode.
Implications
The notion of a memory mechanism residing in the muscles might have implications for health-related exercise advice, and for exclusion times after doping offences. Muscle memory is probably related to the cell nuclei residing inside the muscle fibers, as is described below.
Cell Nuclei and Muscle Fibers
The muscle cells are the largest cells in the body with a volume thousands of times larger than most other body cells. To support this large volume, the muscle cells are one of the very few in the mammalian body that contain several cell nuclei. Such multinucleated cells are called syncytia. Strength-training increases muscle mass and force mainly by changing the caliber of each fiber rather than increasing the number of fibers. During such fiber enlargement muscle stem cells in the muscle tissue multiply and fuse with pre-existing
Document 1:::
The following outline is provided as an overview of and topical guide to physiology:
Physiology – scientific study of the normal function in living systems. A branch of biology, its focus is in how organisms, organ systems, organs, cells, and biomolecules carry out the chemical or physical functions that exist in a living system.
What type of thing is physiology?
Physiology can be described as all of the following:
An academic discipline
A branch of science
A branch of biology
Branches of physiology
By approach
Applied physiology
Clinical physiology
Exercise physiology
Nutrition physiology
Comparative physiology
Mathematical physiology
Yoga physiology
By organism
Animal physiology
Mammal physiology
Human physiology
Fish physiology
Insect physiology
Plant physiology
By process
Developmental physiology
Ecophysiology
Evolutionary physiology
By subsystem
Cardiovascular physiology
Renal physiology
Defense physiology
Gastrointestinal physiology
Musculoskeletal physiology
Neurophysiology
Respiratory physiology
History of physiology
History of physiology
General physiology concepts
Physiology organizations
American Physiological Society
International Union of Physiological Sciences
Physiology publications
American Journal of Physiology
Experimental Physiology
Journal of Applied Physiology
Persons influential in physiology
List of Nobel laureates in Physiology or Medicine
List of physiologists
See also
Outline of biology
Document 2:::
Normal aging movement control in humans concerns the changes in the muscles, motor neurons, nerves, sensory functions, gait, fatigue, and visual and manual responses in men and women as they get older, but who do not have a neurological, muscular (atrophy, dystrophy...) or neuromuscular disorder. With aging, neuromuscular movements are impaired, though with training or practice, some aspects may be prevented.
Force production
For voluntary force production, action potentials occur in the cortex. They propagate in the spinal cord, the motor neurons and the set of muscle fibers they innervate. This results in a twitch whose properties are driven by two mechanisms: motor unit recruitment and rate coding. Both mechanisms are affected with aging. For instance, the number of motor units may decrease, the size of the motor units, i.e. the number of muscle fibers they innervate, may increase, and the frequency at which the action potentials are triggered may be reduced. Consequently, force production is generally impaired in old adults.
Aging is associated with decreases in muscle mass and strength. These decreases may be partially due to losses of alpha motor neurons. By the age of 70, these losses occur in both proximal and distal muscles. In biceps brachii and brachialis, old adults show decreased strength (by 1/3) correlated with a reduction in the number of motor units (by 1/2). Old adults show evidence that remaining motor units may become larger as motor units innervate collateral muscle fibers.
In first dorsal interosseus, almost all motor units are recruited at moderate rate coding, leading to 30-40% of maximal voluntary contraction (MVC). Motor unit discharge rates measured at 50% MVC are not significantly different in the young subjects from those observed in the old adults. However, for the maximal effort contractions, there is an appreciable difference in discharge rates between the two age groups. Discharge rates obtained at 100% of MVC are 64% smaller in the old adul
Document 3:::
Exertion is the physical or perceived use of energy. Exertion traditionally connotes a strenuous or costly effort, resulting in generation of force, initiation of motion, or in the performance of work. It often relates to muscular activity and can be quantified, empirically and by measurable metabolic response.
Physical
In physics, exertion is the expenditure of energy against, or inductive of, inertia as described by Isaac Newton's third law of motion. Force exerted equates to work done. The ability to do work can be either positive or negative depending on the direction of exertion relative to gravity. For example, a force exerted upwards, like lifting an object, creates positive work done on that object.
Exertion often results in force generated, a contributing dynamic of general motion. In mechanics it describes the use of force against a body in the direction of its motion (see vector).
Physiological
Exertion, physiologically, can be described by the initiation of exercise, or, intensive and exhaustive physical activity that causes cardiovascular stress or a sympathetic nervous response. This can be continuous or intermittent exertion.
Exertion requires, of the body, modified oxygen uptake, increased heart rate, and autonomic monitoring of blood lactate concentrations. Mediators of physical exertion include cardio-respiratory and musculoskeletal strength, as well as metabolic capability. This often correlates to an output of force followed by a refractory period of recovery. Exertion is limited by cumulative load and repetitive motions.
Muscular energy reserves, or stores for biomechanical exertion, stem from metabolic, immediate production of ATP and increased oxygen consumption. Muscular exertion generated depends on the muscle length and the velocity at which it is able to shorten, or contract.
Perceived exertion can be explained as subjective, perceived experience that mediates response to somatic sensations and mechanisms. A rating of pe
Document 4:::
Kinesiogenomics refers to the study of genetics in the various disciplines of the field of kinesiology, the study of human movement. The field has also been referred to as "exercise genomics" or "exercisenomics." Areas of study within kinesiogenomics include the role of gene sequence variation (i.e., alleles) in sport performance, identification of genes (and their different alleles) that contribute to the response and adaptation of the body's tissue systems (e.g., muscles, heart, metabolism, etc.) to various exercise-related stimuli, the use of genetic testing to predict sport performance or individualize exercise prescription, and gene doping, the potential for genetic therapy to be used to enhance sport performance.
The field of kinesiogenomics is relatively new, though two books have outlined basic concepts. A regularly published review article entitled, "The human gene map for performance and health-related fitness phenotypes," describes the genes that have been studied in relation to specific exercise- and fitness-related traits. The most recent (seventh) update was published in 2009.
Research
Within the field of kinesiogenomics, several research studies have been conducted in recent years. This increase in research has led to advancements in knowledge of how genes and gene sequencing affect a person's exercise habits and health. One study focusing on twins looked to see the effect of genes on exercise ability, the effects of exercise on mood, and the ability to lose weight. The research concluded that genetics had a significant impact on the likelihood an individual would participate in exercise. An increase in participation can be linked to personality factors such as self-motivation and self-discipline, while a lower participation in exercise can be influenced by factors such as anxiety and depression. These personality traits, both positive and negative, can be associated with one's genetic makeup.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Muscle cells have the ability to store and release large amounts of energy. Which body function is best served by this release of energy?
A. exchanging gases
B. moving body parts
C. absorbing nutrients
D. sending nerve impulses
Answer:
|
|
sciq-1419
|
multiple_choice
|
Linked genes are located on the same what?
|
[
"bacterium",
"genome",
"chromosome",
"nucleolus"
] |
C
|
Relevant Documents:
Document 0:::
The National Centre for Biotechnology Education (NCBE) is a national resource centre at the University of Reading to teach pre-university biotechnology in schools in the UK. It was founded in 1990.
History
It began as the National Centre for School Biotechnology (NCSB) in 1985 in the Department of Microbiology. It became the NCBE in 1990. For many years it was the only centre in Europe that was devoted to the teaching of biotechnology in schools. The Dolan DNA Learning Center had been set up in the USA.
It was set up as an education project by the Society for General Microbiology, now the Microbiology Society. Money from the Laboratory of the Government Chemist set up the National Centre for School Biotechnology (NCSB). Money also came from the Gatsby Charitable Foundation. For the first five years, the UK government's DTI was involved, but from 1990 onwards wanted the organization to become self-supporting as it had to cut back on budgets. By 1992 the government provided no money for the centre.
Structure
The site was set up in former buildings of the University of Reading's Department of Microbiology.
Function
It reaches out to schools to give up-to-date information on biotechnology. Biotechnology is a rapidly evolving subject, and schools cannot keep up-to-date with all that they would be required to know. It produces educational resources. It runs the Microbiology in Schools Advisory Committee (MISAC).
See also
Centre for Industry Education Collaboration at York
National Centre for Excellence in the Teaching of Mathematics, University of York
Science and Plants for Schools, another well-known science resource for UK schools
Document 1:::
The Oxford Centre for Gene Function is a multidisciplinary research institute in the University of Oxford, England. It is directed by Frances Ashcroft, Kay Davies and Peter Donnelly.
It involves the departments of Human anatomy and genetics, Physiology, and Statistics.
External links
Oxford Centre for Gene Function website
Wellcome Trust Centre for Human Genetics
Document 2:::
MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States.
Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students; the remainder require a subscription. The service is suspended with the message:
"Please check back with us in 2017".
External links
MicrobeLibrary
Document 3:::
The School of Biological Sciences is a School within the Faculty Biology, Medicine and Health at The University of Manchester. Biology at University of Manchester and its precursor institutions has gone through a number of reorganizations (see History below), the latest of which was the change from a Faculty of Life Sciences to the current School.
Academics
Research
The School, though unitary for teaching, is divided into a number of broadly defined sections for research purposes, these sections consist of: Cellular Systems, Disease Systems, Molecular Systems, Neuro Systems and Tissue Systems.
Research in the School is structured into multiple research groups including the following themes:
Cell-Matrix Research (part of the Wellcome Trust Centre for Cell-Matrix Research)
Cell Organisation and Dynamics
Computational and Evolutionary Biology
Developmental Biology
Environmental Research
Eye and Vision Sciences
Gene Regulation and Cellular Biotechnology
History of Science, Technology and Medicine
Immunology and Molecular Microbiology
Molecular Cancer Studies
Neurosciences (part of the University of Manchester Neurosciences Research Institute)
Physiological Systems & Disease
Structural and Functional Systems
The School hosts a number of research centres, including: the Manchester Centre for Biophysics and Catalysis, the Wellcome Trust Centre for Cell-Matrix Research, the Centre of Excellence in Biopharmaceuticals, the Centre for the History of Science, Technology and Medicine, the Centre for Integrative Mammalian Biology, and the Healing Foundation Centre for Tissue Regeneration. The Manchester Collaborative Centre for Inflammation Research is a joint endeavour with the Faculty of Medical and Human Sciences of Manchester University and industrial partners.
Research Assessment Exercise (2008)
The faculty entered research into the units of assessment (UOA) for Biological Sciences and Pre-clinical and Human Biological Sciences. In Biological Sciences 20% of outputs
Document 4:::
Genetics (from Ancient Greek γενετικός genetikos, "genitive", and that from γένεσις genesis, "origin"), a discipline of biology, is the science of heredity and variation in living organisms.
Articles (arranged alphabetically) related to genetics include:
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Linked genes are located on the same what?
A. bacterium
B. genome
C. chromosome
D. nucleolus
Answer:
|
|
sciq-1293
|
multiple_choice
|
What is at the top of the mesosphere?
|
[
"Troposphere",
"Mesosphere",
"Stratosphere",
"mesopause"
] |
D
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods include
Document 1:::
Tech City College (Formerly STEM Academy) is a free school sixth form located in the Islington area of the London Borough of Islington, England.
It originally opened in September 2013, as STEM Academy Tech City and specialised in Science, Technology, Engineering and Maths (STEM) and the Creative Application of Maths and Science. In September 2015, STEM Academy joined the Aspirations Academy Trust was renamed Tech City College. Tech City College offers A-levels and BTECs as programmes of study for students.
Document 2:::
Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers.
There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by organizations such as the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (Humanities, Arts, and Social Sciences), rebranded in 2020 as SHAPE (Social Sciences, Humanities and the Arts for People and the Economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM.
Terminology
History
Previously referred to as SMET by the NSF, in the early 1990s the acronym STEM was used by a variety of educators, including Charles E. Vela, the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE). Moreover, the CAHSEE started a summer program for talented under-represented students in the Washington, D.C., area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering edu
Document 3:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 4:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is at the top of the mesosphere?
A. Troposphere
B. Mesosphere
C. Stratosphere
D. Mesopause
Answer:
|
|
sciq-2622
|
multiple_choice
|
What type of organism was the only one able to live in the anoxic atmosphere of the first 2 billion years?
|
[
"protists",
"anaerobic",
"mammals",
"aerobic"
] |
B
|
Relevant Documents:
Document 0:::
The Boring Billion, otherwise known as the Mid Proterozoic and Earth's Middle Ages, is the time period between 1.8 and 0.8 billion years ago (Ga) spanning the middle Proterozoic eon, characterized by more or less tectonic stability, climatic stasis, and slow biological evolution. It is bordered by two different oxygenation and glacial events, but the Boring Billion itself had very low oxygen levels and no evidence of glaciation.
The oceans may have been oxygen- and nutrient-poor and sulfidic (euxinia), populated by mainly anoxygenic purple bacteria, a type of chlorophyll-based photosynthetic bacteria which uses hydrogen sulfide (H2S) instead of water and produces sulfur instead of oxygen. This is known as a Canfield ocean. Such composition may have caused the oceans to be black- and milky-turquoise instead of blue. (By contrast, during the much earlier Purple Earth phase the photosynthesis was retinal-based.)
Despite such adverse conditions, eukaryotes may have evolved around the beginning of the Boring Billion, and adopted several novel adaptations, such as various organelles, multicellularity, and possibly sexual reproduction, and diversified into plants, animals, and fungi at the end of this time interval. Such advances may have been important precursors to the evolution of large, complex life later in the Ediacaran and Phanerozoic. Nonetheless, prokaryotic cyanobacteria were the dominant lifeforms during this time, and likely supported an energy-poor food-web with a small number of protists at the apex level. The land was likely inhabited by prokaryotic cyanobacteria and eukaryotic proto-lichens, the latter more successful here probably due to the greater availability of nutrients than in offshore ocean waters.
Description
In 1995, geologists Roger Buick, Davis Des Marais, and Andrew Knoll reviewed the apparent lack of major biological, geological, and climatic events during the Mesoproterozoic era 1.6 to 1 billion years ago (Ga), and, thus, described it as
Document 1:::
The evolution of bacteria has progressed over billions of years since the Precambrian time with their first major divergence from the archaeal/eukaryotic lineage roughly 3.2-3.5 billion years ago. This was discovered through gene sequencing of bacterial nucleoids to reconstruct their phylogeny. Furthermore, evidence of permineralized microfossils of early prokaryotes was also discovered in the Australian Apex Chert rocks, dating back roughly 3.5 billion years ago during the time period known as the Precambrian time. This suggests that an organism of the phylum Thermotogota (formerly Thermotogae) was the most recent common ancestor of modern bacteria.
Further chemical and isotopic analysis of ancient rock reveals that by the Siderian period, roughly 2.45 billion years ago, oxygen had appeared. This indicates that oceanic, photosynthetic cyanobacteria evolved during this period because they were the first microbes to produce oxygen as a byproduct of their metabolic process. Therefore, this phylum was thought to have been predominant roughly 2.3 billion years ago. However, some scientists argue they could have lived as early as 2.7 billion years ago, as this was roughly before the time of the Great Oxygenation Event, meaning oxygen levels had time to increase in the atmosphere before it altered the ecosystem during this event.
The rise in atmospheric oxygen led to the evolution of Pseudomonadota (formerly proteobacteria). Today this phylum includes many nitrogen fixing bacteria, pathogens, and free-living microorganisms. This phylum evolved approximately 1.5 billion years ago during the Paleoproterozoic era.
However, there are still many conflicting theories surrounding the origins of bacteria. Even though microfossils of ancient bacteria have been discovered, some scientists argue that the lack of identifiable morphology in these fossils means they can not be utilised to draw conclusions on an accurate evolutionary timeline of bacteria. Nevertheless, more recent
Document 2:::
The history of life on Earth traces the processes by which living and fossil organisms evolved, from the earliest emergence of life to present day. Earth formed about 4.5 billion years ago (abbreviated as Ga, for gigaannum) and evidence suggests that life emerged prior to 3.7 Ga. Although there is some evidence of life as early as 4.1 to 4.28 Ga, it remains controversial due to the possible non-biological formation of the purported fossils.
The similarities among all known present-day species indicate that they have diverged through the process of evolution from a common ancestor. Only a very small percentage of species have been identified: one estimate claims that Earth may have 1 trillion species. However, only 1.75–1.8 million have been named and 1.8 million documented in a central database. These currently living species represent less than one percent of all species that have ever lived on Earth.
The earliest evidence of life comes from biogenic carbon signatures and stromatolite fossils discovered in 3.7 billion-year-old metasedimentary rocks from western Greenland. In 2015, possible "remains of biotic life" were found in 4.1 billion-year-old rocks in Western Australia. In March 2017, putative evidence of possibly the oldest forms of life on Earth was reported in the form of fossilized microorganisms discovered in hydrothermal vent precipitates in the Nuvvuagittuq Belt of Quebec, Canada, that may have lived as early as 4.28 billion years ago, not long after the oceans formed 4.4 billion years ago, and not long after the formation of the Earth 4.54 billion years ago.
Microbial mats of coexisting bacteria and archaea were the dominant form of life in the early Archean eon and many of the major steps in early evolution are thought to have taken place in this environment. The evolution of photosynthesis by cyanobacteria, around 3.5 Ga, eventually led to a buildup of its waste product, oxygen, in the ocean and then the atmosphere after depleting all available
Document 3:::
Natural History of an Alien, also known as Anatomy of an Alien in the US, is an early Discovery Channel pseudo-documentary similar to Alien Planet, aired in 1998. This pseudo-documentary featured various alien ecosystem projects from the Epona Project to Ringworld. It also featured many notable scientists and science fiction authors such as Dr. Jack Cohen, Derek Briggs, Christopher McKay, David Wynn-Williams, Emily Holton, Peter Cattermole, Brian Aldiss, Sil Read, Wolf Read, Edward K. Smallwood, Adega Zuidema, Steve Hanly, Kevin Warwick and Dougal Dixon.
Plot
The viewer is in an intergalactic spaceship named the S.S. Attenborough, run by a small green alien.
Cambrian Earth
Earth during the Cambrian.
Mars
Asteroids
The documentary visits asteroids and talks about the possibility of panspermia seeding solar system with life.
Europa
Featured organisms
Europa Cone Bacteria: Orange-gray bacteria that grow in huge towers that rise many miles above the ocean floor. Inside these vents, warm water rises, nourishing layer upon layer of bacteria.
Europa Sea Vent Herbivore: A giant, gray, shark-like swimmer that feeds on bacteria in schools with a suction cup-like mouth on an extended, Opabinia-like trunk. These trunk-shaped mouths pierce the vents to suck in vast quantities of bacteria. These grazers are territorial and, like squid on Earth, flash warning glows to drive away rivals. They make a series of dolphin-like cries.
Europa Sea Vent Carnivore: A predatory, yellow-green, echolocating, streamlined, shark-like swimmer that is built for speed and preys on the Europa Sea Vent Herbivores. Like the Europa Sea Vent Herbivores, the Europa Sea Vent Carnivores also have an Opabinia-like snout, which they use to kill their prey.
High Gravity Planet
The next world visited is a high gravity planet home to many insect-like aliens who have adapted to 1.5 times Earth's gravity. High gravity means a thicker atmosphere (the planet in question having an atmosphere 15 times as den
Document 4:::
Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology.
Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago.
Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of organism was the only one able to live in the anoxic atmosphere of the first 2 billion years?
A. protists
B. anaerobic
C. mammals
D. aerobic
Answer:
|
|
ai2_arc-752
|
multiple_choice
|
Two pure substances combine to make a new substance. The new substance cannot be physically separated and has a different boiling point than each of the original substances. This new substance can best be classified as
|
[
"an atom.",
"a mixture.",
"an element.",
"a compound."
] |
D
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
1. increases
2. decreases
3. stays the same
4. Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In chemistry, a mixture is a material made up of two or more different chemical substances which are not chemically bonded. A mixture is the physical combination of two or more substances in which the identities are retained and are mixed in the form of solutions, suspensions and colloids.
Mixtures are one product of mechanically blending or mixing chemical substances such as elements and compounds, without chemical bonding or other chemical change, so that each ingredient substance retains its own chemical properties and makeup. Despite the fact that there are no chemical changes to its constituents, the physical properties of a mixture, such as its melting point, may differ from those of the components. Some mixtures can be separated into their components by using physical (mechanical or thermal) means. Azeotropes are one kind of mixture that usually poses considerable difficulties regarding the separation processes required to obtain their constituents (physical or chemical processes or, even a blend of them).
Characteristics of mixtures
All mixtures can be characterized as being separable by mechanical means (e.g. purification, distillation, electrolysis, chromatography, heat, filtration, gravitational sorting, centrifugation). Mixtures differ from chemical compounds in the following ways:
the substances in a mixture can be separated using physical methods such as filtration, freezing, and distillation.
there is little or no energy change when a mixture forms (see Enthalpy of mixing).
the substances in a mixture keep their separate properties.
In the example of sand and water, neither one of the two substances changed in any way when they are mixed. Although the sand is in the water it still keeps the same properties that it had when it was outside the water.
mixtures have variable compositions, while compounds have a fixed, definite formula.
when mixed, individual substances keep their properties in a mixture, while if they form a compound their properties
Document 2:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about the subject is then a subset of that set; the set of
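As an illustrative sketch (not from the source; all skill names and helper functions here are hypothetical), a knowledge space can be represented as a family of feasible sets of skills. The antimatroid structure mentioned above implies, among other things, closure under union, and it lets us compute what a learner is "ready to learn" from a given state:

```python
# Hypothetical sketch: a tiny knowledge space over three skills.
# A "state" is a frozenset of mastered skills; the space is the
# collection of feasible states.

skills = {"counting", "addition", "multiplication"}

# Feasible states: addition requires counting; multiplication requires addition.
states = {
    frozenset(),
    frozenset({"counting"}),
    frozenset({"counting", "addition"}),
    frozenset({"counting", "addition", "multiplication"}),
}

def closed_under_union(space):
    """Check one antimatroid axiom: the union of feasible states is feasible."""
    return all(a | b in space for a in space for b in space)

def outer_fringe(state, space):
    """Skills the learner is ready to learn: adding any one keeps the state feasible."""
    return {s for s in skills - state if state | {s} in space}

print(closed_under_union(states))                      # True
print(outer_fringe(frozenset({"counting"}), states))   # {'addition'}
```

The `outer_fringe` computation is exactly the "what the student is ready to learn" report that the theory aims to provide.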
Document 3:::
In chemistry, solubility is the ability of a substance, the solute, to form a solution with another substance, the solvent. Insolubility is the opposite property, the inability of the solute to form such a solution.
The extent of the solubility of a substance in a specific solvent is generally measured as the concentration of the solute in a saturated solution, one in which no more solute can be dissolved. At this point, the two substances are said to be at the solubility equilibrium. For some solutes and solvents, there may be no such limit, in which case the two substances are said to be "miscible in all proportions" (or just "miscible").
The solute can be a solid, a liquid, or a gas, while the solvent is usually solid or liquid. Both may be pure substances, or may themselves be solutions. Gases are always miscible in all proportions, except in very extreme situations, and a solid or liquid can be "dissolved" in a gas only by passing into the gaseous state first.
The solubility mainly depends on the composition of solute and solvent (including their pH and the presence of other dissolved substances) as well as on temperature and pressure. The dependency can often be explained in terms of interactions between the particles (atoms, molecules, or ions) of the two substances, and of thermodynamic concepts such as enthalpy and entropy.
Under certain conditions, the concentration of the solute can exceed its usual solubility limit. The result is a supersaturated solution, which is metastable and will rapidly exclude the excess solute if a suitable nucleation site appears.
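The relationship between a solute's concentration and its solubility limit can be made concrete with a small sketch (illustrative only; the function name and the approximate solubility figure are assumptions, not from the source):

```python
def classify_solution(concentration_g_per_100ml, solubility_g_per_100ml):
    """Classify a solution relative to the solubility limit of its solute."""
    if concentration_g_per_100ml < solubility_g_per_100ml:
        return "unsaturated"      # more solute can still dissolve
    if concentration_g_per_100ml == solubility_g_per_100ml:
        return "saturated"        # at solubility equilibrium
    return "supersaturated"       # metastable; excess solute may precipitate

# NaCl solubility in water at 25 degrees C is roughly 36 g per 100 mL.
print(classify_solution(20.0, 36.0))  # unsaturated
print(classify_solution(40.0, 36.0))  # supersaturated
```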
The concept of solubility does not apply when there is an irreversible chemical reaction between the two substances, such as the reaction of calcium hydroxide with hydrochloric acid; even though one might say, informally, that one "dissolved" the other. The solubility is also not the same as the rate of solution, which is how fast a solid solute dissolves in a liquid solvent. This property de
Document 4:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Two pure substances combine to make a new substance. The new substance cannot be physically separated and has a different boiling point than each of the original substances. This new substance can best be classified as
A. an atom.
B. a mixture.
C. an element.
D. a compound.
Answer:
|
|
sciq-3260
|
multiple_choice
|
Roots, stems and leaves are organs commonly found in what?
|
[
"plants",
"algae",
"animals",
"fungi"
] |
A
|
Relevant Documents:
Document 0:::
A stem is one of two main structural axes of a vascular plant, the other being the root. It supports leaves, flowers, and fruits; transports water and dissolved substances between the roots and the shoots in the xylem and phloem; carries out photosynthesis; stores nutrients; and produces new living tissue. The stem can also be called a halm, haulm, or culm.
The stem is normally divided into nodes and internodes:
The nodes are the points of attachment for leaves and can hold one or more leaves. There are sometimes axillary buds between the stem and leaf which can grow into branches (with leaves, conifer cones, or flowers). Adventitious roots may also be produced from the nodes. Vines may produce tendrils from nodes.
The internodes distance one node from another.
The term "shoots" is often confused with "stems"; "shoots" generally refers to new fresh plant growth, including both stems and other structures like leaves or flowers.
In most plants, stems are located above the soil surface, but some plants have underground stems.
Stems have several main functions:
Support for and the elevation of leaves, flowers, and fruits. The stems keep the leaves in the light and provide a place for the plant to keep its flowers and fruits.
Transport of fluids between the roots and the shoots in the xylem and phloem.
Storage of nutrients.
Production of new living tissue. The normal lifespan of plant cells is one to three years. Stems have cells called meristems that annually generate new living tissue.
Photosynthesis.
Stems have two pipe-like tissues called xylem and phloem. The xylem tissue arises from the cell facing inside and transports water by the action of transpiration pull, capillary action, and root pressure. The phloem tissue arises from the cell facing outside and consists of sieve tubes and their companion cells. The function of phloem tissue is to distribute food from photosynthetic tissue to other tissues. The two tissues are separated by cambium, a tis
Document 1:::
Edible plant stems are one part of plants that are eaten by humans. Most plants are made up of stems, roots, leaves, and flowers, and produce fruits containing seeds. Humans most commonly eat the seeds (e.g. maize, wheat), fruit (e.g. tomato, avocado, banana), flowers (e.g. broccoli), leaves (e.g. lettuce, spinach, and cabbage), roots (e.g. carrots, beets), and stems (e.g. asparagus) of many plants. There are also a few edible petioles (also known as leaf stems) such as celery or rhubarb.
Plant stems have a variety of functions. Stems support the entire plant and have buds, leaves, flowers, and fruits. Stems are also a vital connection between leaves and roots. They conduct water and mineral nutrients through xylem tissue from roots upward, and organic compounds and some mineral nutrients through phloem tissue in any direction within the plant. Apical meristems, located at the shoot tip and axillary buds on the stem, allow plants to increase in length, surface, and mass. In some plants, such as cactus, stems are specialized for photosynthesis and water storage.
Modified stems
Typical stems are located above ground, but there are modified stems that can be found either above or below ground. Modified stems located above ground are phylloids, stolons, runners, or spurs. Modified stems located below ground are corms, rhizomes, and tubers.
Detailed description of edible plant stems
Asparagus The edible portion is the rapidly emerging stems that arise from the crowns in the
Bamboo The edible portion is the young shoot (culm).
Birch Trunk sap is drunk as a tonic or rendered into birch syrup, vinegar, beer, soft drinks, and other foods.
Broccoli The edible portion is the peduncle stem tissue, flower buds, and some small leaves.
Cauliflower The edible portion is proliferated peduncle and flower tissue.
Cinnamon Many favor the unique sweet flavor of the inner bark of cinnamon, and it is commonly used as a spice.
Fig The edible portion is stem tissue. The
Document 2:::
In biology, tissue is a historically derived biological organizational level between cells and a complete organ. A tissue is therefore often thought of as an assembly of similar cells and their extracellular matrix from the same embryonic origin that together carry out a specific function. Organs are then formed by the functional grouping together of multiple tissues.
Biological organisms follow this hierarchy:
Cells < Tissue < Organ < Organ System < Organism
The English word "tissue" derives from the French word "tissu", the past participle of the verb tisser, "to weave".
The study of tissues is known as histology or, in connection with disease, as histopathology. Xavier Bichat is considered as the "Father of Histology". Plant histology is studied in both plant anatomy and physiology. The classical tools for studying tissues are the paraffin block in which tissue is embedded and then sectioned, the histological stain, and the optical microscope. Developments in electron microscopy, immunofluorescence, and the use of frozen tissue-sections have enhanced the detail that can be observed in tissues. With these tools, the classical appearances of tissues can be examined in health and disease, enabling considerable refinement of medical diagnosis and prognosis.
Plant tissue
In plant anatomy, tissues are categorized broadly into three tissue systems: the epidermis, the ground tissue, and the vascular tissue.
Epidermis – Cells forming the outer surface of the leaves and of the young plant body.
Vascular tissue – The primary components of vascular tissue are the xylem and phloem. These transport fluids and nutrients internally.
Ground tissue – Ground tissue is less differentiated than other tissues. Ground tissue manufactures nutrients by photosynthesis and stores reserve nutrients.
Plant tissues can also be divided differently into two types:
Meristematic tissues
Permanent tissues.
Meristematic tissue
Meristematic tissue consists of actively dividing cell
Document 3:::
Phytomorphology is the study of the physical form and external structure of plants. This is usually considered distinct from plant anatomy, which is the study of the internal structure of plants, especially at the microscopic level. Plant morphology is useful in the visual identification of plants. Recent studies in molecular biology started to investigate the molecular processes involved in determining the conservation and diversification of plant morphologies. In these studies transcriptome conservation patterns were found to mark crucial ontogenetic transitions during the plant life cycle which may result in evolutionary constraints limiting diversification.
Scope
Plant morphology "represents a study of the development, form, and structure of plants, and, by implication, an attempt to interpret these on the basis of similarity of plan and origin". There are four major areas of investigation in plant morphology, and each overlaps with another field of the biological sciences.
First of all, morphology is comparative, meaning that the morphologist examines structures in many different plants of the same or different species, then draws comparisons and formulates ideas about similarities. When structures in different species are believed to exist and develop as a result of common, inherited genetic pathways, those structures are termed homologous. For example, the leaves of pine, oak, and cabbage all look very different, but share certain basic structures and arrangement of parts. The homology of leaves is an easy conclusion to make. The plant morphologist goes further, and discovers that the spines of cactus also share the same basic structure and development as leaves in other plants, and therefore cactus spines are homologous to leaves as well. This aspect of plant morphology overlaps with the study of plant evolution and paleobotany.
Secondly, plant morphology observes both the vegetative (somatic) structures of plants, as well as the reproductive str
Document 4:::
Rhizoids are protuberances that extend from the lower epidermal cells of bryophytes and algae. They are similar in structure and function to the root hairs of vascular land plants. Similar structures are formed by some fungi. Rhizoids may be unicellular or multicellular.
Evolutionary development
Plants originated in aquatic environments and gradually migrated to land during their long course of evolution. In water or near it, plants could absorb water from their surroundings, with no need for any special absorbing organ or tissue. Additionally, in the primitive states of plant development, tissue differentiation and division of labor was minimal, thus specialized water absorbing tissue was not required. The development of specialized tissues to absorb water efficiently and anchor themselves to the ground enabled the spread of plants to the land.
Description
Rhizoids absorb water mainly by capillary action, in which water moves up between threads of rhizoids and not through each of them as it does in roots, but some species of bryophytes do have the ability to take up water inside their rhizoids.
Land plants
In land plants, rhizoids are trichomes that anchor the plant to the ground. In the liverworts, they are absent or unicellular, but multicellular in mosses. In vascular plants they are often called root hairs, and may be unicellular or multicellular.
Algae
In certain algae, there is an extensive rhizoidal system that allows the alga to anchor itself to a sandy substrate from which it can absorb nutrients. Microscopic free-floating species, however, do not have rhizoids at all.
Fungi
In fungi, rhizoids are small branching hyphae that grow downwards from the stolons that anchor the fungus to the substrate, where they release digestive enzymes and absorb digested organic material. That is why fungi are called heterotrophs by absorption.
See also
Rhizine, the equivalent structure in lichens
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Roots, stems and leaves are organs commonly found in what?
A. plants
B. algae
C. animals
D. fungi
Answer:
|
|
ai2_arc-314
|
multiple_choice
|
In a grassland ecosystem, if the population of eagles suddenly decreased, what will most likely be the effect on the rest of the ecosystem?
|
[
"The ecosystem will become overpopulated with snakes.",
"There will be a decrease in the population of snakes in the ecosystem.",
"The nutrition of the soil in the ecosystem will decrease.",
"More types of plants will begin growing in the ecosystem."
] |
A
|
Relevant Documents:
Document 0:::
Ecological forecasting uses knowledge of physics, ecology and physiology to predict how ecological populations, communities, or ecosystems will change in the future in response to environmental factors such as climate change. The goal of the approach is to provide natural resource managers with information to anticipate and respond to short and long-term climate conditions.
Changing climate conditions present ecologists with the challenge to predict where, when and with what magnitude changes are likely to occur so that we can mitigate or at least prepare for them. Ecological forecasting applies existing knowledge of ecosystem interactions to predict how changes in environmental factors might result in changes to the ecosystems as a whole.
One of the most complete sources on the topic is the book Ecological Forecasting written by Michael C. Dietze.
Methods
Ecologists shifted towards Bayesian methods starting in 1990, when improvements in computational power allowed the use of more demanding computational statistics such as Hierarchical Bayes. This kind of analysis employs a Bayesian Network that provides a probabilistic graphical model of a set of parameters, and can accommodate unobserved variables. A Bayesian structure is a probabilistic approach that is flexible for high-dimensional data, and allows ecologists to separate sources of uncertainty in their models.
Forecasts can leverage Bayes' Theorem and be iteratively updated with new observations using a process called Data Assimilation. Data Assimilation combines observations on different temporal and geographic scales with forecasts, all of which combine to provide more information than any one data source alone. Some ecologists have found this framework to be useful for ecological models as they often rely on a wide range of data sources.
Models
Ecological forecasting varies in spatial and temporal extent, as well as in what is being forecast (presence, abundance, diversity, production, etc.).
Population
Document 1:::
In nature and human societies, many phenomena have causal relationships where one phenomenon A (a cause) impacts another phenomenon B (an effect). Establishing causal relationships is the aim of many scientific studies across fields ranging from biology and physics to social sciences and economics. It is also a subject of accident analysis, and can be considered a prerequisite for effective policy making.
To describe causal relationships between phenomena, non-quantitative visual notations are common, such as arrows, e.g. in the nitrogen cycle or many chemistry and mathematics textbooks. Mathematical conventions are also used, such as plotting an independent variable on a horizontal axis and a dependent variable on a vertical axis, or the notation y(x) to denote that a quantity "y" is a dependent variable which is a function of an independent variable "x". Causal relationships are also described using quantitative mathematical expressions.
The following examples illustrate various types of causal relationships. These are followed by different notations used to represent causal relationships.
Examples
What follows does not necessarily assume the convention whereby x denotes an independent variable and y denotes a function of the independent variable x. Instead, x and y denote two quantities with an a priori unknown causal relationship, which can be related by a mathematical expression.
Ecosystem example: correlation without causation
Imagine the number of days of weather below zero degrees Celsius, x, causes ice to form on a lake, y, and it causes bears to go into hibernation, z. Even though y does not cause z and vice-versa, one can write an equation relating y and z. This equation may be used to successfully calculate the number of hibernating bears z, given the surface area of the lake covered by ice y. However, melting the ice in a region of the lake by pouring salt onto it will not cause bears to come out of hibernation. Nor will waking the bears by physically disturbing the
Document 2:::
Overpopulation or overabundance is a phenomenon in which a species' population becomes larger than the carrying capacity of its environment. This may be caused by increased birth rates, lowered mortality rates, reduced predation or large scale migration, leading to an overabundant species and other animals in the ecosystem competing for food, space, and resources. The animals in an overpopulated area may then be forced to migrate to areas not typically inhabited, or die off without access to necessary resources.
Judgements regarding overpopulation always involve both facts and values. Animals often are judged overpopulated when their numbers cause impacts that people find dangerous, damaging, expensive, or otherwise harmful. Societies may be judged overpopulated when their human numbers cause impacts that degrade ecosystem services, decrease human health and well-being, or crowd other species out of existence.
Background
In ecology, overpopulation is a concept used primarily in wildlife management. Typically, an overpopulation causes the entire population of the species in question to become weaker, as no single individual is able to find enough food or shelter. Overpopulation is thus characterized by an increase in the diseases and parasite load carried by the species in question, as the entire population is weaker. Other characteristics of overpopulation are lower fecundity, adverse effects on the environment (soil, vegetation or fauna) and lower average body weights. The worldwide increase of deer populations in particular, which usually show irruptive growth, is proving to be of ecological concern. Ironically, where ecologists were preoccupied with conserving or augmenting deer populations only a century ago, the focus has now shifted to the direct opposite, and ecologists are now more concerned with limiting the populations of such animals.
Supplemental feeding of charismatic species or interesting game species is a major problem in causing overp
Document 3:::
Ecological triage refers to decision making in environmental conservation using the concepts of medical triage. In medicine, the allocation of resources in an urgent situation is prioritized for those with the greatest need and those who would receive the greatest benefit. Similarly, the two parameters of ecological triage are the level of threat and the probability of ecological recovery. Because there are limitations to resources such as time, money, and manpower, it is important to prioritize specific efforts and distribute resources efficiently. Ecological triage differentiates between areas with an attainable emergent need, those that would benefit from preventive measures, and those that are beyond repair.
Methods
Ecological triage is not simple, dichotomous decision making. It involves a complex array of factors including assumptions, mathematical calculations, and planning for uncertainties. When assessing an ecosystem, there are a myriad of factors conservationists consider, but there are also variables which they are unable to account for. Conservationists and scientists often have incomplete understanding of population dynamics, impacts of external threats, and efficacy of different conservation tactics. It is important to incorporate these unknowns when assessing a population or ecosystem. By following the principles of triage, we are able to allow for the efficient allocation of resources as conservationists continue to develop the best options for ecological preservation and restoration.
Info-Gap Decision Model
Due to the multitude of variables within a population or ecosystem, it is important to address the unknown factors which may not initially be accounted for. Many ecologists utilize the Info-gap decision theory, which focuses on strategies that are most likely to succeed despite uncertainties. This process is composed of three main elements:
Mathematical calculations which assess performance as a result of management. This step determine
Document 4:::
Wildlife crossings are structures that allow animals to cross human-made barriers safely. Wildlife crossings may include underpass tunnels or wildlife tunnels, viaducts, and overpasses or green bridges (mainly for large or herd-type animals); amphibian tunnels; fish ladders; canopy bridges (especially for monkeys and squirrels); tunnels and culverts (for small mammals such as otters, hedgehogs, and badgers); and green roofs (for butterflies and birds).
Wildlife crossings are a practice in habitat conservation, allowing connections or reconnections between habitats, combating habitat fragmentation. They also assist in avoiding collisions between vehicles and animals, which in addition to killing or injuring wildlife may cause injury to humans and property damage.
Similar structures can be used for domesticated animals, such as cattle creeps.
Roads and habitat fragmentation
Habitat fragmentation occurs when human-made barriers such as roads, railroads, canals, electric power lines, and pipelines penetrate and divide wildlife habitat. Of these, roads have the most widespread and detrimental effects. Scientists estimate that the system of roads in the United States affects the ecology of at least one-fifth of the land area of the country. For many years ecologists and conservationists have documented the adverse relationship between roads and wildlife, and identify four ways that roads and traffic detrimentally affect wildlife populations: (1) they decrease habitat amount and quality, (2) they increase mortality due to wildlife-vehicle collisions (road kill), (3) they prevent access to resources on the other side of the road, and (4) they subdivide wildlife populations into smaller and more vulnerable sub-populations (fragmentation). Habitat fragmentation can lead to extinction or extirpation if a population's gene pool is restricted enough.
The first three effects (loss of habitat, road kill, and isolation from resources) exert pressure on various animal populations by
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In a grassland ecosystem, if the population of eagles suddenly decreased, what will most likely be the effect on the rest of the ecosystem?
A. The ecosystem will become overpopulated with snakes.
B. There will be a decrease in the population of snakes in the ecosystem.
C. The nutrition of the soil in the ecosystem will decrease.
D. More types of plants will begin growing in the ecosystem.
Answer:
|
|
sciq-11365
|
multiple_choice
|
What is the term for traits that show wide variation, such as height, skin color, and eye color?
|
[
"inherited traits",
"maladaptive traits",
"polygenic traits",
"recessive traits"
] |
C
|
Relevant Documents:
Document 0:::
Quantitative genetics deals with quantitative traits, which are phenotypes that vary continuously (such as height or mass)—as opposed to discretely identifiable phenotypes and gene-products (such as eye-colour, or the presence of a particular biochemical).
Both branches use the frequencies of different alleles of a gene in breeding populations (gamodemes), and combine them with concepts from simple Mendelian inheritance to analyze inheritance patterns across generations and descendant lines. While population genetics can focus on particular genes and their subsequent metabolic products, quantitative genetics focuses more on the outward phenotypes, and makes only summaries of the underlying genetics.
Due to the continuous distribution of phenotypic values, quantitative genetics must employ many other statistical methods (such as the effect size, the mean and the variance) to link phenotypes (attributes) to genotypes. Some phenotypes may be analyzed either as discrete categories or as continuous phenotypes, depending on the definition of cut-off points, or on the metric used to quantify them. Mendel himself had to discuss this matter in his famous paper, especially with respect to his peas' attribute tall/dwarf, which actually was "length of stem". Analysis of quantitative trait loci, or QTL, is a more recent addition to quantitative genetics, linking it more directly to molecular genetics.
Gene effects
In diploid organisms, the average genotypic "value" (locus value) may be defined by the allele "effect" together with a dominance effect, and also by how genes interact with genes at other loci (epistasis). The founder of quantitative genetics - Sir Ronald Fisher - perceived much of this when he proposed the first mathematics of this branch of genetics.
Being a statistician, he defined the gene effects as deviations from a central value—enabling the use of statistical concepts such as mean and variance, which use this idea. The central value he chose for the ge
Document 1:::
Genetic variance is a concept outlined by the English biologist and statistician Ronald Fisher in his fundamental theorem of natural selection. In his 1930 book The Genetical Theory of Natural Selection, Fisher postulates that the rate of change of biological fitness can be calculated by the genetic variance of the fitness itself. Fisher tried to give a statistical formula about how the change of fitness in a population can be attributed to changes in the allele frequency. Fisher made no restrictive assumptions in his formula concerning fitness parameters, mate choices or the number of alleles and loci involved.
Definition
Phenotypic variance usually combines the genotype variance with the environmental variance. Genetic variance has three major components: the additive genetic variance, dominance variance, and epistatic variance.
Additive genetic variance involves the inheritance of a particular allele from your parent and this allele's independent effect on the specific phenotype, which will cause the phenotype deviation from the mean phenotype. Dominance genetic variance refers to the phenotype deviation caused by the interactions between alternative alleles that control one trait at one specific locus. Epistatic variance involves an interaction between different alleles in different loci.
Heritability
Heritability refers to how much of the phenotypic variance is due to variance in genetic factors. Usually after we know the total amount of genetic variance that is responsible for a trait, we can calculate the trait heritability. Heritability can be used as an important predictor to evaluate if a population can respond to artificial or natural selection.
Broad-sense heritability, H2 = VG/VP, involves the proportion of phenotypic variation due to the effects of additive, dominance, and epistatic variance. Narrow-sense heritability, h2 = VA/VP, refers to the proportion of phenotypic variation that is due to additive genetic values (VA).
Quantitative formula
Document 2:::
Mendelian traits behave according to the model of monogenic or simple gene inheritance in which one gene corresponds to one trait. Discrete traits (as opposed to continuously varying traits such as height) with simple Mendelian inheritance patterns are relatively rare in nature, and many of the clearest examples in humans cause disorders. Discrete traits found in humans are common examples for teaching genetics.
Mendelian model
According to the model of Mendelian inheritance, alleles may be dominant or recessive, one allele is inherited from each parent, and only those who inherit a recessive allele from each parent exhibit the recessive phenotype. Offspring with either one or two copies of the dominant allele will display the dominant phenotype.
Very few phenotypes are purely Mendelian traits. Common violations of the Mendelian model include incomplete dominance, codominance, genetic linkage, environmental effects, and quantitative contributions from a number of genes (see: gene interactions, polygenic inheritance, oligogenic inheritance).
OMIM (Online Mendelian Inheritance in Man) is a comprehensive database of human genotype–phenotype links. Many visible human traits that exhibit high heritability were included in the older McKusick's Mendelian Inheritance in Man. Before the discovery of genotyping, they were used as genetic markers in medicolegal practice, including in cases of disputed paternity.
Human traits with probable or uncertain simple inheritance patterns
See also
Polygenic inheritance
Trait
Gene interaction
Dominance
Homozygote
Heterozygote
Document 3:::
Complex traits, also known as quantitative traits, are traits that do not behave according to simple Mendelian inheritance laws. More specifically, their inheritance cannot be explained by the genetic segregation of a single gene. Such traits show a continuous range of variation and are influenced by both environmental and genetic factors. Compared to strictly Mendelian traits, complex traits are far more common, and because they can be hugely polygenic, they are studied using statistical techniques such as quantitative genetics and quantitative trait loci (QTL) mapping rather than classical genetics methods. Examples of complex traits include height, circadian rhythms, enzyme kinetics, and many diseases including diabetes and Parkinson's disease. One major goal of genetic research today is to better understand the molecular mechanisms through which genetic variants act to influence complex traits.
History
When Mendel's work on inheritance was rediscovered in 1900, scientists debated whether Mendel's laws could account for the continuous variation observed for many traits. One group known as the biometricians argued that continuous traits such as height were largely heritable, but could not be explained by the inheritance of single Mendelian genetic factors. Work published by Ronald Fisher in 1919 mostly resolved debate by demonstrating that the variation in continuous traits could be accounted for if multiple such factors contributed additively to each trait. However, the number of genes involved in such traits remained undetermined; until recently, genetic loci were expected to have moderate effect sizes and each explain several percent of heritability. After the conclusion of the Human Genome Project in 2001, it seemed that the sequencing and mapping of many individuals would soon allow for a complete understanding of traits' genetic architectures. However, variants discovered through genome-wide association studies (GWASs) accounted for only a small percentag
Document 4:::
Genetics is the study of genes and tries to explain what they are and how they work. Genes are how living organisms inherit features or traits from their ancestors; for example, children usually look like their parents because they have inherited their parents' genes. Genetics tries to identify which traits are inherited and to explain how these traits are passed from generation to generation.
Some traits are part of an organism's physical appearance, such as eye color, height or weight. Other sorts of traits are not easily seen and include blood types or resistance to diseases. Some traits are inherited through genes, which is the reason why tall and thin people tend to have tall and thin children. Other traits come from interactions between genes and the environment, so a child who inherited the tendency to be tall will still be short if poorly nourished. The way our genes and environment interact to produce a trait can be complicated. For example, the chances of somebody dying of cancer or heart disease seem to depend on both their genes and their lifestyle.
Genes are made from a long molecule called DNA, which is copied and inherited across generations. DNA is made of simple units that line up in a particular order within it, carrying genetic information. The language used by DNA is called genetic code, which lets organisms read the information in the genes. This information is the instructions for the construction and operation of a living organism.
The information within a particular gene is not always exactly the same between one organism and another, so different copies of a gene do not always give exactly the same instructions. Each unique form of a single gene is called an allele. As an example, one allele for the gene for hair color could instruct the body to produce much pigment, producing black hair, while a different allele of the same gene might give garbled instructions that fail to produce any pigment, giving white hair. Mutations are random
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for traits that show wide variation, such as height, skin color, and eye color?
A. inherited traits
B. maladaptive traits
C. polygenic traits
D. recessive traits
Answer:
|
|
sciq-2688
|
multiple_choice
|
Early blastomeres can form what if isolated?
|
[
"tumors",
"lesions",
"cancer",
"a complete embryo"
] |
D
|
Relevant Documents:
Document 0:::
In biology, a blastomere is a type of cell produced by cell division (cleavage) of the zygote after fertilization; blastomeres are an essential part of blastula formation, and blastocyst formation in mammals.
Human blastomere characteristics
In humans, blastomere formation begins immediately following fertilization and continues through the first week of embryonic development. About 90 minutes after fertilization, the zygote divides into two cells. The two-cell blastomere state, present after the zygote first divides, is considered the earliest mitotic product of the fertilized oocyte. These mitotic divisions continue and result in a grouping of cells called blastomeres. During this process, the total size of the embryo does not increase, so each division results in smaller and smaller cells. When the zygote contains 16 to 32 blastomeres it is referred to as a morula. These are the preliminary stages in the embryo beginning to form. Once this begins, microtubules within the morula's cytosolic material in the blastomere cells can develop into important membrane functions, such as sodium pumps. These pumps allow the inside of the embryo to fill with blastocoelic fluid, which supports the further growth of life.
The blastomere is considered totipotent; that is, blastomeres are capable of developing from a single cell into a fully fertile adult organism. This has been demonstrated through studies and conjectures made with mouse blastomeres, which have been accepted as true for most mammalian blastomeres as well. Studies have analyzed monozygotic twin mouse blastomeres in their two-cell state, and have found that when one of the twin blastomeres is destroyed, a fully fertile adult mouse can still develop. Thus, it can be assumed that since one of the twin cells was totipotent, the destroyed one originally was as well.
Relative blastomere size within the embryo is dependent not only on the stage of the cleavage, but also on the regularity of the cleavage amongst t
Document 1:::
A blastema (Greek βλάστημα, "offspring") is a mass of cells capable of growth and regeneration into organs or body parts. The changing definition of the word "blastema" has been reviewed by Holland (2021). A broad survey of how blastema has been used over time brings to light a somewhat involved history. The word entered the biomedical vocabulary in 1799 to designate a sinister acellular slime that was the starting point for the growth of cancers, themselves, at the time, thought to be acellular, as reviewed by Hajdu (2011, Cancer 118: 1155-1168). Then, during the early nineteenth century, the definition broadened to include growth zones (still considered acellular) in healthy, normally developing plant and animal embryos. Contemporaneously, cancer specialists dropped the term from their vocabulary, perhaps because they felt a term connoting a state of health and normalcy was not appropriate for describing a pathological condition. During the middle decades of the nineteenth century, Schleiden and Schwann proposed the cell theory, and Remak and Virchow insisted that cells can only be generated by division of existing ones. Consequently, the conception of the blastema changed from acellular to cellular. More specifically, the term came to designate a population of embryonic cells that gave rise to a particular tissue. In short, the term blastema started being used to refer to what modern embryologists increasingly began calling a rudiment or Anlage. Importantly, the term blastema did not yet refer to a mass of undifferentiated-looking cells that accumulates relatively early in a regenerating body part. For instance, Morgan (1900), does not use the term even once in his classic book, “Regeneration.” It was not until the eve of World War 1 that Fritsch (1911, Zool. Jb. Zool. Physiol. 30: 377-472) introduced the term blastema in the modern sense, as now used by contemporary students of regeneration. Currently, the old usage of blastema to refer to a normal embryological
Document 2:::
Embryomics is the identification, characterization and study of the diverse cell types which arise during embryogenesis, especially as this relates to the location and developmental history of cells in the embryo. Cell type may be determined according to several criteria: location in the developing embryo, gene expression as indicated by protein and nucleic acid markers and surface antigens, and also position on the embryogenic tree.
Embryome
There are many cell markers useful in distinguishing, classifying, separating and purifying the numerous cell types present at any given time in a developing organism. These cell markers consist of select RNAs and proteins present inside, and surface antigens present on the surface of, the cells making up the embryo. For any given cell type, these RNA and protein markers reflect the genes characteristically active in that cell type. The catalog of all these cell types and their characteristic markers is known as the organism's embryome. The word is a portmanteau of embryo and genome. “Embryome” may also refer to the totality of the physical cell markers themselves.
Embryogenesis
As an embryo develops from a fertilized egg, the single egg cell splits into many cells, which grow in number and migrate to the appropriate locations inside the embryo at appropriate times during development. As the embryo's cells grow in number and migrate, they also differentiate into an increasing number of different cell types, ultimately turning into the stable, specialized cell types characteristic of the adult organism. Each of the cells in an embryo contains the same genome, characteristic of the species, but the level of activity of each of the many thousands of genes that make up the complete genome varies with, and determines, a particular cell's type (e.g. neuron, bone cell, skin cell, muscle cell, etc.).
During embryo development (embryogenesis), many cell types are present which are not present in the adult organism. These temporary c
Document 3:::
A blastoid is an embryoid, a stem cell-based embryo model which, morphologically and transcriptionally resembles the early, pre-implantation, mammalian conceptus, called the blastocyst. The first blastoids were created by the Nicolas Rivron laboratory by combining mouse embryonic stem cells and mouse trophoblast stem cells. Upon in vitro development, blastoids generate analogs of the primitive endoderm cells, thus comprising analogs of the three founding cell types of the conceptus (epiblast, trophoblast and primitive endoderm), and recapitulate aspects of implantation on being introduced into the uterus of a compatible female. Mouse blastoids have not shown the capacity to support the development of a foetus and are thus generally not considered as an embryo but rather as a model. As compared to other stem cell-based embryo models (e.g., Gastruloids), blastoids model the preimplantation stage and the integrated development of the conceptus including the embryo proper and the two extraembryonic tissues (trophectoderm and primitive endoderm). The blastoid is a model system for the study of mammalian development and disease. It might be useful for the identification of therapeutic targets and preclinical modelling.
Document 4:::
This is a list of cells in humans derived from the three embryonic germ layers – ectoderm, mesoderm, and endoderm.
Cells derived from ectoderm
Surface ectoderm
Skin
Trichocyte
Keratinocyte
Anterior pituitary
Gonadotrope
Corticotrope
Thyrotrope
Somatotrope
Lactotroph
Tooth enamel
Ameloblast
Neural crest
Peripheral nervous system
Neuron
Glia
Schwann cell
Satellite glial cell
Neuroendocrine system
Chromaffin cell
Glomus cell
Skin
Melanocyte
Nevus cell
Merkel cell
Teeth
Odontoblast
Cementoblast
Eyes
Corneal keratocyte
Neural tube
Central nervous system
Neuron
Glia
Astrocyte
Ependymocytes
Muller glia (retina)
Oligodendrocyte
Oligodendrocyte progenitor cell
Pituicyte (posterior pituitary)
Pineal gland
Pinealocyte
Cells derived from mesoderm
Paraxial mesoderm
Mesenchymal stem cell
Osteochondroprogenitor cell
Bone (Osteoblast → Osteocyte)
Cartilage (Chondroblast → Chondrocyte)
Myofibroblast
Fat
Lipoblast → Adipocyte
Muscle
Myoblast → Myocyte
Myosatellite cell
Tendon cell
Cardiac muscle cell
Other
Fibroblast → Fibrocyte
Other
Digestive system
Interstitial cell of Cajal
Intermediate mesoderm
Renal stem cell
Angioblast → Endothelial cell
Mesangial cell
Intraglomerular
Extraglomerular
Juxtaglomerular cell
Macula densa cell
Stromal cell → Interstitial cell → Telocytes
Simple epithelial cell → Podocyte
Kidney proximal tubule brush border cell
Reproductive system
Sertoli cell
Leydig cell
Granulosa cell
Peg cell
Germ cells (which migrate here primordially)
spermatozoon
ovum
Lateral plate mesoderm
Hematopoietic stem cell
Lymphoid
Lymphoblast
see lymphocytes
Myeloid
CFU-GEMM
see myeloid cells
Circulatory system
Endothelial progenitor cell
Endothelial colony forming cell
Endothelial stem cell
Angioblast/Mesoangioblast
Pericyte
Mural cell
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Early blastomeres can form what if isolated?
A. tumors
B. lesions
C. cancer
D. a complete embryo
Answer:
|
|
sciq-5461
|
multiple_choice
|
Exemplified by canada and alaska, what kind of climate has cool, short summers and long, cold winters, little precipitation, and abundant conifers?
|
[
"subarctic climate",
"temperate climate",
"tropical climate",
"droughts climate"
] |
A
|
Relevant Documents:
Document 0:::
A climograph is a graphical representation of a location's basic climate. Climographs display data for two variables: (a) monthly average temperature and (b) monthly average precipitation. These are useful tools to quickly describe a location's climate.
Representation
While temperature is typically visualized using a line, some climographs opt to visualize the data using a bar. This method's advantage allows the climograph to display the average range in temperature (average minimum and average maximum temperatures) rather than a simple monthly average.
Use
The patterns in a climograph describe not just a location's climate but also provide evidence for that climate's relative geographical location. For example, a climograph with a narrow range in temperature over the year might represent a location close to the equator, or alternatively a location adjacent to a large body of water exerting a moderating effect on the temperature range. Meanwhile, a wide range in annual temperature might suggest the opposite. We could also derive information about a site's ecological conditions through a climograph. For example, if precipitation is consistently low year-round, we might suggest the location reflects a desert; if there is a noticeable seasonal pattern to the precipitation, we might suggest the location experiences a monsoon season. When combining the temperature and precipitation patterns together, we have even better clues as to the local conditions. Despite this, a number of local factors contribute to the patterns observed in a particular place; therefore, a climograph is not a foolproof tool that captures all the geographic variation that might exist.
Document 1:::
Polar ecology is the relationship between plants and animals in a polar environment. Polar environments are in the Arctic and Antarctic regions. The Arctic is in the Northern Hemisphere and contains land and the islands that surround it. Antarctica is in the Southern Hemisphere and contains the land mass, surrounding islands and the ocean. Polar regions also contain the subantarctic and subarctic zones, which separate the polar regions from the temperate regions. Antarctica and the Arctic lie within the polar circles. The polar circles are imaginary lines shown on maps marking the areas that receive less solar radiation. These areas experience either sunlight (midnight sun) or darkness (polar night) 24 hours a day because of the Earth's tilt. Plants and animals in the polar regions are able to withstand living in harsh weather conditions but face environmental threats that limit their survival.
Climate
Polar climates are cold, windy and dry. Because of the lack of precipitation and low temperatures the Arctic and Antarctic are considered the world's largest deserts or Polar deserts. Much of the radiation from the sun that is received is reflected off the snow making the polar regions cold. When the radiation is reflected, the heat is also reflected. The polar regions reflect 89-90% of the sun radiation that the earth receives. And because Antarctica is closer to the sun at perihelion, it receives 7% more radiation than the Arctic. Also in the polar region, the atmosphere is thin. Because of this the UV radiation that gets to the atmosphere can cause fast sun tanning and snow blindness.
Polar regions are dry areas; there is very little precipitation due to the cold air. There are some times when the humidity may be high but the water vapor present in the air may be low. Wind is also strong in the polar region. Wind carries snow creating blizzard like conditions. Winds may also move small organisms or vegetation if it is present. The wind
Document 2:::
Climatic adaptation refers to adaptations of an organism that are triggered due to the patterns of variation of abiotic factors that determine a specific climate. Annual means, seasonal variation and daily patterns of abiotic factors are properties of a climate where organisms can be adapted to. Changes in behavior, physical structure, internal mechanisms and metabolism are forms of adaptation that is caused by climate properties. Organisms of the same species that occur in different climates can be compared to determine which adaptations are due to climate and which are influenced majorly by other factors. Climatic adaptations limits to adaptations that have been established, characterizing species that live within the specific climate. It is different from climate change adaptations which refers to the ability to adapt to gradual changes of a climate. Once a climate has changed, the climate change adaptation that led to the survival of the specific organisms as a species can be seen as a climatic adaptation. Climatic adaptation is constrained by the genetic variability of the species in question.
Climate patterns
The patterns of variation of abiotic factors determine a climate and thus climatic adaptation. There are many different climates around the world, each with its unique patterns. Because of this, the manner of climatic adaptation shows large differences between the climates. A subarctic climate, for instance, shows daylight time and temperature fluctuations as most important factors, while in rainforest climate, the most important factor is characterized by the stable high precipitation rate and high average temperature that doesn't fluctuate a lot. Humid continental climate is marked by seasonal temperature variances which commonly lead to seasonal climate adaptations. Because the variance of these abiotic factors differ depending on the type of climate, differences in the manner of climatic adaptation are expected.
Research
Research on climatic adaptat
Document 3:::
A temperate forest is a forest found between the tropical and boreal regions, located in the temperate zone. It is the second largest biome on our planet, covering 25% of the world's forest area, only behind the boreal forest, which covers about 33%. These forests cover both hemispheres at latitudes ranging from 25 to 50 degrees, wrapping the planet in a belt similar to that of the boreal forest. Due to its large size spanning several continents, there are several main types: deciduous, coniferous, mixed forest, and rainforest.
Climate
The climate of a temperate forest is highly variable depending on the location of the forest. For example, Los Angeles and Vancouver, Canada are both considered to be located in a temperate zone, however, Vancouver is located in a temperate rainforest, while Los Angeles is a relatively dry subtropical climate.
Types of temperate forest
Deciduous
They are found in Europe, East Asia, North America, and in some parts of South America.
Deciduous forests are composed mainly of broadleaf trees, such as maple and oak, that shed all their leaves during one season. They are typically found in three middle-latitude regions with temperate climates characterized by a winter season and year-round precipitation: eastern North America, western Eurasia and northeastern Asia.
Coniferous
Coniferous forests are composed of needle-leaved evergreen trees, such as pine or fir. Evergreen forests are typically found in regions with moderate climates. Boreal forests, however, are an exception as they are found in subarctic regions. Coniferous trees often have an advantage over broadleaf trees in harsher environments. Their leaves are typically hardier and longer lived but require more energy to grow.
Mixed
As the name implies, conifers and broadleaf trees grow in the same area. The main trees found in these forests in North America and Eurasia include fir, oak, ash, maple, birch, beech, poplar, elm and pine. Other plant species may include magnolia,
Document 4:::
Frost resistance is the ability of plants to survive cold temperatures. Generally, land plants of the northern hemisphere have higher frost resistance than those of the southern hemisphere. An example of a frost resistant plant is Drimys winteri which is more frost-tolerant than naturally occurring conifers and vessel-bearing angiosperms such as the Nothofagus that can be found in its range in southern South America.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Exemplified by Canada and Alaska, what kind of climate has cool, short summers and long, cold winters, little precipitation, and abundant conifers?
A. subarctic climate
B. temperate climate
C. tropical climate
D. droughts climate
Answer:
|
|
sciq-7458
|
multiple_choice
|
What is the most diverse and abundant group of organisms on earth, numbering in the millions of trillions?
|
[
"pests",
"pathogens",
"bacteria",
"viruses"
] |
C
|
Relevant Documents:
Document 0:::
Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology.
Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago.
Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad
Document 1:::
Microbial population biology is the application of the principles of population biology to microorganisms.
Distinguishing from other biological disciplines
Microbial population biology, in practice, is the application of population ecology and population genetics toward understanding the ecology and evolution of bacteria, archaebacteria, microscopic fungi (such as yeasts), additional microscopic eukaryotes (e.g., "protozoa" and algae), and viruses.
Microbial population biology also encompasses the evolution and ecology of community interactions (community ecology) between microorganisms, including microbial coevolution and predator-prey interactions. In addition, microbial population biology considers microbial interactions with more macroscopic organisms (e.g., host-parasite interactions), though strictly this should be more from the perspective of the microscopic rather than the macroscopic organism. A good deal of microbial population biology may be described also as microbial evolutionary ecology. On the other hand, typically microbial population biologists (unlike microbial ecologists) are less concerned with questions of the role of microorganisms in ecosystem ecology, which is the study of nutrient cycling and energy movement between biotic as well as abiotic components of ecosystems.
Microbial population biology can include aspects of molecular evolution or phylogenetics. Strictly, however, these emphases should be employed toward understanding issues of microbial evolution and ecology rather than as a means of understanding more universal truths applicable to both microscopic and macroscopic organisms. The microorganisms in such endeavors consequently should be recognized as organisms rather than simply as molecular or evolutionary reductionist model systems. Thus, the study of RNA in vitro evolution is not microbial population biology and nor is the in silico generation of phylogenies of otherwise non-microbial sequences, even if aspects of either may
Document 2:::
MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States.
Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students; the remainder require a subscription. The service is currently suspended with the message:
"Please check back with us in 2017".
External links
MicrobeLibrary
Microbiology
Document 3:::
The smallest organisms found on Earth can be determined according to various aspects of organism size, including volume, mass, height, length, or genome size.
Given the incomplete nature of scientific knowledge, it is possible that the smallest organism is undiscovered. Furthermore, there is some debate over the definition of life, and what entities qualify as organisms; consequently the smallest known organism (microorganism) is debatable.
Microorganisms
Obligate endosymbiotic bacteria
The genome of Nasuia deltocephalinicola, a symbiont of the European pest leafhopper, Macrosteles quadripunctulatus, consists of a circular chromosome of 112,031 base pairs.
The genome of Nanoarchaeum equitans is 491 Kbp nucleotides long.
Pelagibacter ubique
Pelagibacter ubique is one of the smallest known free-living bacteria, with a length of and an average cell diameter of . They also have the smallest free-living bacterium genome: 1.3 Mbp, 1354 protein genes, 35 RNA genes. They are one of the most common and smallest organisms in the ocean, with their total weight exceeding that of all fish in the sea.
Mycoplasma genitalium
Mycoplasma genitalium, a parasitic bacterium which lives in the primate bladder, waste disposal organs, genital, and respiratory tracts, is thought to be the smallest known organism capable of independent growth and reproduction. With a size of approximately 200 to 300 nm, M. genitalium is an ultramicrobacterium, smaller than other small bacteria, including rickettsia and chlamydia. However, the vast majority of bacterial strains have not been studied, and the marine ultramicrobacterium Sphingomonas sp. strain RB2256 is reported to have passed through an ultrafilter. A complicating factor is nutrient-downsized bacteria, bacteria that become much smaller due to a lack of available nutrients.
Nanoarchaeum
Nanoarchaeum equitans is a species of microbe in diameter. It was discovered in 2002 in a hydrothermal vent off the coast of Iceland by Karl Stet
Document 4:::
A supergroup, in evolutionary biology, is a large group of organisms that share one common ancestor and have important defining characteristics. It is an informal, mostly arbitrary rank in biological taxonomy that is often greater than phylum or kingdom, although some supergroups are also treated as phyla.
Eukaryotic supergroups
Since the decade of 2000's, the eukaryotic tree of life (abbreviated as eToL) has been divided into 5–8 major groupings called 'supergroups'. These groupings were established after the idea that only monophyletic groups should be accepted as ranks, as an alternative to the use of paraphyletic kingdom Protista. In the early days of the eToL six traditional supergroups were considered: Amoebozoa, Opisthokonta, "Excavata", Archaeplastida, "Chromalveolata" and Rhizaria. Since then, the eToL has been rearranged profoundly, and most of these groups were found as paraphyletic or lacked defining morphological characteristics that unite their members, which makes the 'supergroup' label more arbitrary.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the most diverse and abundant group of organisms on earth, numbering in the millions of trillions?
A. pests
B. pathogens
C. bacteria
D. viruses
Answer:
|
|
sciq-4353
|
multiple_choice
|
At equilibrium, reactants and what are equally abundant?
|
[
"minerals",
"results",
"proactives",
"products"
] |
D
|
Relevant Documents:
Document 0:::
Equilibrium chemistry is concerned with systems in chemical equilibrium. The unifying principle is that the free energy of a system at equilibrium is the minimum possible, so that the slope of the free energy with respect to the reaction coordinate is zero. This principle, applied to mixtures at equilibrium provides a definition of an equilibrium constant. Applications include acid–base, host–guest, metal–complex, solubility, partition, chromatography and redox equilibria.
Thermodynamic equilibrium
A chemical system is said to be in equilibrium when the quantities of the chemical entities involved do not and cannot change in time without the application of an external influence. In this sense a system in chemical equilibrium is in a stable state. The system at chemical equilibrium will be at a constant temperature, pressure or volume and a composition. It will be insulated from exchange of heat with the surroundings, that is, it is a closed system. A change of temperature, pressure (or volume) constitutes an external influence and the equilibrium quantities will change as a result of such a change. If there is a possibility that the composition might change, but the rate of change is negligibly slow, the system is said to be in a metastable state. The equation of chemical equilibrium can be expressed symbolically as
reactant(s) ⇌ product(s)
The sign means "are in equilibrium with". This definition refers to macroscopic properties. Changes do occur at the microscopic level of atoms and molecules, but to such a minute extent that they are not measurable and in a balanced way so that the macroscopic quantities do not change. Chemical equilibrium is a dynamic state in which forward and backward reactions proceed at such rates that the macroscopic composition of the mixture is constant. Thus, equilibrium sign symbolizes the fact that reactions occur in both forward and backward directions.
A steady state, on the other hand, is not necessarily an equilibrium state
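The equilibrium condition described above can be illustrated numerically: at equilibrium the reaction quotient Q equals the equilibrium constant K. The reaction, coefficients, and the value K = 50 below are hypothetical assumptions for illustration only:

```python
# Sketch: comparing the reaction quotient Q to the equilibrium constant K
# for a generic reaction aA + bB <=> cC + dD. All species names and numbers
# here are illustrative assumptions, not values from the text.

def reaction_quotient(conc, coeffs):
    """Q = product of [X]^nu over all species.
    conc: dict species -> concentration; coeffs: dict species -> signed
    stoichiometric coefficient (negative for reactants, positive for products)."""
    q = 1.0
    for species, nu in coeffs.items():
        q *= conc[species] ** nu
    return q

# Hypothetical reaction A + 2B <=> C with K = 50 at some fixed temperature.
coeffs = {"A": -1, "B": -2, "C": 1}
K = 50.0

conc = {"A": 0.10, "B": 0.20, "C": 0.20}
Q = reaction_quotient(conc, coeffs)  # 0.20 / (0.10 * 0.20**2) = 50.0

# At equilibrium Q == K: forward and reverse reactions proceed at equal
# rates, so the macroscopic composition no longer changes.
print(Q, "at equilibrium" if abs(Q - K) < 1e-9 else "not at equilibrium")
```

If Q < K the forward reaction is favored; if Q > K the reverse reaction is favored, consistent with the dynamic-equilibrium picture above.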
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
A. increases
B. decreases
C. stays the same
D. Impossible to tell / need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 3:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 4:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
At equilibrium, reactants and what are equally abundant?
A. minerals
B. results
C. proactives
D. products
Answer:
|
|
sciq-6455
|
multiple_choice
|
A non-bony skeleton that forms outside of the body is known as a what?
|
[
"excitoskeleton",
"exoskeleton",
"endoskeleton",
"exoplate"
] |
B
|
Relevant Documents:
Document 0:::
Work
He is an associate professor of anatomy, Department of Anatomy, Howard University College of Medicine (US). He was among the most cited/influential anatomists in 2019.
Books
Single author or co-author books
DIOGO, R. (2021). Meaning of Life, Human Nature and Delusions - How Tales about Love, Sex, Races, Gods and Progress Affect Us and Earth's Splendor. Springer (New York, US).
MONTERO, R., ADESOMO, A. & R. DIOGO (2021). On viruses, pandemics, and us: a developing story [De virus, pandemias y nosotros: una historia en desarollo]. Independently published, Tucuman, Argentina. 495 pages.
DIOGO, R., J. ZIERMANN, J. MOLNAR, N. SIOMAVA & V. ABDALA (2018). Muscles of Chordates: development, homologies and evolution. Taylor & Francis (Oxford, UK). 650 pages.
DIOGO, R., B. SHEARER, J. M. POTAU, J. F. PASTOR, F. J. DE PAZ, J. ARIAS-MARTORELL, C. TURCOTTE, A. HAMMOND, E. VEREECKE, M. VANHOOF, S. NAUWELAERTS & B. WOOD (2017). Photographic and descriptive musculoskeletal atlas of bonobos - with notes on the weight, attachments, variations, and innervation of the muscles and comparisons with common chimpanzees and humans. Springer (New York, US). 259 pages.
DIOGO, R. (2017). Evolution driven by organismal behavior: a unifying view of life, function, form, mismatches and trends. Springer
Document 1:::
Instruments used in Anatomy dissections are as follows:
Instrument list
Image gallery
Document 2:::
An exoskeleton (from Greek éxō "outer" and skeletós "skeleton") is an external skeleton that both supports the body shape and protects the internal organs of an animal, in contrast to an internal endoskeleton (e.g. that of a human) which is enclosed under other soft tissues. Some large, hard protective exoskeletons are known as "shells".
Examples of exoskeletons in animals include the arthropod exoskeleton shared by arthropods (insects, chelicerates, myriapods and crustaceans) and tardigrades, as well as the outer shell of certain sponges and the mollusc shell shared by snails, clams, tusk shells, chitons and nautilus. Some vertebrate animals, such as the turtle, have both an endoskeleton and a protective exoskeleton.
Role
Exoskeletons contain rigid and resistant components that fulfil a set of functional roles in many animals including protection, excretion, sensing, support, feeding, and acting as a barrier against desiccation in terrestrial organisms. Exoskeletons have roles in defence from pests and predators and in providing an attachment framework for musculature.
Arthropod exoskeletons contain chitin; the addition of calcium carbonate makes them harder and stronger, at the price of increased weight. Ingrowths of the arthropod exoskeleton known as apodemes serve as attachment sites for muscles. These structures are composed of chitin and are approximately six times stronger and twice the stiffness of vertebrate tendons. Similar to tendons, apodemes can stretch to store elastic energy for jumping, notably in locusts. Calcium carbonates constitute the shells of molluscs, brachiopods, and some tube-building polychaete worms. Silica forms the exoskeleton in the microscopic diatoms and radiolaria. One mollusc species, the scaly-foot gastropod, even uses the iron sulfides greigite and pyrite.
Some organisms, such as some foraminifera, agglutinate exoskeletons by sticking grains of sand and shell to their exterior. Contrary to a common misconception, echinoder
Document 3:::
Arthropods are covered with a tough, resilient integument or exoskeleton of chitin. Generally the exoskeleton will have thickened areas in which the chitin is reinforced or stiffened by materials such as minerals or hardened proteins. This happens in parts of the body where there is a need for rigidity or elasticity. Typically the mineral crystals, mainly calcium carbonate, are deposited among the chitin and protein molecules in a process called biomineralization. The crystals and fibres interpenetrate and reinforce each other, the minerals supplying the hardness and resistance to compression, while the chitin supplies the tensile strength. Biomineralization occurs mainly in crustaceans. In insects and arachnids, the main reinforcing materials are various proteins hardened by linking the fibres in processes called sclerotisation and the hardened proteins are called sclerotin. The dorsal tergum, ventral sternum, and the lateral pleura form the hardened plates or sclerites of a typical body segment.
In either case, in contrast to the carapace of a tortoise or the cranium of a vertebrate, the exoskeleton has little ability to grow or change its form once it has matured. Except in special cases, whenever the animal needs to grow, it moults, shedding the old skin after growing a new skin from beneath.
Microscopic structure
A typical arthropod exoskeleton is a multi-layered structure with four functional regions: epicuticle, procuticle, epidermis and basement membrane. Of these, the epicuticle is a multi-layered external barrier that, especially in terrestrial arthropods, acts as a barrier against desiccation. The strength of the exoskeleton is provided by the underlying procuticle, which is in turn secreted by the epidermis. Arthropod cuticle is a biological composite material, consisting of two main portions: fibrous chains of alpha-chitin within a matrix of silk-like and globular proteins, of which the best-known is the rubbery protein called resilin. The rel
Document 4:::
The following outline is provided as an overview of and topical guide to human anatomy:
Human anatomy – scientific study of the morphology of the adult human. It is subdivided into gross anatomy and microscopic anatomy. Gross anatomy (also called topographical anatomy, regional anatomy, or anthropotomy) is the study of anatomical structures that can be seen by unaided vision. Microscopic anatomy is the study of minute anatomical structures assisted with microscopes, and includes histology (the study of the organization of tissues), and cytology (the study of cells).
Essence of human anatomy
Human body
Anatomy
Branches of human anatomy
Gross anatomy- systemic or region-wise study of human body parts and organs. Gross anatomy encompasses cadaveric anatomy and osteology
Microscopic anatomy/histology
Cell biology (Cytology) & cytogenetics
Surface anatomy
Radiological anatomy
Developmental anatomy/embryology
Anatomy of the human body
The following list of human anatomical structures is based on the Terminologia Anatomica, the international standard for anatomical nomenclature. While the order is standardized, the hierarchical relationships in the TA are somewhat vague, and thus are open to interpretation.
General anatomy
Parts of human body
Head
Ear
Face
Forehead
Cheek
Chin
Eye
Nose
Nostril
Mouth
Lip
Tongue
Tooth
Neck
Torso
Thorax
Abdomen
Pelvis
Back
Pectoral girdle
Shoulder
Arm
Axilla
Elbow
Forearm
Wrist
Hand
Finger
Thumb
Palm
Lower limb
Pelvic girdle
Leg
Buttocks
Hip
Thigh
Knee
Calf
Foot
Ankle
Heel
Toe
Big toe
Sole
Cavities
Cranial cavity
Spinal cavity
Thoracic cavity
Abdominopelvic cavity
Abdominal cavity
Pelvic cavity
Planes, lines, and regions
Regions of head
Regions of neck
Anterior and lateral thoracic regions
Abdominal regions
Regions of back
Perineal regions
Regions of upper limb
Regions of lower limb
Bones
General terms
Bony part
Cortical bone
Compact bone
Spongy bone
Cartilaginous part
Membranous part
Periosteum
Perichondrium
Axial skele
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A non-bony skeleton that forms outside of the body is known as a what?
A. excitoskeleton
B. exoskeleton
C. endoskeleton
D. exoplate
Answer:
|
|
sciq-5389
|
multiple_choice
|
In addition to a nucleus what do eukaryotic cells have?
|
[
"protons",
"organelles",
"nutrons",
"electrons"
] |
B
|
Relevant Documents:
Document 0:::
Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure.
General characteristics
There are two types of cells: prokaryotes and eukaryotes.
Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles.
Prokaryotes
Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement).
Eukaryotes
Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. The cytoskeleton is made of fibers that support the str
Document 1:::
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, double stranded macromolecule that carries the hereditary information of the cell and found in all living cells; each cell carries chromosome(s) having a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
Document 2:::
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'.
Cells can acquire specialized functions and carry out various tasks such as replication, DNA repair, protein synthesis, and motility; they are capable of both specialization and movement.
Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms.
The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology.
Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago.
Discovery
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as i
Document 3:::
The nucleoplasm, also known as karyoplasm, is the type of protoplasm that makes up the cell nucleus, the most prominent organelle of the eukaryotic cell. It is enclosed by the nuclear envelope, also known as the nuclear membrane. The nucleoplasm resembles the cytoplasm of a eukaryotic cell in that it is a gel-like substance found within a membrane, although the nucleoplasm only fills out the space in the nucleus and has its own unique functions. The nucleoplasm suspends structures within the nucleus that are not membrane-bound and is responsible for maintaining the shape of the nucleus. The structures suspended in the nucleoplasm include chromosomes, various proteins, nuclear bodies, the nucleolus, nucleoporins, nucleotides, and nuclear speckles.
The soluble, liquid portion of the nucleoplasm is called the karyolymph, nucleosol, or nuclear hyaloplasm.
History
The existence of the nucleus, including the nucleoplasm, was first documented as early as 1682 by the Dutch microscopist Leeuwenhoek and was later described and drawn by Franz Bauer. However, the cell nucleus was not named and described in detail until Robert Brown's presentation to the Linnean Society in 1831.
The nucleoplasm, while described by Bauer and Brown, was not specifically isolated as a separate entity until its naming in 1882 by Polish-German scientist Eduard Strasburger, one of the most famous botanists of the 19th century, and the first person to discover mitosis in plants.
Role
Many important cell functions take place in the nucleus, more specifically in the nucleoplasm. The main function of the nucleoplasm is to provide the proper environment for essential processes that take place in the nucleus, serving as the suspension substance for all organelles inside the nucleus, and storing the structures that are used in these processes. 34% of proteins encoded in the human genome are ones that localize to the nucleoplasm. These proteins take part in RNA transcription and gene regulation in the n
Document 4:::
This lecture, named in memory of Keith R. Porter, is presented to an eminent cell biologist each year at the ASCB Annual Meeting. The ASCB Program Committee and the ASCB President recommend the Porter Lecturer to the Porter Endowment each year.
Lecturers
Source: ASCB
See also
List of biology awards
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In addition to a nucleus what do eukaryotic cells have?
A. protons
B. organelles
C. nutrons
D. electrons
Answer:
|
|
sciq-6612
|
multiple_choice
|
What is defined as maintaining a stable internal environment?
|
[
"peristalsis",
"homeostasis",
"ketosis",
"consciousness"
] |
B
|
Relevant Documents:
Document 0:::
Ecological competence is a term that has several different meanings that are dependent on the context it is used. The term "Ecological competence" can be used in a microbial sense, and it can be used in a sociological sense.
Microbiology
Ecological competence is the ability of an organism, often a pathogen, to survive and compete in new habitats. In the case of plant pathogens, it is also their ability to survive between growing seasons. For example, peanut clump virus can survive in the spores of its fungal vector until a new growing season begins and it can proceed to infect its primary host again. If a pathogen does not have ecological competence it is likely to become extinct. Bacteria and other pathogens can increase their ecological competence by creating a micro-niche, or a highly specialized environment that only they can survive in. This in turn will increase plasmid stability. Increased plasmid stability leads to a higher ecological competence due to added spatial organization and regulated cell protection.
Sociology
Ecological competence in a sociological sense is based around the relationship that humans have formed with the environment. It is often important in certain careers that will have a drastic impact on the surrounding ecosystem. A specific example is engineers working around and planning mining operations, due to the possible negative effects it can have on the surrounding environment. Ecological competence is especially important at the managerial level so that managers may understand society's risk to nature. These risks are learned through specific ecological knowledge so that the environment can be better protected in the future.
See also
Cultural ecology
Environmental education
Sustainable development
Ecological relationship
Document 1:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 2:::
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research.
Americas
Human Biology major at Stanford University, Palo Alto (since 1970)
Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government.
Human and Social Biology (Caribbean)
Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certificate (CSEC), which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on the structure and functioning (anatomy, physiology, biochemistry) of the human body and its relevance to human health, with Caribbean-specific experience. The syllabus is organized under five main sections: living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, and the impact of human activities on the environment.
Human Biology Program at University of Toronto
The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia
BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002)
BSc (honours) Human Biology at AIIMS (New
Document 3:::
A glossary of terms relating to systems theory.
A
Adaptive capacity: An important part of the resilience of systems in the face of a perturbation, helping to minimise loss of function in individual human, and collective social and biological systems.
Allopoiesis: The process whereby a system produces something other than the system itself.
Allostasis: The process of achieving stability, or homeostasis, through physiological or behavioral change.
Autopoiesis: The process by which a system regenerates itself through the self-reproduction of its own elements and of the network of interactions that characterize them. An autopoietic system renews, repairs, and replicates or reproduces itself in a flow of matter and energy. Note: from a strictly Maturanian point of view, autopoiesis is an essential property of biological/living systems.
B
Black box: A technical term for a device or system or object when it is viewed primarily in terms of its input and output characteristics, without observing or describing its internal structure or behaviour.
Boundaries: The parametric conditions, often vague, always subjectively stipulated, that delimit and define a system and set it apart from its environment.
C
Cascading failure: Failure in a system of interconnected parts, where the service provided depends on the operation of a preceding part, and the failure of a preceding part can trigger the failure of successive parts.
Closed system: A system which can exchange energy (as heat or work), but not matter, with its surroundings.
Complexity: A complex system is characterised by components that interact in multiple ways and follow local rules. A complicated system is characterised by its layers.
Culture: The result of individual learning processes that distinguish one social group of higher animals from another. In humans culture is the set of interrelated concepts, products and activities through which humans group themselves, interact with each other, and become aware o
Document 4:::
Organizational ecology (also organizational demography and the population ecology of organizations) is a theoretical and empirical approach in the social sciences that is considered a sub-field of organizational studies. Organizational ecology utilizes insights from biology, economics, and sociology, and employs statistical analysis to try to understand the conditions under which organizations emerge, grow, and die.
The ecology of organizations is divided into three levels, the community, the population, and the organization. The community level is the functionally integrated system of interacting populations. The population level is the set of organizations engaged in similar activities. The organization level focuses on the individual organizations (some research further divides organizations into individual member and sub-unit levels).
What is generally referred to as organizational ecology in research is more accurately population ecology, focusing on the second level.
Development
Wharton School researcher William Evan called the population level the organization-set, and focused on the interrelations of individual organizations within the population as early as 1966. However, prior to the mid-1970s, the majority of organizational studies research focused on adaptive change in organizations (See also adaptive management and adaptive performance). The ecological approach moved focus to the environmental selection processes that affect organizations.
In 1976, Eric Trist defined population ecology as "the study of the organizational field created by a number of organizations whose interrelations compose a system at the level of the whole field". He also advocated for organizational studies research to focus on populations and individual organizations as part of open rather than closed systems that have both bureaucratic (internal) regulation and ecological (community environment) regulation (see also Open and closed systems in social science).
The first ex
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is defined as maintaining a stable internal environment?
A. peristalsis
B. homeostasis
C. ketosis
D. consciousness
Answer:
|
|
ai2_arc-35
|
multiple_choice
|
Which celestial object listed below has the greatest density?
|
[
"a planet",
"a comet",
"a nebula",
"a neutron star"
] |
D
|
Relevant Documents:
Document 0:::
A mass deficit is the amount of mass (in stars) that has been removed from the center of a galaxy, presumably by the action of a binary supermassive black hole.
The density of stars increases toward the center in most galaxies. In small galaxies, this increase continues into the very center. In large galaxies, there is usually a "core", a region near the center where the density is constant or slowly rising. The size of the core – the "core radius" – can be a few hundred parsecs in large elliptical galaxies.
The greatest observed stellar cores reach 3.2 to 5.7 kiloparsecs in radius.
It is believed that cores are produced by binary supermassive black holes (SMBHs). Binary SMBHs form during the merger of two galaxies. If a star passes near the massive binary, it will be ejected, by a process called the gravitational slingshot. This ejection continues until most of the stars near the center of the galaxy have been removed. The result is a low-density core. Such cores are ubiquitous in giant elliptical galaxies.
The mass deficit is defined as the amount of mass that was removed in creating the core. Mathematically, the mass deficit is defined as
M_{\rm def} = 4\pi \int_0^{R_c} \left[\rho_i(r) - \rho(r)\right] r^2 \, dr,
where ρi is the original density, ρ is the observed density, and Rc is the core radius. In practice, the core-Sersic model can be used to help quantify the deficits.
Observed mass deficits are typically in the range of one to a few times the mass of the central SMBH, and observed core radii are comparable to the influence radii of the central SMBH. These properties are consistent with what is predicted in theoretical models of core formation and lend support to the hypothesis that all bright galaxies once contained binary SMBHs at their centers.
It is not known whether most galaxies still contain massive binaries, or whether the two black holes have coalesced. Both possibilities are consistent with the presence of mass deficits.
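The mass-deficit integral above can be evaluated numerically. The sketch below uses a deliberately toy setup — a cuspy original profile ρ_i(r) = 1/r flattened to a constant observed core inside R_c = 1, in arbitrary units — so the profiles and numbers are illustrative, not from the source:

```python
# Toy numerical evaluation of the mass-deficit integral
# M_def = integral_0^{R_c} 4*pi*r^2 [rho_i(r) - rho(r)] dr.
# Profiles and units are invented for illustration only.
import math

R_c = 1.0
rho_i = lambda r: 1.0 / r          # original (cuspy) density
rho = lambda r: rho_i(R_c)         # observed flat core inside R_c

# Midpoint rule over [0, R_c]
n = 100_000
h = R_c / n
m_def = sum(
    4.0 * math.pi * ((i + 0.5) * h) ** 2
    * (rho_i((i + 0.5) * h) - rho((i + 0.5) * h))
    for i in range(n)
) * h
print(round(m_def, 3))  # analytic value for this toy case: 2*pi/3 ≈ 2.094
```

For this toy cusp the integral reduces to 4π∫(r − r²)dr = 2π/3, which the quadrature reproduces.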
Document 1:::
A brightest cluster galaxy (BCG) is defined as the brightest galaxy in a cluster of galaxies. BCGs include the most massive galaxies in the universe. They are generally elliptical galaxies which lie close to the geometric and kinematical center of their host galaxy cluster, hence at the bottom of the cluster potential well. They are also generally coincident with the peak of the cluster X-ray emission.
Formation scenarios for BCGs include:
Cooling flow—Star formation from the central cooling flow in high density cooling centers of X-ray cluster halos.
The study of accretion populations in BCGs has cast doubt over this theory and astronomers have seen no evidence of cooling flows in radiative cooling clusters. The two remaining theories exhibit healthier prospects.
Galactic cannibalism—Galaxies sink to the center of the cluster due to dynamical friction and tidal stripping.
Galactic merger—Rapid galactic mergers between several galaxies take place during cluster collapse.
It is possible to differentiate the cannibalism model from the merging model by considering the formation period of the BCGs. In the cannibalism model, there are numerous small galaxies present in the evolved cluster, whereas in the merging model, a hierarchical cosmological model is expected due to the collapse of clusters. It has been shown that the orbit decay of cluster galaxies is not effective enough to account for the growth of BCGs.
The merging model is now generally accepted as the most likely one, but recent observations are at odds with some of its predictions. For example, it has been found that the stellar mass of BCGs was assembled much earlier than the merging model predicts.
BCGs are divided into various classes of galaxies: giant ellipticals (gE), D galaxies and cD galaxies. cD and D galaxies both exhibit an extended diffuse envelope surrounding an elliptical-like nucleus akin to regular elliptical galaxies. The light profiles of BCGs are often described by a Sersic surface
Document 2:::
Astrophysics is a science that employs the methods and principles of physics and chemistry in the study of astronomical objects and phenomena. As one of the founders of the discipline, James Keeler, said, Astrophysics "seeks to ascertain the nature of the heavenly bodies, rather than their positions or motions in space–what they are, rather than where they are." Among the subjects studied are the Sun (solar physics), other stars, galaxies, extrasolar planets, the interstellar medium and the cosmic microwave background. Emissions from these objects are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, astrophysicists apply concepts and methods from many disciplines of physics, including classical mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics.
In practice, modern astronomical research often involves a substantial amount of work in the realms of theoretical and observational physics. Some areas of study for astrophysicists include their attempts to determine the properties of dark matter, dark energy, black holes, and other celestial bodies; and the origin and ultimate fate of the universe. Topics also studied by theoretical astrophysicists include Solar System formation and evolution; stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity, special relativity, quantum and physical cosmology, including string cosmology and astroparticle physics.
History
Astronomy is an ancient science, long separated from the study of terrestrial physics. In the Aristotelian worldview, bodies in the sky appeared to be unchanging spheres whose only motion was uniform motion in a circle, while the earthl
Document 3:::
The Plummer model or Plummer sphere is a density law that was first used by H. C. Plummer to fit observations of globular clusters. It is now often used as toy model in N-body simulations of stellar systems.
Description of the model
The Plummer 3-dimensional density profile is given by
\rho(r) = \frac{3M}{4\pi a^3}\left(1 + \frac{r^2}{a^2}\right)^{-5/2},
where M is the total mass of the cluster, and a is the Plummer radius, a scale parameter that sets the size of the cluster core. The corresponding potential is
\Phi(r) = -\frac{GM}{\sqrt{r^2 + a^2}},
where G is Newton's gravitational constant. The velocity dispersion is
\sigma^2(r) = \frac{GM}{6\sqrt{r^2 + a^2}}.
The isotropic distribution function is proportional to (-E)^{7/2} for E < 0, and vanishes otherwise, where E is the specific energy.
Properties
The mass enclosed within radius r is given by
M(r) = M \frac{r^3}{\left(r^2 + a^2\right)^{3/2}}.
Many other properties of the Plummer model are described in Herwig Dejonghe's comprehensive article.
Core radius R_c, where the surface density drops to half its central value, is at R_c = a\sqrt{\sqrt{2} - 1} \approx 0.64\,a.
Half-mass radius is r_h = a\left(2^{2/3} - 1\right)^{-1/2} \approx 1.3\,a.
Virial radius is r_V = \frac{16}{3\pi}\,a.
The 2D surface density is:
\Sigma(R) = \frac{M a^2}{\pi \left(a^2 + R^2\right)^2},
and hence the 2D projected mass profile is:
M(R) = M \frac{R^2}{a^2 + R^2}.
In astronomy, it is convenient to define the 2D half-mass radius R_{1/2}, which is the radius where the 2D projected mass profile is half of the total mass: M(R_{1/2}) = M/2.
For the Plummer profile: R_{1/2} = a.
The escape velocity at any point is
v_{\rm esc}(r) = \sqrt{-2\Phi(r)} = \sqrt{\frac{2GM}{\sqrt{r^2 + a^2}}}.
For bound orbits, the radial turning points of the orbit is characterized by specific energy and specific angular momentum are given by the positive roots of the cubic equation
where , so that . This equation has three real roots for : two positive and one negative, given that , where is the specific angular momentum for a circular orbit for the same energy. Here can be calculated from single real root of the discriminant of the cubic equation, which is itself another cubic equation
where underlined parameters are dimensionless in Henon units defined as , , and .
Applications
The Plummer model comes closest to representing the observed density profiles of star clusters, although the rapid falloff of the density at large radii () is not a good description of these systems.
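As a sanity check on the Plummer-model properties above, a short Python sketch (assuming Henon-style units G = M = a = 1; the helper names are invented for illustration) recovers the half-mass radius and escape velocity numerically:

```python
# Sketch: numerically checking a few Plummer-model properties
# in units where G = M = a = 1. Function names are illustrative.
import math

def plummer_density(r, M=1.0, a=1.0):
    """3D density rho(r) = 3M / (4 pi a^3) * (1 + r^2/a^2)^(-5/2)."""
    return 3.0 * M / (4.0 * math.pi * a**3) * (1.0 + (r / a) ** 2) ** -2.5

def enclosed_mass(r, M=1.0, a=1.0):
    """Mass within radius r: M r^3 / (r^2 + a^2)^(3/2)."""
    return M * r**3 / (r**2 + a**2) ** 1.5

def escape_velocity(r, M=1.0, a=1.0, G=1.0):
    """v_esc(r) = sqrt(2 G M / sqrt(r^2 + a^2))."""
    return math.sqrt(2.0 * G * M / math.sqrt(r**2 + a**2))

# Half-mass radius: solve enclosed_mass(r_h) = M/2 by bisection.
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if enclosed_mass(mid) < 0.5:
        lo = mid
    else:
        hi = mid
r_half = 0.5 * (lo + hi)
print(round(r_half, 3))  # → 1.305, matching a / sqrt(2**(2/3) - 1)
```

The bisection result agrees with the closed form r_h = a (2^{2/3} − 1)^{−1/2} ≈ 1.305 a.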
Document 4:::
This article is a list of notable unsolved problems in astronomy. Some of these problems are theoretical, meaning that existing theories may be incapable of explaining certain observed phenomena or experimental results. Others are experimental, meaning that experiments necessary to test proposed theory or investigate a phenomenon in greater detail have not yet been performed. Some pertain to unique events or occurrences that have not repeated themselves and whose causes remain unclear.
Planetary astronomy
Our solar system
Orbiting bodies and rotation:
Are there any non-dwarf planets beyond Neptune?
Why do extreme trans-Neptunian objects have elongated orbits?
Rotation rate of Saturn:
Why does the magnetosphere of Saturn rotate at a rate close to that at which the planet's clouds rotate?
What is the rotation rate of Saturn's deep interior?
Satellite geomorphology:
What is the origin of the chain of high mountains that closely follows the equator of Saturn's moon, Iapetus?
Are the mountains the remnant of hot and fast-rotating young Iapetus?
Are the mountains the result of material (either from the rings of Saturn or its own ring) that over time collected upon the surface?
Extra-solar
How common are Solar System-like planetary systems? Some observed planetary systems contain Super-Earths and Hot Jupiters that orbit very close to their stars. Systems with Jupiter-like planets in Jupiter-like orbits appear to be rare. There are several possibilities why Jupiter-like orbits are rare, including that data is lacking or the grand tack hypothesis.
Stellar astronomy and astrophysics
Solar cycle:
How does the Sun generate its periodically reversing large-scale magnetic field?
How do other Sol-like stars generate their magnetic fields, and what are the similarities and differences between stellar activity cycles and that of the Sun?
What caused the Maunder Minimum and other grand minima, and how does the solar cycle recover from a minimum state?
Coronal heat
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which celestial object listed below has the greatest density?
A. a planet
B. a comet
C. a nebula
D. a neutron star
Answer:
|
|
sciq-9061
|
multiple_choice
|
Change in what equals the average net external force multiplied by the time this force acts?
|
[
"momentum",
"height",
"rate",
"lag"
] |
A
|
Relevant Documents:
Document 0:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 1:::
Advanced Placement (AP) Physics C: Mechanics (also known as AP Mechanics) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a one-semester calculus-based university course in mechanics. The content of Physics C: Mechanics overlaps with that of AP Physics 1, but Physics 1 is algebra-based, while Physics C is calculus-based. Physics C: Mechanics may be combined with its electricity and magnetism counterpart to form a year-long course that prepares for both exams.
Course content
Intended to be equivalent to an introductory college course in mechanics for physics or engineering majors, the course modules are:
Kinematics
Newton's laws of motion
Work, energy and power
Systems of particles and linear momentum
Circular motion and rotation
Oscillations and gravitation.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a Calculus I class.
This course is often compared to AP Physics 1: Algebra Based for its similar course material involving kinematics, work, motion, forces, rotation, and oscillations. However, AP Physics 1: Algebra Based lacks concepts found in Calculus I, like derivatives or integrals.
This course may be combined with AP Physics C: Electricity and Magnetism to make a unified Physics C course that prepares for both exams.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Mechanics is separate from the AP examination for AP Physics C: Electricity and Magnetism. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday aftern
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
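For reference, the answer to the example follows from the standard textbook relation for a reversible adiabatic process of an ideal gas (no heat exchange, so the first law gives dU = -P dV):

```latex
% Reversible adiabatic (isentropic) process of an ideal gas:
TV^{\gamma-1} = \text{const}, \qquad \gamma = \frac{C_p}{C_v} > 1
% Expansion means V increases; since \gamma - 1 > 0, T must decrease,
% so "decreases" is the correct choice.
```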
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
As described by the third of Newton's laws of motion of classical mechanics, all forces occur in pairs such that if one object exerts a force on another object, then the second object exerts an equal and opposite reaction force on the first. The third law is also more generally stated as: "To every action there is always opposed an equal reaction: or the mutual actions of two bodies upon each other are always equal, and directed to contrary parts." The attribution of which of the two forces is the action and which is the reaction is arbitrary. Either of the two can be considered the action, while the other is its associated reaction.
Examples
Interaction with ground
When something is exerting force on the ground, the ground will push back with equal force in the opposite direction. In certain fields of applied physics, such as biomechanics, this force by the ground is called 'ground reaction force'; the force by the object on the ground is viewed as the 'action'.
When someone wants to jump, he or she exerts additional downward force on the ground ('action'). Simultaneously, the ground exerts upward force on the person ('reaction'). If this upward force is greater than the person's weight, this will result in upward acceleration. When these forces are perpendicular to the ground, they are also called a normal force.
Likewise, the spinning wheels of a vehicle attempt to slide backward across the ground. If the ground is not too slippery, this results in a pair of friction forces: the 'action' by the wheel on the ground in backward direction, and the 'reaction' by the ground on the wheel in forward direction. This forward force propels the vehicle.
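As a rough numerical sketch of the jumping example above (all numbers are invented for illustration), Newton's second law converts the action–reaction pair into an upward acceleration:

```python
# Sketch with assumed values: upward acceleration during a jump's push-off.
# The ground's reaction force N exceeds the person's weight m*g, so the
# net force m*a = N - m*g points upward (Newton's second and third laws).

g = 9.81     # gravitational acceleration, m/s^2
m = 70.0     # person's mass, kg (assumed)
N = 1000.0   # ground reaction force during push-off, N (assumed)

weight = m * g           # downward gravitational force on the person
net_force = N - weight   # upward net force while pushing off
a = net_force / m        # resulting upward acceleration

print(f"net upward force: {net_force:.1f} N, acceleration: {a:.2f} m/s^2")
```

If N were merely equal to the weight, the net force and acceleration would be zero, which is why a jump requires pushing harder than standing still.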
Gravitational forces
The Earth, among other planets, orbits the Sun because the Sun exerts a gravitational pull that acts as a centripetal force, holding the Earth to it, which would otherwise go shooting off into space. If the Sun's pull is considered an action, then Earth simultaneously exerts a reaction as a gravi
Document 4:::
Velocity is the speed in combination with the direction of motion of an object. Velocity is a fundamental concept in kinematics, the branch of classical mechanics that describes the motion of bodies.
Velocity is a physical vector quantity: both magnitude and direction are needed to define it. The scalar absolute value (magnitude) of velocity is called speed, a coherent derived quantity measured in the SI (metric system) in metres per second (m/s or m⋅s−1). For example, "5 metres per second" is a scalar, whereas "5 metres per second east" is a vector. If there is a change in speed, direction or both, then the object is said to be undergoing an acceleration.
Constant velocity vs acceleration
To have a constant velocity, an object must have a constant speed in a constant direction. Constant direction constrains the object to motion in a straight path; thus, a constant velocity means motion in a straight line at a constant speed.
For example, a car moving at a constant 20 kilometres per hour in a circular path has a constant speed, but does not have a constant velocity because its direction changes. Hence, the car is considered to be undergoing an acceleration.
Difference between speed and velocity
While the terms speed and velocity are often colloquially used interchangeably to connote how fast an object is moving, in scientific terms they are different. Speed, the scalar magnitude of a velocity vector, denotes only how fast an object is moving, while velocity indicates both an object's speed and direction.
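The distinction can be made concrete with a small sketch (component values are arbitrary): speed is the magnitude of the velocity vector, discarding direction.

```python
import math

# Speed as the magnitude of a 2-D velocity vector.
# A velocity of (3 m/s east, 4 m/s north) has speed 5 m/s,
# but two different velocities can share that same speed.
vx, vy = 3.0, 4.0            # velocity components, m/s (assumed values)
speed = math.hypot(vx, vy)   # scalar magnitude, sqrt(vx^2 + vy^2)
print(speed)                 # 5.0
```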
Equation of motion
Average velocity
Velocity is defined as the rate of change of position with respect to time, which may also be referred to as the instantaneous velocity to emphasize the distinction from the average velocity. In some applications the average velocity of an object might be needed, that is to say, the constant velocity that would provide the same resultant displacement as a variable velocity in the same time interval, , over some
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Change in what equals the average net external force multiplied by the time this force acts?
A. momentum
B. height
C. rate
D. lag
Answer:
|
|
sciq-851
|
multiple_choice
|
When we move down a group of elements on the periodic table, what happens to their electronegativity?
|
[
"it decreases",
"it increases",
"it stays the same",
"it doubles"
] |
A
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 2:::
There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework.
AP Physics 1 and 2
AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge.
AP Physics 1
AP Physics 1 covers Newtonian mechanics, including:
Unit 1: Kinematics
Unit 2: Dynamics
Unit 3: Circular Motion and Gravitation
Unit 4: Energy
Unit 5: Momentum
Unit 6: Simple Harmonic Motion
Unit 7: Torque and Rotational Motion
Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2.
AP Physics 2
AP Physics 2 covers the following topics:
Unit 1: Fluids
Unit 2: Thermodynamics
Unit 3: Electric Force, Field, and Potential
Unit 4: Electric Circuits
Unit 5: Magnetism and Electromagnetic Induction
Unit 6: Geometric and Physical Optics
Unit 7: Quantum, Atomic, and Nuclear Physics
AP Physics C
From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single
Document 3:::
Test equating traditionally refers to the statistical process of determining comparable scores on different forms of an exam. It can be accomplished using either classical test theory or item response theory.
In item response theory, equating is the process of placing scores from two or more parallel test forms onto a common score scale. The result is that scores from two different test forms can be compared directly, or treated as though they came from the same test form. When the tests are not parallel, the general process is called linking. It is the process of equating the units and origins of two scales on which the abilities of students have been estimated from results on different tests. The process is analogous to equating degrees Fahrenheit with degrees Celsius by converting measurements from one scale to the other. The determination of comparable scores is a by-product of equating that results from equating the scales obtained from test results.
Purpose
Suppose that Dick and Jane both take a test to become licensed in a certain profession. Because the high stakes (you get to practice the profession if you pass the test) may create a temptation to cheat, the organization that oversees the test creates two forms. If we know that Dick scored 60% on form A and Jane scored 70% on form B, do we know for sure which one has a better grasp of the material? What if form A is composed of very difficult items, while form B is relatively easy? Equating analyses are performed to address this very issue, so that scores are as fair as possible.
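One simple classical approach to the Dick-and-Jane problem is linear (mean-sigma) equating, sketched below with invented score data; operational equating designs are considerably more involved.

```python
from statistics import mean, stdev

# Linear (mean-sigma) equating sketch: map a form-A raw score onto
# form B's scale by matching the means and standard deviations of the
# two score distributions. All scores below are hypothetical.

form_a = [52, 60, 55, 63, 58, 61, 57, 54]   # invented form-A scores
form_b = [62, 70, 66, 74, 68, 72, 67, 65]   # invented form-B scores

def equate_to_b(x):
    """Map a form-A raw score x onto form B's scale (mean-sigma method)."""
    slope = stdev(form_b) / stdev(form_a)
    return mean(form_b) + slope * (x - mean(form_a))

# Dick's 60 on form A can now be compared directly with Jane's 70 on form B.
print(round(equate_to_b(60), 2))
```

By construction the mean of form A maps exactly onto the mean of form B, which is the sense in which the two scales are aligned.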
Equating in item response theory
In item response theory, person "locations" (measures of some quality being assessed by a test) are estimated on an interval scale; i.e., locations are estimated in relation to a unit and origin. It is common in educational assessment to employ tests in order to assess different groups of students with the intention of establishing a common scale by equating the origins, and when appropri
Document 4:::
In chemistry and physics, the iron group refers to elements that are in some way related to iron; mostly in period (row) 4 of the periodic table. The term has different meanings in different contexts.
In chemistry, the term is largely obsolete, but it often means iron, cobalt, and nickel, also called the iron triad; or, sometimes, other elements that resemble iron in some chemical aspects.
In astrophysics and nuclear physics, the term is still quite common, and it typically means those three plus chromium and manganese—five elements that are exceptionally abundant, both on Earth and elsewhere in the universe, compared to their neighbors in the periodic table. Titanium and vanadium are also produced in Type Ia supernovae.
General chemistry
In chemistry, "iron group" used to refer to iron and the next two elements in the periodic table, namely cobalt and nickel. These three comprised the "iron triad". They are the top elements of groups 8, 9, and 10 of the periodic table; or the top row of "group VIII" in the old (pre-1990) IUPAC system, or of "group VIIIB" in the CAS system. These three metals (and the three of the platinum group, immediately below them) were set aside from the other elements because they have obvious similarities in their chemistry, but are not obviously related to any of the other groups. The iron group and its alloys exhibit ferromagnetism.
The similarities in chemistry were noted as one of Döbereiner's triads and by Adolph Strecker in 1859. Indeed, Newlands' "octaves" (1865) were harshly criticized for separating iron from cobalt and nickel. Mendeleev stressed that groups of "chemically analogous elements" could have similar atomic weights as well as atomic weights which increase by equal increments, both in his original 1869 paper and his 1889 Faraday Lecture.
Analytical chemistry
In the traditional methods of qualitative inorganic analysis, the iron group consists of those cations which
have soluble chlorides; and
are not precipitated
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
When we move down a group of elements on the periodic table, what happens to their electronegativity?
A. it decreases
B. it increases
C. it stays the same
D. it doubles
Answer:
|
|
sciq-8456
|
multiple_choice
|
In a tropical rainforest, where are ferns common?
|
[
"forrest floor",
"canopy",
"understory",
"emergent"
] |
C
|
Relevant Documents:
Document 0:::
The following is a list of vascular plants, bryophytes and lichens which are constant species in one or more community of the British National Vegetation Classification system.
Vascular plants
Grasses
Sedges and rushes
Trees
Other dicotyledons
Other monocotyledons
Ferns
Clubmosses
Bryophytes
Mosses
Liverworts
Lichens
British National Vegetation Classification
Lists of biota of the United Kingdom
British National Vegetation Classification, constant
Document 1:::
The tree ferns are arborescent (tree-like) ferns that grow with a trunk elevating the fronds above ground level, making them trees. Many extant tree ferns are members of the order Cyatheales, to which belong the families Cyatheaceae (scaly tree ferns), Dicksoniaceae, Metaxyaceae, and Cibotiaceae. It is estimated that Cyatheales originated in the early Jurassic, and is the third group of ferns known to have given rise to tree-like forms. The others are the extinct Tempskya of uncertain position, and Osmundales where the extinct Guaireaceae and some members of Osmundaceae also grew into trees. In addition there were the Psaroniaceae and Tietea in the Marattiales, which is the sister group to most living ferns including Cyatheales.
Other ferns which are also tree ferns, are Leptopteris and Todea in the family Osmundaceae, which can achieve short trunks under a metre tall. Fern species with short trunks in the genera Blechnum, Cystodium and Sadleria from the order Polypodiales, and smaller members of Cyatheales like Calochlaena, Cnemedaria, Culcita (mountains only tree fern), Lophosoria and Thyrsopteris are also considered tree ferns.
Range
Tree ferns are found growing in tropical and subtropical areas worldwide, as well as cool to temperate rainforests in Australia, New Zealand and neighbouring regions (e.g. Lord Howe Island, etc.). Like all ferns, tree ferns reproduce by means of spores formed on the undersides of the fronds.
Description
The fronds of tree ferns are usually very large and multiple-pinnate. Their trunk is actually a vertical and modified rhizome, and woody tissue is absent. To add strength, there are deposits of lignin in the cell walls and the lower part of the stem is reinforced with thick, interlocking mats of tiny roots. If the crown of Dicksonia antarctica (the most common species in gardens) is damaged, it will inevitably die because that is where all the new growth occurs. But other clump-forming tree fern species, such as D. squarrosa and D
Document 2:::
Plant ecology is a subdiscipline of ecology that studies the distribution and abundance of plants, the effects of environmental factors upon the abundance of plants, and the interactions among plants and between plants and other organisms. Examples of these are the distribution of temperate deciduous forests in North America, the effects of drought or flooding upon plant survival, and competition among desert plants for water, or effects of herds of grazing animals upon the composition of grasslands.
A global overview of the Earth's major vegetation types is provided by O.W. Archibold. He recognizes 11 major vegetation types: tropical forests, tropical savannas, arid regions (deserts), Mediterranean ecosystems, temperate forest ecosystems, temperate grasslands, coniferous forests, tundra (both polar and high mountain), terrestrial wetlands, freshwater ecosystems and coastal/marine systems. This breadth of topics shows the complexity of plant ecology, since it includes plants from floating single-celled algae up to large canopy forming trees.
One feature that defines plants is photosynthesis. Photosynthesis is the series of chemical reactions that create glucose and oxygen, which are vital for plant life. One of the most important aspects of plant ecology is the role plants have played in creating the oxygenated atmosphere of Earth, an event that occurred some 2 billion years ago. It can be dated by the deposition of banded iron formations, distinctive sedimentary rocks with large amounts of iron oxide. At the same time, plants began removing carbon dioxide from the atmosphere, thereby initiating the process of controlling Earth's climate. A long-term trend of the Earth has been toward increasing oxygen and decreasing carbon dioxide, and many other events in the Earth's history, like the first movement of life onto land, are likely tied to this sequence of events.
One of the early classic books on plant ecology was written by J.E. Weaver and F.E. Clements. It
Document 3:::
Plant functional types (PFTs) refer to a grouping or classification system often used by ecologists and climatologists to classify plant species based on their similar functions and performance in an ecosystem. It is a way to simplify the complexity of plant diversity and behaviour in ecological models by grouping plants into categories that share common functional characteristics. This simplification helps researchers model vegetation dynamics, which can be used in land-use studies and climate models.
PFTs provide a finer level of modeling than biomes, which represent gross areas such as desert, savannah, deciduous forest. In creating models with PFTs, areas as small as 1 km2 are modeled by defining the predominant plant type for that area, interpreted from satellite data or other means. For each plant functional type, a number of key parameters are defined, such as fecundity, competitiveness, resorption (rate at which plant decays and returns nutrients to the soil after death), etc. The value of each parameter is determined or inferred from observable characteristics such as plant height, leaf area, etc.
Plant Functional Type (PFT) models have some limitations and problems. For example, it is difficult for climatologists and ecologists to determine which minimal set of plant characteristics best model the actual responses of the biosphere in response to climate changes. Furthermore, by oversimplifying species to a few key traits, researchers may not capture the full diversity and variability of plant species within a given ecosystem or represent rare or unique species. As such, researchers are developing more sophisticated models, such as trait-based models, to address these problems.
See also
Ecotone
Document 4:::
A Centre of Endemism is an area in which the ranges of restricted-range species overlap, or a localised area which has a high occurrence of endemics. Centres of endemism may overlap with biodiversity hotspots which are biogeographic regions characterized both by high levels of plant endemism and by serious levels of habitat loss. The exact delineation of centres of endemism is difficult and some overlap with one another. Centres of endemism are high conservation priority areas.
Examples of Centres of Endemism
Tanzania
A local centre of endemism is focussed on an area of lowland forests around the plateaux inland of Lindi in SE Tanzania, with between 40 and 91 species of vascular plants which are not found elsewhere.
Southern Africa
There are at least 19 centres of plant endemism, including the following:
Albany Centre of Plant Endemism
Barberton Centre of Plant Endemism
Cape Floristic Region
Drakensberg Alpine Centre
Hantam–Roggeveld Centre of Plant Endemism
Kaokoveld Centre of Endemism
Maputaland Centre of Plant Endemism
Pondoland Centre of Plant Endemism
Sekhukhuneland Centre of Endemism
Soutpansberg Centre of Plant Endemism
See also
List of ecoregions with high endemism
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In a tropical rainforest, where are ferns common?
A. forest floor
B. canopy
C. understory
D. emergent
Answer:
|
|
scienceQA-7297
|
multiple_choice
|
How long is a rowboat?
|
[
"4 centimeters",
"4 kilometers",
"4 millimeters",
"4 meters"
] |
D
|
The best estimate for the length of a rowboat is 4 meters.
4 millimeters and 4 centimeters are too short. 4 kilometers is too long.
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earning a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the students' chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just as in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both the undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
Document 3:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 4:::
The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas.
The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014.
The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools.
See also
Marine Science
Ministry of Fisheries and Aquatic Resources Development
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How long is a rowboat?
A. 4 centimeters
B. 4 kilometers
C. 4 millimeters
D. 4 meters
Answer:
|
ai2_arc-487
|
multiple_choice
|
Which gas is given off by plants?
|
[
"Hydrogen",
"Nitrogen",
"Oxygen",
"Helium"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Photosynthesis systems are electronic scientific instruments designed for non-destructive measurement of photosynthetic rates in the field. Photosynthesis systems are commonly used in agronomic and environmental research, as well as studies of the global carbon cycle.
How photosynthesis systems function
Photosynthesis systems function by measuring gas exchange of leaves. Atmospheric carbon dioxide is taken up by leaves in the process of photosynthesis, where CO2 is used to generate sugars in a molecular pathway known as the Calvin cycle. This draw-down of CO2 induces more atmospheric CO2 to diffuse through stomata into the air spaces of the leaf. While stomata are open, water vapor can easily diffuse out of plant tissues, a process known as transpiration. It is this exchange of CO2 and water vapor that is measured as a proxy of photosynthetic rate.
The basic components of a photosynthetic system are the leaf chamber, infrared gas analyzer (IRGA), batteries and a console with keyboard, display and memory. Modern 'open system' photosynthesis systems also incorporate a miniature disposable compressed CO2 cylinder and gas supply pipes. This is because external air has natural fluctuations in CO2 and water vapor content, which can introduce measurement noise. Modern 'open system' photosynthesis systems remove the CO2 and water vapour by passage over soda lime and Drierite, then add CO2 at a controlled rate to give a stable CO2 concentration. Some systems are also equipped with temperature control and a removable light unit, so the effect of these environmental variables can also be measured.
The leaf to be analysed is placed in the leaf chamber. The CO2 concentration is measured by the infrared gas analyzer. The IRGA shines infrared light through a gas sample onto a detector. CO2 in the sample absorbs energy, so the reduction in the level of energy that reaches the detector indicates the CO2 concentration. Modern IRGAs take account of the fact that water vapour absorbs energy at similar wavelengths as CO2. Modern IRG
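The open-system measurement described above reduces to a simple mass balance: net CO2 assimilation is the air flow rate times the CO2 drawdown across the chamber, divided by leaf area. A minimal sketch of that calculation (the function name, units, and example values are illustrative, and transpiration dilution is deliberately ignored):

```python
def net_assimilation(flow_mol_s, co2_in_umol_mol, co2_out_umol_mol, leaf_area_m2):
    """Net CO2 assimilation rate (umol CO2 m^-2 s^-1) for an open gas-exchange
    system, using the simplest mass balance (transpiration dilution ignored)."""
    drawdown = co2_in_umol_mol - co2_out_umol_mol  # CO2 removed by the leaf
    return flow_mol_s * drawdown / leaf_area_m2

# Example: 5e-4 mol air/s flow, CO2 falling from 400 to 380 umol/mol, 6 cm^2 leaf
a = net_assimilation(5e-4, 400.0, 380.0, 6e-4)
print(round(a, 1))  # -> 16.7 (a plausible assimilation rate in umol m^-2 s^-1)
```

Real instruments apply further corrections for the water vapor flux, but the drawdown term above is the core of the measurement.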
Document 2:::
Guttation is the exudation of drops of xylem sap on the tips or edges of leaves of some vascular plants, such as grasses, and a number of fungi, which are not plants but were previously categorized as such and studied as part of botany.
Process
At night, transpiration usually does not occur, because most plants have their stomata closed. When there is a high soil moisture level, water will enter plant roots, because the water potential of the roots is lower than in the soil solution. The water will accumulate in the plant, creating a slight root pressure. The root pressure forces some water to exude through special leaf tip or edge structures, hydathodes or water glands, forming drops. Root pressure provides the impetus for this flow, rather than transpirational pull. Guttation is most noticeable when transpiration is suppressed and the relative humidity is high, such as during the night.
Guttation formation in fungi is important for visual identification, but the process causing it is unknown. However, due to its association with stages of rapid growth in the life cycle of fungi, it has been hypothesised that during rapid metabolism excess water produced by respiration is exuded.
Chemical content
Guttation fluid may contain a variety of organic and inorganic compounds, mainly sugars, and potassium. On drying, a white crust remains on the leaf surface.
Girolami et al. (2009) found that guttation drops from corn plants germinated from neonicotinoid-coated seeds could contain amounts of insecticide consistently higher than 10 mg/L, and up to 200 mg/L for the neonicotinoid imidacloprid. Concentrations this high are near those of active ingredients applied in field sprays for pest control and sometimes even higher. It was found that when bees consume guttation drops collected from plants grown from neonicotinoid-coated seeds, they die within a few minutes. This phenomenon may be a factor in bee deaths and, consequently, colony collapse disorder.
Nitrogen levels
Document 3:::
Excretion is a process in which metabolic waste is eliminated from an organism. In vertebrates this is primarily carried out by the lungs, kidneys, and skin. This is in contrast with secretion, where the substance may have specific tasks after leaving the cell. Excretion is an essential process in all forms of life. For example, in mammals, urine is expelled through the urethra, which is part of the excretory system. In unicellular organisms, waste products are discharged directly through the surface of the cell.
During life activities such as cellular respiration, several chemical reactions take place in the body. These are known as metabolism. These chemical reactions produce waste products such as carbon dioxide, water, salts, urea and uric acid. Accumulation of these wastes beyond a level inside the body is harmful to the body. The excretory organs remove these wastes. This process of removal of metabolic waste from the body is known as excretion.
Green plants excrete carbon dioxide and water as respiratory products. In green plants, the carbon dioxide released during respiration gets used during photosynthesis. Oxygen is a by-product generated during photosynthesis, and exits through stomata, root cell walls, and other routes. Plants can get rid of excess water by transpiration and guttation. It has been shown that the leaf acts as an 'excretophore' and, in addition to being a primary organ of photosynthesis, is also used as a method of excreting toxic wastes via diffusion. Other waste materials that are exuded by some plants — resin, saps, latex, etc. are forced from the interior of the plant by hydrostatic pressures inside the plant and by absorptive forces of plant cells. These latter processes do not need added energy; they act passively. However, during the pre-abscission phase, the metabolic levels of a leaf are high. Plants also excrete some waste substances into the soil around them.
In animals, the main excretory products are carbon dioxide, ammoni
Document 4:::
What a Plant Knows is a popular science book by Daniel Chamovitz, originally published in 2012, discussing the sensory system of plants. A revised edition was published in 2017.
Release details / Editions / Publication
Hardcover edition, 2012
Paperback version, 2013
Revised edition, 2017
What a Plant Knows has been translated and published in a number of languages.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which gas is given off by plants?
A. Hydrogen
B. Nitrogen
C. Oxygen
D. Helium
Answer:
|
|
sciq-3607
|
multiple_choice
|
In a hot water heater, burning fuel causes the water to get hot because combustion is what type of reaction?
|
[
"biochemical",
"geothermal",
"exothermic",
"endothermic"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just as in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both the undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
Document 2:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 3:::
Thermofluids is a branch of science and engineering encompassing four intersecting fields:
Heat transfer
Thermodynamics
Fluid mechanics
Combustion
The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids".
Heat transfer
Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer.
Sections include :
Energy transfer by heat, work and mass
Laws of thermodynamics
Entropy
Refrigeration Techniques
Properties and nature of pure substances
Applications
Engineering : Predicting and analysing the performance of machines
Thermodynamics
Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems.
Fluid mechanics
Fluid Mechanics the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance.
Sections include:
Flu
Document 4:::
Activation energy asymptotics (AEA), also known as large activation energy asymptotics, is an asymptotic analysis used in the combustion field utilizing the fact that the reaction rate is extremely sensitive to temperature changes due to the large activation energy of the chemical reaction.
History
The techniques were pioneered by the Russian scientists Yakov Borisovich Zel'dovich, David A. Frank-Kamenetskii and co-workers in the 1930s, in their studies on premixed flames and thermal explosions (Frank-Kamenetskii theory), but did not become popular among Western scientists until the 1970s. In the early 1970s, due to the pioneering work of William B. Bush, Francis E. Fendell, Forman A. Williams, Amable Liñán and John F. Clarke, it became popular in the Western community, and since then it has been widely used to explain more complicated problems in combustion.
Method overview
In combustion processes, the reaction rate is dependent on temperature in the following form (Arrhenius law),
$\omega \propto \mathrm{e}^{-E_a/RT}$,
where $E_a$ is the activation energy, and $R$ is the universal gas constant. In general, the condition $E_a/RT_b \gg 1$ is satisfied, where $T_b$ is the burnt gas temperature. This condition forms the basis for activation energy asymptotics. Denoting $T_u$ for unburnt gas temperature, one can define the Zel'dovich number and heat release parameter as follows
$\beta = \dfrac{E_a(T_b - T_u)}{R T_b^2}, \qquad \alpha = \dfrac{T_b - T_u}{T_b}.$
In addition, if we define a non-dimensional temperature
$\theta = \dfrac{T - T_u}{T_b - T_u},$
such that $\theta$ approaches zero in the unburnt region and approaches unity in the burnt gas region (in other words, $0 \le \theta \le 1$), then the ratio of reaction rate at any temperature to reaction rate at burnt gas temperature is given by
$\dfrac{\omega}{\omega_b} = \exp\!\left[\dfrac{-\beta(1-\theta)}{1 - \alpha(1-\theta)}\right].$
Now in the limit of $\beta \to \infty$ (large activation energy) with $\alpha$ fixed, the reaction rate is exponentially small, i.e., $O(\mathrm{e}^{-\beta})$, and negligible everywhere, but non-negligible when $1 - \theta = O(1/\beta)$. In other words, the reaction rate is negligible everywhere, except in a small region very close to burnt gas temperature, where $1 - \theta \sim 1/\beta$. Thus, in solving the conservation equations, one identifies two different regimes, at leading order,
Outer convective-diffusive zone
I
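The exponential localization of the reaction rate described above can be seen numerically. The sketch below evaluates the rate-ratio expression in its standard AEA form, $\omega/\omega_b = \exp[-\beta(1-\theta)/(1-\alpha(1-\theta))]$ — treat this form as an assumption, since the original equations were lost from the excerpt:

```python
import math

def rate_ratio(theta, beta, alpha):
    """Ratio of the reaction rate at reduced temperature theta to the rate at
    the burnt-gas temperature (theta = 1), in the standard AEA form."""
    return math.exp(-beta * (1.0 - theta) / (1.0 - alpha * (1.0 - theta)))

# With a large Zel'dovich number, the rate is negligible except near theta = 1.
beta, alpha = 10.0, 0.8
for theta in (0.0, 0.5, 0.9, 0.99, 1.0):
    print(f"theta={theta:5.2f}  omega/omega_b={rate_ratio(theta, beta, alpha):.3e}")
```

Even with a moderate $\beta = 10$, the rate at the unburnt temperature is smaller than the burnt-gas rate by tens of orders of magnitude, which is exactly why the reaction zone collapses to a thin layer in the asymptotic limit.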
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In a hot water heater, burning fuel causes the water to get hot because combustion is what type of reaction?
A. biochemical
B. geothermal
C. exothermic
D. endothermic
Answer:
|
|
sciq-6833
|
multiple_choice
|
If a gas in a closed area experiences increases in pressure and decreases in temperatures, what other attribute of the gas will be affected?
|
[
"gravity",
"temperature",
"volume",
"velocity"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
A characteristic property is a chemical or physical property that helps identify and classify substances. The characteristic properties of a substance are always the same whether the sample being observed is large or small. Thus, conversely, if the property of a substance changes as the sample size changes, that property is not a characteristic property. Examples of physical properties that are not characteristic properties are mass and volume. Examples of characteristic properties include melting points, boiling points, density, viscosity, solubility, crystal shape, and color. Substances with characteristic properties can be separated. For example, in fractional distillation, liquids are separated using the boiling point. The boiling point of water, for instance, is 212 degrees Fahrenheit.
Identifying a substance
Every characteristic property is unique to one given substance. Scientists use characteristic properties to identify unknown substances. However, characteristic properties are most useful for distinguishing between two or more substances, not identifying a single substance. For example, isopropanol and water can be distinguished by the characteristic property of odor. Characteristic properties are used because the sample size and the shape of the substance does not matter. For example, 1 gram of lead is the same color as 100 tons of lead.
See also
Intensive and extensive properties
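Identifying a substance from a characteristic property amounts to a tolerance-based lookup against reference values. A minimal sketch using boiling point (the table entries are approximate reference figures and the helper function is hypothetical, not a standard API):

```python
# Approximate boiling points at 1 atm, in degrees Celsius (reference figures).
BOILING_POINTS_C = {"water": 100.0, "isopropanol": 82.6, "ethanol": 78.4}

def identify_by_boiling_point(measured_c, tolerance_c=1.0):
    """Return candidate substances whose reference boiling point matches the
    measurement. A characteristic property narrows the candidates, but a
    single property may not identify a substance uniquely."""
    return [name for name, bp in BOILING_POINTS_C.items()
            if abs(bp - measured_c) <= tolerance_c]

print(identify_by_boiling_point(100.2))  # -> ['water']
print(identify_by_boiling_point(50.0))   # -> [] (no match in the table)
```

This mirrors how characteristic properties are used in practice: measurements of two or more properties (e.g. boiling point plus density) intersect the candidate lists and distinguish substances that share one property.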
Document 2:::
This is a list of gases at standard conditions, which means substances that boil or sublime at or below and 1 atm pressure and are reasonably stable.
List
This list is sorted by boiling point of gases in ascending order, but can be sorted on different values. "sub" and "triple" refer to the sublimation point and the triple point, which are given in the case of a substance that sublimes at 1 atm; "dec" refers to decomposition. "~" means approximately.
Known as gas
The following list has substances known to be gases, but with an unknown boiling point.
Fluoroamine
Trifluoromethyl trifluoroethyl trioxide CF3OOOCF2CF3 boils between 10 and 20°
Bis-trifluoromethyl carbonate boils between −10 and +10° possibly +12, freezing −60°
Difluorodioxirane boils between −80 and −90°.
Difluoroaminosulfinyl fluoride F2NS(O)F is a gas but decomposes over several hours
Trifluoromethylsulfinyl chloride CF3S(O)Cl
Nitrosyl cyanide ?−20° blue-green gas 4343-68-4
Thiazyl chloride NSCl greenish yellow gas; trimerises.
Document 3:::
The Timeline of the oil and gas industry in the United Kingdom is a selection of significant events in the history of the oil and gas sector in the United Kingdom.
Document 4:::
The thermodynamic properties of materials are intensive thermodynamic parameters which are specific to a given material. Each is directly related to a second order differential of a thermodynamic potential. Examples for a simple 1-component system are:
Compressibility (or its inverse, the bulk modulus)
Isothermal compressibility $\kappa_T = -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_T$
Adiabatic compressibility $\kappa_S = -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_S$
Specific heat (Note - the extensive analog is the heat capacity)
Specific heat at constant pressure $c_P = \frac{T}{N}\left(\frac{\partial S}{\partial T}\right)_P$
Specific heat at constant volume $c_V = \frac{T}{N}\left(\frac{\partial S}{\partial T}\right)_V$
Coefficient of thermal expansion $\alpha = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_P$
where P is pressure, V is volume, T is temperature, S is entropy, and N is the number of particles.
For a single component system, only three second derivatives are needed in order to derive all others, and so only three material properties are needed to derive all others. For a single component system, the "standard" three parameters are the isothermal compressibility $\kappa_T$, the specific heat at constant pressure $c_P$, and the coefficient of thermal expansion $\alpha$.
For example, the following equations are true:
$c_P - c_V = \dfrac{T V \alpha^2}{N \kappa_T}, \qquad \kappa_T - \kappa_S = \dfrac{T V \alpha^2}{N c_P}.$
The three "standard" properties are in fact the three possible second derivatives of the Gibbs free energy with respect to temperature and pressure. Moreover, considering mixed derivatives such as $\frac{\partial^2 G}{\partial P\,\partial T}$ and the related Schwarz relations shows that the properties triplet is not independent. In fact, one property function can be given as an expression of the two others, up to a reference state value.
The second principle of thermodynamics has implications on the sign of some thermodynamic properties such isothermal compressibility.
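For an ideal gas the three "standard" properties take the simple forms $\alpha = 1/T$ and $\kappa_T = 1/P$, and the standard relation $c_P - c_V = TV\alpha^2/(N\kappa_T)$ then reduces to $PV/(NT)$, i.e. the gas constant $R$ per mole. A quick numeric check of that identity (assuming the relation takes this standard form, since the formulas were stripped from the excerpt above):

```python
# Check c_P - c_V = T*V*alpha^2 / (n*kappa_T) for one mole of an ideal gas.
R = 8.314462618  # J/(mol K), molar gas constant

T = 300.0          # K
P = 101325.0       # Pa
n = 1.0            # mol
V = n * R * T / P  # ideal-gas volume, m^3

alpha = 1.0 / T    # thermal expansion coefficient of an ideal gas
kappa_T = 1.0 / P  # isothermal compressibility of an ideal gas

cp_minus_cv = T * V * alpha**2 / (n * kappa_T)
print(cp_minus_cv)  # equals R up to floating-point rounding
```

Algebraically, $T \cdot (nRT/P) \cdot (1/T)^2 \cdot P / n = R$, independent of the chosen $T$ and $P$, which is the familiar Mayer relation $c_P - c_V = R$ for an ideal gas.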
See also
List of materials properties (thermal properties)
Heat capacity ratio
Statistical mechanics
Thermodynamic equations
Thermodynamic databases for pure substances
Heat transfer coefficient
Latent heat
Specific heat of melting (Enthalpy of fusion)
Specific heat of vaporization (Enthalpy of vaporization)
Thermal mass
External links
The Dortmund Data Bank is a factual data bank for thermodynamic and t
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
If a gas in a closed area experiences increases in pressure and decreases in temperatures, what other attribute of the gas will be affected?
A. gravity
B. temperature
C. volume
D. velocity
Answer:
|
|
sciq-4126
|
multiple_choice
|
The hypothalamus and pituitary gland are located near the base of this organ?
|
[
"the liver",
"the brain",
"the heart",
"the lungs"
] |
B
|
Relevant Documents:
Document 0:::
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
H2.00.05.2.00001: Striated muscle tissue
H2.00.06.0.00001: Nerve tissue
H2.00.06.1.00001: Neuron
H2.00.06.2.00001: Synapse
H2.00.06.2.00001: Neuroglia
h3.01: Bones
h3.02: Joints
h3.03: Muscles
h3.04: Alimentary system
h3.05: Respiratory system
h3.06: Urinary system
h3.07: Genital system
h3.08:
Document 1:::
In anatomy, a lobe is a clear anatomical division or extension of an organ (as seen for example in the brain, lung, liver, or kidney) that can be determined without the use of a microscope at the gross anatomy level. This is in contrast to the much smaller lobule, which is a clear division only visible under the microscope.
Interlobar ducts connect lobes and interlobular ducts connect lobules.
Examples of lobes
The four main lobes of the brain
the frontal lobe
the parietal lobe
the occipital lobe
the temporal lobe
The three lobes of the human cerebellum
the flocculonodular lobe
the anterior lobe
the posterior lobe
The two lobes of the thymus
The two and three lobes of the lungs
Left lung: superior and inferior
Right lung: superior, middle, and inferior
The four lobes of the liver
Left lobe of liver
Right lobe of liver
Quadrate lobe of liver
Caudate lobe of liver
The renal lobes of the kidney
Earlobes
Examples of lobules
the cortical lobules of the kidney
the testicular lobules of the testis
the lobules of the mammary gland
the pulmonary lobules of the lung
the lobules of the thymus
Document 2:::
The following diagram is provided as an overview of and topical guide to the human nervous system:
Human nervous system – the part of the human body that coordinates a person's voluntary and involuntary actions and transmits signals between different parts of the body. The human nervous system consists of two main parts: the central nervous system (CNS) and the peripheral nervous system (PNS). The CNS contains the brain and spinal cord. The PNS consists mainly of nerves, which are long fibers that connect the CNS to every other part of the body. The PNS includes motor neurons, mediating voluntary movement; the autonomic nervous system, comprising the sympathetic nervous system and the parasympathetic nervous system and regulating involuntary functions; and the enteric nervous system, a semi-independent part of the nervous system whose function is to control the gastrointestinal system.
Evolution of the human nervous system
Evolution of nervous systems
Evolution of human intelligence
Evolution of the human brain
Paleoneurology
Some branches of science that study the human nervous system
Neuroscience
Neurology
Paleoneurology
Central nervous system
The central nervous system (CNS) is the largest part of the nervous system and includes the brain and spinal cord.
Spinal cord
Brain
Brain – center of the nervous system.
Outline of the human brain
List of regions of the human brain
Principal regions of the vertebrate brain:
Peripheral nervous system
Peripheral nervous system (PNS) – nervous system structures that do not lie within the CNS.
Sensory system
A sensory system is a part of the nervous system responsible for processing sensory information. A sensory system consists of sensory receptors, neural pathways, and parts of the brain involved in sensory perception.
List of sensory systems
Sensory neuron
Perception
Visual system
Auditory system
Somatosensory system
Vestibular system
Olfactory system
Taste
Pain
Components of the nervous system
Neuron
I
Document 3:::
In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from cells of the same type that act together in a function. Tissues of different types combine to form an organ which has a specific function. The intestinal wall, for example, is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system.
An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs.
The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body.
Animals
Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. The sam
Document 4:::
The ovarian cortex is the outer portion of the ovary. The ovarian follicles are located within the ovarian cortex. The ovarian cortex is made up of connective tissue. Ovarian cortex tissue transplant has been performed to treat infertility.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The hypothalamus and pituitary gland are located near the base of this organ?
A. the liver
B. the brain
C. the heart
D. the lungs
Answer:
|
|
sciq-10458
|
multiple_choice
|
What is the causative agent of sleeping sickness in humans?
|
[
"trypanosoma brucei",
"escherichia coli",
"bacillus aerophilus",
"pseudomonas asplenii"
] |
A
|
Relevant Documents:
Document 0:::
Encephalitis lethargica is an atypical form of encephalitis. Also known as "sleeping sickness" or "sleepy sickness" (distinct from tsetse fly–transmitted sleeping sickness), it was first described in 1917 by neurologist Constantin von Economo and pathologist Jean-René Cruchet. The disease attacks the brain, leaving some victims in a statue-like condition, speechless and motionless. Between 1915 and 1926, an epidemic of encephalitis lethargica spread around the world. The exact number of people infected is unknown, but it is estimated that more than one million people contracted the disease during the epidemic, which directly caused more than 500,000 deaths. Most of those who survived never returned to their pre-morbid vigour.
Signs and symptoms
Encephalitis lethargica is characterized by high fever, sore throat, headache, lethargy, double vision, delayed physical and mental response, sleep inversion and catatonia. In severe cases, patients may enter a coma-like state (akinetic mutism). Patients may also experience abnormal eye movements ("oculogyric crises"), Parkinsonism, upper body weakness, muscular pains, tremors, neck rigidity, and behavioral changes including psychosis. Klazomania (a vocal tic) is sometimes present.
Cause
The causes of encephalitis lethargica are uncertain. Though it used to be believed that it was connected to the Spanish flu epidemic, modern research provides arguments against this claim. Some studies have explored its origins in an autoimmune response, and, separately or in relation to an immune response, links to pathologies of infectious disease—viral and bacterial, such as in the case of influenza, where a link with encephalitis is clear. Postencephalitic Parkinsonism was clearly documented to have followed an outbreak of encephalitis lethargica following the 1918 influenza pandemic; evidence for viral causation of the Parkinson's symptoms is circumstantial (epidemiologic, and finding influenza antigens in encephalitis lethargica pati
Document 1:::
The sleeping sickness of Kalachi, Kazakhstan (with the place or the syndrome sometimes called sleepy hollow) is a conjectured medical condition which causes a person to sleep for days or weeks at a time, together with other symptoms such as hallucinations, nausea, intoxicated behavior, disorientation and memory loss. The phenomenon was only reported in Kalachi and the nearby village of Krasnogorsk. It was first reported in March 2013 and by 2016 had affected about 150 people. The syndrome appeared to be non-communicable. The disease disappeared for some time but re-emerged in 2015, and affected all age groups.
Potential causes of the syndrome were suggested to be carbon monoxide poisoning or contamination of the ground water supply by chemicals used for military operations in the region.
Signs and symptoms
Other than excessive sleep, the disease causes hallucination, nausea and vomiting, and disorientation. Victims of the disease would sometimes act as if they were drunk, would experience memory loss about what they had done and experienced, and would often experience hallucinations like a "snail walking over their face". In a statement, a professor from Tomsk Polytechnic University, Leonid Rikhvanov, of the department of geo-ecology and geo-chemistry, said that radon gas from the mine could be the cause of the symptoms.
The affected people would fall asleep during day-to-day activities and always feel sleepy. As a local nurse described the phenomenon to an RT news crew, "You wake them up, they can speak to you, reply to you, but as soon as you stop talking and ask what bothers them, they just want to sleep, sleep, sleep."
Cause
Kazakh officials gave a report about the disease, stating that heightened levels of carbon monoxide, along with other hydrocarbons due to flooding of an abandoned Soviet-era uranium mine nearby, was causing the syndrome, by spreading through the village's air. Concentration of carbon monoxide and reduced oxygen in the air were concluded
Document 2:::
Freshers' flu is a name commonly given to a battery of illnesses contracted by new students (freshers) during their first few weeks at a university or college of further education; common symptoms include fever, sore throat, severe headache, coughing and general discomfort. The illness may or may not include actual flu and is often simply a bad cold.
Causes
The most likely cause is the convergence of large numbers of people arriving from all over the world; this is a particularly elevated risk due to the COVID-19 pandemic. The poor diet and heavy consumption of alcohol during freshers' week is also reported as a cause for many of the illnesses contracted during this time. "Stress, which may be induced by tiredness, combined with a poor diet, late nights and too much alcohol, can weaken the immune system and be a recipe for ill health. All this can make students and staff working with the students more susceptible to infections within their first weeks of term." In addition to this, nearly all university academic years in the UK commence around the end of September or beginning of October, which "marks the start of the annual flu season". The increased susceptibility to illness from late nights, heavy alcohol consumption and stress peaks 2–4 weeks after arrival at university and happens to coincide with the seasonal surge in the outbreaks of colds and flu in the Northern Hemisphere.
Other effects
As well as the usual viral effects, freshers' flu can also have some psychological effects. These effects arise where the stress of leaving home and other consequences of being independent, not to mention various levels of homesickness and the attempts at making new friends, can further weaken the immune system, increasing susceptibility to illness.
See also
Freshman 15
Document 3:::
Sickness behavior is a coordinated set of adaptive behavioral changes that develop in ill individuals during the course of an infection.
They usually, but not always, accompany fever and aid survival.
Such illness responses include lethargy, depression, anxiety, malaise, loss of appetite, sleepiness, hyperalgesia, reduction in grooming and failure to concentrate.
Sickness behavior is a motivational state that reorganizes the organism's priorities to cope with infectious pathogens.
It has been suggested as relevant to understanding depression, and some aspects of the suffering that occurs in cancer.
History
Sick animals have long been recognized by farmers as having different behavior. Initially it was thought that this was due to physical weakness that resulted from diverting energy to the body processes needed to fight infection. However, in the 1960s, it was shown that animals produced a blood-carried factor X that acted upon the brain to cause sickness behavior. In 1987, Benjamin L. Hart brought together a variety of research findings that argued for them being survival adaptations that if prevented would disadvantage an animal's ability to fight infection. In the 1980s, the blood-borne factor was shown to be proinflammatory cytokines produced by activated leukocytes in the immune system in response to lipopolysaccharides (a cell wall component of Gram-negative bacteria). These cytokines acted by various humoral and nerve routes upon the hypothalamus and other areas of the brain. Further research showed that the brain can also learn to control the various components of sickness behavior independently of immune activation.
In 2015, Shakhar and Shakhar suggested instead that sickness behavior developed primarily because it protected the kin of infected animals from transmissible diseases. According to this theory, termed the Eyam hypothesis, after the English Parish of Eyam, sickness behavior protects the social group of infected individuals by limiting their di
Document 4:::
Cause, also known as etiology () and aetiology, is the reason or origination of something.
The word etiology is derived from the Greek , aitiologia, "giving a reason for" (, aitia, "cause"; and , -logia).
Description
In medicine, etiology refers to the cause or causes of diseases or pathologies. Where no etiology can be ascertained, the disorder is said to be idiopathic.
Traditional accounts of the causes of disease may point to the "evil eye".
The Ancient Roman scholar Marcus Terentius Varro put forward early ideas about microorganisms in a 1st-century BC book titled On Agriculture.
Medieval thinking on the etiology of disease showed the influence of Galen and of Hippocrates. Medieval European doctors generally held the view that disease was related to the air and adopted a miasmatic approach to disease etiology.
Etiological discovery in medicine has a history in Robert Koch's demonstration that species of the pathogenic bacteria Mycobacterium tuberculosis causes the disease tuberculosis; Bacillus anthracis causes anthrax, and Vibrio cholerae causes cholera. This line of thinking and evidence is summarized in Koch's postulates. But proof of causation in infectious diseases is limited to individual cases that provide experimental evidence of etiology.
In epidemiology, several lines of evidence together are required for causal inference. Austin Bradford Hill demonstrated a causal relationship between tobacco smoking and lung cancer, and summarized the line of reasoning in the Bradford Hill criteria, a group of nine principles to establish epidemiological causation. This idea of causality was later used in a proposal for a Unified concept of causation.
Disease causative agent
The infectious diseases are caused by infectious agents or pathogens. The infectious agents that cause disease fall into five groups: viruses, bacteria, fungi, protozoa, and helminths (worms).
The term can also refer to a toxin or toxic chemical that causes illness.
Chain of causatio
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the causative agent of sleeping sickness in humans?
A. trypanosoma brucei
B. escherichia coli
C. bacillus aerophilus
D. pseudomonas asplenii
Answer:
|
|
sciq-1433
|
multiple_choice
|
Iron will do what when it is exposed to oxygen and water?
|
[
"contract",
"rust",
"expand",
"become hot"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 2:::
The School of Textile and Clothing industries (ESITH) is a Moroccan engineering school, established in 1996, that focuses on textiles and clothing. It was created in collaboration with ENSAIT and ENSISA, as a result of a public private partnership designed to grow a key sector in the Moroccan economy. The partnership was successful and has been used as a model for other schools.
ESITH is the only engineering school in Morocco that provides a comprehensive program in textile engineering with internships for students at the Canadian Group CTT. ESITH offers three programs in industrial engineering: product management, supply chain and logistics, and textile and clothing.
Document 3:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 4:::
Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work.
Description
Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments.
The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics.
Specialized branches include engineering optimization and engineering statistics.
Engineering mathematics in tertiary educ
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Iron will do what when it is exposed to oxygen and water?
A. contract
B. rust
C. expand
D. become hot
Answer:
|
|
sciq-1610
|
multiple_choice
|
The amount of energy needed to raise the temperature of one gram of liquid water by 1°c is also known as?
|
[
"specific heat",
"calorie",
"Kelvin",
"mass"
] |
A
|
Relevant Documents:
Document 0:::
Thermofluids is a branch of science and engineering encompassing four intersecting fields:
Heat transfer
Thermodynamics
Fluid mechanics
Combustion
The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids".
Heat transfer
Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer.
Sections include:
Energy transfer by heat, work and mass
Laws of thermodynamics
Entropy
Refrigeration Techniques
Properties and nature of pure substances
Applications
Engineering: Predicting and analysing the performance of machines
Thermodynamics
Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems.
Fluid mechanics
Fluid Mechanics the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance.
Sections include:
Flu
Document 1:::
The heating value (or energy value or calorific value) of a substance, usually a fuel or food (see food energy), is the amount of heat released during the combustion of a specified amount of it.
The calorific value is the total energy released as heat when a substance undergoes complete combustion with oxygen under standard conditions. The chemical reaction is typically a hydrocarbon or other organic molecule reacting with oxygen to form carbon dioxide and water and release heat. It may be expressed with the quantities:
energy/mole of fuel
energy/mass of fuel
energy/volume of the fuel
There are two kinds of enthalpy of combustion, called high(er) and low(er) heat(ing) value, depending on how much the products are allowed to cool and whether compounds like water are allowed to condense.
The high heat values are conventionally measured with a bomb calorimeter. Low heat values are calculated from high heat value test data. They may also be calculated as the difference between the heat of formation ΔH of the products and reactants (though this approach is somewhat artificial since most heats of formation are typically calculated from measured heats of combustion).
By convention, the (higher) heat of combustion is defined to be the heat released for the complete combustion of a compound in its standard state to form stable products in their standard states: hydrogen is converted to water (in its liquid state), carbon is converted to carbon dioxide gas, and nitrogen is converted to nitrogen gas. That is, the heat of combustion, ΔH°comb, is the heat of reaction of the following process:
C_cH_hN_nO_o (std.) + (c + h/4 − o/2) O2 (g) → c CO2 (g) + (h/2) H2O (l) + (n/2) N2 (g)
Chlorine and sulfur are not quite standardized; they are usually assumed to convert to hydrogen chloride gas and sulfur dioxide or sulfur trioxide gas, respectively, or to dilute aqueous hydrochloric and sulfuric acids, respectively, when the combustion is conducted in a bomb calorimeter containing some quantity of water.
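The HHV-to-LHV relationship described above can be sketched numerically. The following is a minimal Python sketch, not from the source: the function name and the figures used (about 8.94 kg of product water per kg of hydrogen, i.e. 18.015/2.016, and a latent heat of about 2.441 MJ/kg at 25 °C) are illustrative assumptions.

```python
def lower_heating_value(hhv_mj_per_kg, h_mass_fraction):
    """Estimate LHV from HHV (both in MJ per kg of fuel).

    Assumes complete combustion: each kg of hydrogen in the fuel
    forms about 8.94 kg of water (18.015/2.016), and condensing
    that water releases about 2.441 MJ/kg (latent heat at 25 degC).
    """
    water_per_kg_fuel = 8.94 * h_mass_fraction
    return hhv_mj_per_kg - 2.441 * water_per_kg_fuel

# Methane (CH4): hydrogen mass fraction ~0.25, HHV ~55.5 MJ/kg
lhv_methane = lower_heating_value(55.5, 0.25)  # roughly 50 MJ/kg
```

The difference between the two values is exactly the latent heat carried away when the product water leaves as vapor instead of condensing.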
Ways of determination
Gross and net
Z
Document 2:::
The heat capacity rate is heat transfer terminology used in thermodynamics and different forms of engineering denoting the quantity of heat a flowing fluid of a certain mass flow rate is able to absorb or release per unit temperature change per unit time. It is typically denoted as C, listed from empirical data experimentally determined in various reference works, and is typically stated as a comparison between a hot and a cold fluid, Ch and Cc either graphically, or as a linearized equation. It is an important quantity in heat exchanger technology common to either heating or cooling systems and needs, and the solution of many real world problems such as the design of disparate items as different as a microprocessor and an internal combustion engine.
Basis
A hot fluid's heat capacity rate can be much greater than, equal to, or much less than the heat capacity rate of the same fluid when cold. In practice, it is most important in specifying heat-exchanger systems, wherein one fluid usually of dissimilar nature is used to cool another fluid such as the hot gases or steam cooled in a power plant by a heat sink from a water source—a case of dissimilar fluids, or for specifying the minimal cooling needs of heat transfer across boundaries, such as in air cooling.
As the ability of a fluid to resist change in temperature itself changes as heat transfer occurs changing its net average instantaneous temperature, it is a quantity of interest in designs which have to compensate for the fact that it varies continuously in a dynamic system. While itself varying, such change must be taken into account when designing a system for overall behavior to stimuli or likely environmental conditions, and in particular the worst-case conditions encountered under the high stresses imposed near the limits of operability— for example, an air-cooled engine in a desert climate on a very hot day.
If the hot fluid had a much larger heat capacity rate, then when hot and cold fluids went through
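As a minimal sketch of the quantity defined above (the function name and the fluid values are illustrative assumptions, not from the source):

```python
def heat_capacity_rate(mass_flow_kg_per_s, cp_j_per_kg_k):
    """Heat capacity rate C = m_dot * c_p, in watts per kelvin."""
    return mass_flow_kg_per_s * cp_j_per_kg_k

# Hot gas stream vs. cold water stream in a heat exchanger:
c_hot = heat_capacity_rate(1.5, 1005.0)   # air-like gas, cp ~1005 J/(kg*K)
c_cold = heat_capacity_rate(2.0, 4184.0)  # liquid water, cp ~4184 J/(kg*K)
c_min = min(c_hot, c_cold)  # the stream that limits heat transfer
```

The stream with the smaller rate (here the gas) changes temperature fastest for a given heat flow, so it bounds the maximum possible heat transfer in the exchanger.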
Document 3:::
The kelvin, symbol K, is a unit of measurement for temperature. The Kelvin scale is an absolute scale, which is defined such that 0 K is absolute zero and a change of thermodynamic temperature by 1 kelvin corresponds to a change of thermal energy by 1.380649×10−23 J. The Boltzmann constant was exactly defined in the 2019 redefinition of the SI base units such that the triple point of water is 273.16 K. The kelvin is the base unit of temperature in the International System of Units (SI), used alongside its prefixed forms. It is named after the Belfast-born and University of Glasgow-based engineer and physicist William Thomson, 1st Baron Kelvin (1824–1907).
Historically, the Kelvin scale was developed from the Celsius scale, such that 273.15 K was 0 °C (the approximate melting point of ice) and a change of one kelvin was exactly equal to a change of one degree Celsius. This relationship remains accurate, but the Celsius, Fahrenheit, and Rankine scales are now defined in terms of the Kelvin scale. The kelvin is the primary unit of temperature for engineering and the physical sciences, while in most countries the Celsius scale remains the dominant scale outside of these fields. In the United States, outside of the physical sciences, the Fahrenheit scale predominates, with the kelvin or Rankine scale employed for absolute temperature.
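Because the offsets between these scales are fixed by definition, conversion is purely arithmetic. A brief Python sketch (the function names are illustrative):

```python
def celsius_to_kelvin(t_c):
    """K = degC + 273.15, exact by definition of the Celsius scale."""
    return t_c + 273.15

def fahrenheit_to_kelvin(t_f):
    """Convert via Celsius: K = (degF - 32) * 5/9 + 273.15."""
    return (t_f - 32.0) * 5.0 / 9.0 + 273.15

print(celsius_to_kelvin(0.0))       # 273.15 (melting point of ice)
print(fahrenheit_to_kelvin(212.0))  # 373.15 (boiling point of water)
```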
History
Precursors
During the 18th century, multiple temperature scales were developed, notably Fahrenheit and centigrade (later Celsius). These scales predated much of the modern science of thermodynamics, including atomic theory and the kinetic theory of gases which underpin the concept of absolute zero. Instead, they chose defining points within the range of human experience that could be reproduced easily and with reasonable accuracy, but lacked any deep significance in thermal physics. In the case of the Celsius scale (and the long since defunct Newton scale and Réaumur scale) the melting point of water served as such a starting point, with Celsius be
Document 4:::
Mean kinetic temperature (MKT) is a simplified way of expressing the overall effect of temperature fluctuations during storage or transit of perishable goods. The MKT is widely used in the pharmaceutical industry.
The mean kinetic temperature can be expressed as:

T_K = (ΔH/R) / ( −ln( (t_1 e^(−ΔH/(R T_1)) + t_2 e^(−ΔH/(R T_2)) + … + t_n e^(−ΔH/(R T_n))) / (t_1 + t_2 + … + t_n) ) )

Where:
T_K is the mean kinetic temperature in kelvins
ΔH is the activation energy (in kJ mol−1)
R is the gas constant (in J mol−1 K−1)
T_1 to T_n are the temperatures at each of the sample points in kelvins
t_1 to t_n are time intervals at each of the sample points

When the temperature readings are taken at the same interval (i.e., t_1 = t_2 = … = t_n), the above equation is reduced to:

T_K = (ΔH/R) / ( −ln( (e^(−ΔH/(R T_1)) + e^(−ΔH/(R T_2)) + … + e^(−ΔH/(R T_n))) / n ) )

Where:
n is the number of temperature sample points
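A minimal Python sketch of the equal-interval MKT calculation (the function name is illustrative, and the 83.144 kJ/mol activation energy is a commonly assumed default rather than a value given in this text):

```python
import math

def mean_kinetic_temperature(temps_k, delta_h=83.144e3, r=8.3144):
    """MKT for equally spaced temperature samples, all in kelvins.

    delta_h: activation energy in J/mol (83.144 kJ/mol is a widely
    assumed default); r: gas constant in J/(mol*K).
    """
    n = len(temps_k)
    s = sum(math.exp(-delta_h / (r * t)) for t in temps_k)
    return (delta_h / r) / (-math.log(s / n))

# A storage record: mostly 25 degC with one excursion to 40 degC.
samples = [298.15, 298.15, 313.15, 298.15]
mkt = mean_kinetic_temperature(samples)
```

Because the Arrhenius weighting is exponential in temperature, MKT comes out higher than the arithmetic mean whenever there are hot excursions, which is why it is preferred for assessing pharmaceutical storage.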
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The amount of energy needed to raise the temperature of one gram of liquid water by 1°c is also known as?
A. specific heat
B. calorie
C. Kelvin
D. mass
Answer:
|
|
sciq-2177
|
multiple_choice
|
Energy transfer between what kinds of levels is generally rather inefficient?
|
[
"producer",
"apex",
"trophic",
"secondary"
] |
C
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. these come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 3:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered in both undergraduate as well postgraduate with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
Document 4:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Energy transfer between what kinds of levels is generally rather inefficient?
A. producer
B. apex
C. trophic
D. secondary
Answer:
|
|
sciq-9939
|
multiple_choice
|
Long distance runners try to maintain constant velocity with very little acceleration or deceleration to conserve what?
|
[
"momentum",
"fuel",
"pressure",
"energy"
] |
D
|
Relevant Documents:
Document 0:::
Combat endurance is the time that a military system or unit can remain in combat before having to withdraw due to depleted resources. The definition is not precise; for example the combat endurance of an aircraft, without qualification, is usually the time the aircraft can remain at an altitude suitable for combat, but in a particular theatre of operations it is the time it can remain in the area of combat. During the Battle of Britain, for example, the combat endurance of German fighters was the time they could remain over Britain, i.e., their inherent endurance, less the time to travel from their base to Britain, and the time to return—about 15 minutes.
In addition to fuel the expenditure of ammunition and other consumables will reduce combat endurance, for example the limiting factors for a nuclear attack submarine are its torpedoes or for a nuclear aircraft carrier aviation fuel and aircraft munitions.
Military units will have a combat endurance, how long they can stay in the field for, measured by how long its logistics train can keep its component subunits supplied with food, fuel, ammunition and spare parts etc.
The United States Department of Defense and NATO define endurance as "the time an aircraft can continue flying, or a ground vehicle or ship can continue operating, under specified conditions, e.g., without refueling."
Combat endurance training is also used for a system of physical training associated with stamina.
Improving combat endurance
The improvements of combat endurance are largely concerned with better efficiency to the current platforms and they aim to bridge the gap between the resources available today and the future. Technology, therefore, dominates this field and one specific aspect that demonstrates this involves the technologies that enhance fuel efficiency. There are three improvement categories focused on this area:
fundamental: involves new vehicle configurations that affect overall aerodynamics and structural efficiency; n
Document 1:::
The physiology of marathons is typically associated with high demands on a marathon runner's cardiovascular system and their locomotor system. The marathon was conceived centuries ago and as of recent has been gaining popularity among many populations around the world. The 42.195 km (26.2 mile) distance is a physical challenge that entails distinct features of an individual's energy metabolism. Marathon runners finish at different times because of individual physiological characteristics.
The interaction between different energy systems captures the essence of why certain physiological characteristics of marathon runners exist. The differing efficiency of certain physiological features in marathon runners evidence the variety of finishing times among elite marathon runners that share similarities in many physiological characteristics. Aside from large aerobic capacities and other biochemical mechanisms, external factors such as the environment and proper nourishment of a marathon runner can further the insight as to why marathon performance is variable despite ideal physiological characteristics obtained by a runner.
History
The first marathon was perhaps a 25 mile run by Pheidippides, a Greek soldier who ran to Athens from the town of Marathon, Greece to deliver news of a battle victory over the Persians in 490 B.C. According to this belief, he dropped dead of exhaustion shortly after arriving in Athens. Thousands of years later, marathon running became part of world sports, starting at the inaugural Marathon in the 1896 Modern Olympic Games. After around 40 years of various distances, the 42.195 kilometer (26.2) mile trek became standard. The number of marathons in the United States has grown over 45 times in this period.
With an increase in popularity, the scientific field has a large basis to analyze some of the physiological characteristics and the factors influencing these traits that led to Pheidippides's death. The high physical and biochemical deman
Document 2:::
Running economy (RE) is a complex, multifactorial concept that represents the sum of metabolic, cardiorespiratory, biomechanical and neuromuscular efficiency during running. Oxygen consumption (VO2) is the most commonly used method for measuring running economy, as the exchange of gases in the body, specifically oxygen and carbon dioxide, closely reflects energy metabolism. Those who are able to consume less oxygen while running at a given velocity are said to have a better running economy. However, straightforward oxygen usage does not account for whether the body is metabolising lipids or carbohydrates, which produce different amounts of energy per unit of oxygen; as such, accurate measurements of running economy must use both oxygen-uptake and carbon-dioxide data to estimate the calorific content of the substrate that the oxygen is being used to respire.
In distance running, an athlete may attempt to improve performance through training designed to improve running economy. Running economy has been found to be a good predictor of race performance; it has been found to be a stronger correlate of performance than maximal oxygen uptake (VO2 max) in trained runners with the same values.
The idea of running economy is increasingly used to understand performance, as new technology can drastically lower running times over marathon distances, independently of physiology or even training. Factors affecting running economy include a runner’s biology, training regimens, equipment, and environment. The recent accomplishment of Eliud Kipchoge running a marathon in under two hours has enhanced interest in the subject.
Measurement and values
Measurement
Running Economy is calculated by measuring VO₂ while running on a treadmill at various constant speeds for anywhere between three and fifteen minutes. VO₂ is the amount of oxygen consumed in milliliters over one minute and normalized by kilogram of body weight. To compare running economies between individuals, VO₂ is interpolated to common running velocities
Document 3:::
Specific Physical Preparedness (abbreviated SPP), also referred to by Sports-specific Physical Preparedness is the status of being prepared for the movements in a specific activity (usually a sport).
Specific training includes movements specific to a sport that can only be learned through repetition of those movements. For instance, shooting a free throw, running a marathon, and performing a handstand all require dedicated work on those skills. An SPP phase generally follows a phase of General Physical Preparedness, or GPP, which lays out an athletic base from which to build.
Related movements that mimic certain aspects of the movement which can be specialized in and put together to form it are also part of specific training.
External links
Clubbell mention of SPP
Exercise physiology
Document 4:::
The Swolf is a composite measurement in sports swimming that reflects how fast and how efficiently somebody is swimming. In contrast, time per distance (speed) neglects swimming technique, and the number of swimming strokes per lap neglects the purpose of competitive swimming: Covering a given distance in the shortest time.
Background
Swolf is a portmanteau of "swim" and "golf". As in golf, a lower number of strokes is better. The Swolf score is the number of seconds (for a given lap, 25 or 50 meters), plus the number of swimming strokes made in the same distance.
After a swimmer has learned to swim longer distances at a constant, high power output, it becomes essential to improve the swimming efficiency: Achieving a higher acceleration per swimming stroke, and gliding a longer distance between the strokes. The Swolf then becomes a useful tool to measure training progress.
Due to different body dimensions, a comparison between two swimmers is rarely useful; the Swolf is rather a guide that reflects one's own training progress. In contrast to the earlier days, where swimmers had to count their own strokes, modern sports watches carry acceleration sensors and indicate the Swolf number of a given training unit.
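The score described above is simple arithmetic; a minimal sketch (function name and example values are illustrative, not from the source):

```python
def swolf(lap_seconds, strokes):
    """Swolf score for one lap: lap time in seconds plus stroke count.

    As in golf, a lower score is better; it rewards both speed and
    fewer, more efficient strokes over the same lap distance.
    """
    return lap_seconds + strokes

# A 25 m lap swum in 22 s with 16 strokes:
score = swolf(22, 16)  # 38
```

A swimmer who trades two extra strokes for three seconds saved would lower the score, which is exactly the efficiency-versus-speed trade-off the metric is designed to capture.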
External links
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Long distance runners try to maintain constant velocity with very little acceleration or deceleration to conserve what?
A. momentum
B. fuel
C. pressure
D. energy
Answer:
|
|
sciq-7368
|
multiple_choice
|
What is a fold of the outer skin lining the shell called?
|
[
"stack",
"marble",
"mantle",
"cortex"
] |
C
|
Relevant Documents:
Document 0:::
A laminar organization describes the way certain tissues, such as bone membrane, skin, or brain tissues, are arranged in layers.
Types
Embryo
The earliest forms of laminar organization are shown in the diploblastic and triploblastic formation of the germ layers in the embryo. In the first week of human embryogenesis two layers of cells have formed, an external epiblast layer (the primitive ectoderm), and an internal hypoblast layer (primitive endoderm). This gives the early bilaminar disc. In the third week in the stage of gastrulation epiblast cells invaginate to form endoderm, and a third layer of cells known as mesoderm. Cells that remain in the epiblast become ectoderm. This is the trilaminar disc and the epiblast cells have given rise to the three germ layers.
Brain
In the brain a laminar organization is evident in the arrangement of the three meninges, the membranes that cover the brain and spinal cord. These membranes are the dura mater, arachnoid mater, and pia mater. The dura mater has two layers a periosteal layer near to the bone of the skull, and a meningeal layer next to the other meninges.
The cerebral cortex, the outer neural sheet covering the cerebral hemispheres can be described by its laminar organization, due to the arrangement of cortical neurons into six distinct layers.
Eye
The eye in mammals has an extensive laminar organization. There are three main layers – the outer fibrous tunic, the middle uvea, and the inner retina. These layers have sublayers with the retina having ten ranging from the outer choroid to the inner vitreous humor and including the retinal nerve fiber layer.
Skin
The human skin has a dense laminar organization. The outer epidermis has four or five layers.
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
In zoology, the epidermis is an epithelium (sheet of cells) that covers the body of a eumetazoan (animal more complex than a sponge). Eumetazoa have a cavity lined with a similar epithelium, the gastrodermis, which forms a boundary with the epidermis at the mouth.
Sponges have no epithelium, and therefore no epidermis or gastrodermis. The epidermis of a more complex invertebrate is just one layer deep, and may be protected by a non-cellular cuticle. The epidermis of a higher vertebrate has many layers, and the outer layers are reinforced with keratin and then die.
Document 3:::
The posterior surfaces of the ciliary processes are covered by a bilaminar layer of black pigment cells, which is continued forward from the retina, and is named the pars ciliaris retinae.
Document 4:::
A plate in animal anatomy may refer to several things:
Flat bones (examples: bony plates, dermal plates) of vertebrates
an appendage of the Stegosauria group of dinosaurs
articulated armoured plates covering the head of thorax of Placodermi (literally "plate-skinned"), an extinct class of prehistoric fish (including skull, thoracic and tooth plates)
bony shields of the Ostracoderms (armored jawless fishes) such as the dermal head armour of members of the class Pteraspidomorphi that include dorsal, ventral, rostral and pineal plates
plates of a carapace, such as the dermal plates of the shell of a turtle
dermal plates partly or completely covering the body of the fish in the order Gasterosteiformes that includes the sticklebacks and relatives
plates of dermal bones of the armadillo
Zygomatic plate, a bony plate derived from the flattened front part of the zygomatic arch (cheekbone) in rodent anatomy
Other flat structures
hairy plate-like keratin scales of the pangolin
Basal plate (disambiguation), several anatomy-related meanings
Other meanings in human anatomy
Alar plate, a neural structure in the embryonic nervous system
Cribriform plate, of the ethmoid bone (horizontal lamina) received into the ethmoidal notch of the frontal bone and roofs in the nasal cavities
Epiphyseal plate, a hyaline cartilage plate in the metaphysis at each end of a long bone
Lateral pterygoid plate of the sphenoid, a broad, thin and everted bone that forms the lateral part of a horseshoe like process that extends from the inferior aspect of the sphenoid bone
Nail plate, the hard and translucent portion of the nail
Perpendicular plate of the ethmoid bone (vertical plate), a thin, flattened lamina, polygonal in form, which descends from the under surface of the cribriform plate, and assists in forming the septum of the nose
Related structures
Scute, a bony external plate or scale overlaid with horn, as on the shell of a turtle, the skin of crocodilians and the feet of
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is a fold of the outer skin lining the shell called?
A. stack
B. marble
C. mantle
D. cortex
Answer:
|
|
sciq-8682
|
multiple_choice
|
What is the collective travel of sheep known as?
|
[
"herd",
"load",
"den",
"gaggle"
] |
A
|
Relevant Documents:
Document 0:::
A herd is a social group of certain animals of the same species, either wild or domestic. The form of collective animal behavior associated with this is called herding. These animals are known as gregarious animals.
The term herd is generally applied to mammals, and most particularly to the grazing ungulates that classically display this behaviour. Different terms are used for similar groupings in other species; in the case of birds, for example, the word is flocking, but flock may also be used for mammals, particularly sheep or goats. Large groups of carnivores are usually called packs, and in nature a herd is classically subject to predation from pack hunters.
Special collective nouns may be used for particular taxa (for example a flock of geese, if not in flight, is sometimes called a gaggle) but for theoretical discussions of behavioural ecology, the generic term herd can be used for all such kinds of assemblage.
The word herd, as a noun, can also refer to one who controls, possesses and has care for such groups of animals when they are domesticated. Examples of herds in this sense include shepherds (who tend to sheep), goatherds (who tend to goats), and cowherds (who tend to cattle).
The structure and size of herds
When an association of animals (or, by extension, people) is described as a herd, the implication is that the group tends to act together (for example, all moving in the same direction at a given time), but that this does not occur as a result of planning or coordination. Rather, each individual is choosing behaviour in correspondence with most other members, possibly through imitation or possibly because all are responding to the same external circumstances. A herd can be contrasted with a coordinated group where individuals have distinct roles. Many human groupings, such as army detachments or sports teams, show such coordination and differentiation of roles, but so do some animal groupings such as those of eusocial insects, which are coordina
Document 1:::
Herding is the act of bringing individual animals together into a group (herd), maintaining the group, and moving the group from place to place—or any combination of those. Herding can refer either to the process of animals forming herds in the wild, or to human intervention forming herds for some purpose. While the layperson uses the term "herding" to describe this human intervention, most individuals involved in the process term it mustering, "working stock", or droving.
Some animals instinctively gather together as a herd. A group of animals fleeing a predator will demonstrate herd behavior for protection; while some predators, such as wolves and dogs have instinctive herding abilities derived from primitive hunting instincts. Instincts in herding dogs and trainability can be measured at noncompetitive herding tests. Dogs exhibiting basic herding instincts can be trained to aid in herding and to compete in herding and stock dog trials. Sperm whales have also been observed teaming up to herd prey in a coordinated feeding behavior.
Herding is used in agriculture to manage domesticated animals. Herding can be performed by people or trained animals such as herding dogs that control the movement of livestock under the direction of a person. The people whose occupation it is to herd or control animals often have herd added to the name of the animal they are herding to describe their occupation (shepherd, goatherd, cowherd).
A competitive sport has developed in some countries where the combined skill of man and herding dog is tested and judged in a "trial", such as a sheepdog trial. Animals such as sheep, camel, yak, and goats are mostly reared. They provide milk, meat and other products to the herders and their families.
Document 2:::
Let's Count Goats! is a 2010 children's picture book by Mem Fox and illustrated by Jan Thomas. It is a counting book with the narrator inviting the reader to count goats that appear in the pictures as they engage in humanlike behaviour.
Reception
In a review of Let's Count Goats!, School Library Journal wrote "Fox and Thomas draw viewers in through catchy phrases and amusing pictures of goats that appear in a variety of shapes, sizes, and numbers", and called it "a clever counting lesson".
Let's Count Goats! has also been reviewed by Kirkus Reviews, Publishers Weekly, Booklist, Horn Book Guides, and Magpies.
Document 3:::
Sheep (: sheep) or domestic sheep (Ovis aries) are a domesticated, ruminant mammal typically kept as livestock. Although the term sheep can apply to other species in the genus Ovis, in everyday usage it almost always refers to domesticated sheep. Like all ruminants, sheep are members of the order Artiodactyla, the even-toed ungulates. Numbering a little over one billion, domestic sheep are also the most numerous species of sheep. An adult female is referred to as a ewe (), an intact male as a ram, occasionally a tup, a castrated male as a wether, and a young sheep as a lamb.
Sheep are most likely descended from the wild mouflon of Europe and Asia, with Iran being a geographic envelope of the domestication center. One of the earliest animals to be domesticated for agricultural purposes, sheep are raised for fleeces, meat (lamb, hogget or mutton) and milk. A sheep's wool is the most widely used animal fiber, and is usually harvested by shearing. In Commonwealth countries, ovine meat is called lamb when from younger animals and mutton when from older ones; in the United States, meat from both older and younger animals is usually called lamb. Sheep continue to be important for wool and meat today, and are also occasionally raised for pelts, as dairy animals, or as model organisms for science.
Sheep husbandry is practised throughout the majority of the inhabited world, and has been fundamental to many civilizations. In the modern era, Australia, New Zealand, the southern and central South American nations, and the British Isles are most closely associated with sheep production.
There is a large lexicon of unique terms for sheep husbandry which vary considerably by region and dialect. Use of the word sheep began in Middle English as a derivation of the Old English word . A group of sheep is called a flock. Many other specific terms for the various life stages of sheep exist, generally related to lambing, shearing, and age.
Being a key animal in the history of farming
Document 4:::
Dry Sheep Equivalent (DSE) is a standard unit frequently used in Australia to compare the feed requirements of different classes of stock or to assess the carrying capacity and potential productivity of a given farm or area of grazing land.
The unit represents the amount of feed required by a two-year-old, 45 kg (some sources state 50 kg) Merino sheep (wether or non-lactating, non-pregnant ewe) to maintain its weight. One DSE is equivalent to 7.60 megajoule (MJ) per day.
The carrying capacity of a farm is commonly determined in Australia by expressing the number of stock carried during a period of feed shortage in terms of their DSEs.
Benchmarking standards used by Grazing for Profit programmes quote that one labour unit (40 hours per week) is required for 6,000 DSE (other benchmarking standards set the figure at 7,000 DSE).
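The DSE arithmetic described above can be sketched in Python. The per-head DSE ratings in the example herd are illustrative placeholders, not figures from the source; only the 7.60 MJ/day energy requirement per DSE comes from the text:

```python
MJ_PER_DSE_PER_DAY = 7.60  # daily feed energy requirement of one DSE

def total_dse(stock):
    """Total DSE of a mixed herd.

    stock: list of (head_count, dse_rating_per_head) pairs, where each
    rating expresses one animal's feed requirement relative to the
    standard 45 kg Merino wether.
    """
    return sum(head * rating for head, rating in stock)

def daily_feed_energy_mj(dse):
    """Daily feed energy (MJ) needed to maintain a herd of the given DSE."""
    return dse * MJ_PER_DSE_PER_DAY

# Hypothetical herd: 500 dry ewes at 1.5 DSE each, 120 cows at 8.0 DSE each
farm = [(500, 1.5), (120, 8.0)]
dse = total_dse(farm)               # 1710 DSE
energy = daily_feed_energy_mj(dse)  # 12996 MJ per day
```

Under the benchmarking figures quoted above (one labour unit per 6,000–7,000 DSE), such a farm would occupy well under one full labour unit.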
See also
Livestock grazing comparison
Sheep
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the collective travel of sheep known as?
A. herd
B. load
C. den
D. gaggle
Answer:
|
|
sciq-6878
|
multiple_choice
|
In what form is the heat absorbed when you heat ice and it reaches a temperature of 0 c?
|
[
"mechanical energy",
"geothermal energy",
"potential energy",
"radiation energy"
] |
C
|
Relevant Documents:
Document 0:::
Thermofluids is a branch of science and engineering encompassing four intersecting fields:
Heat transfer
Thermodynamics
Fluid mechanics
Combustion
The term is a combination of "thermo", referring to heat, and "fluids", which refers to liquids, gases and vapors. Temperature, pressure, equations of state, and transport laws all play an important role in thermofluid problems. Phase transition and chemical reactions may also be important in a thermofluid context. The subject is sometimes also referred to as "thermal fluids".
Heat transfer
Heat transfer is a discipline of thermal engineering that concerns the transfer of thermal energy from one physical system to another. Heat transfer is classified into various mechanisms, such as heat conduction, convection, thermal radiation, and phase-change transfer. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer.
Sections include :
Energy transfer by heat, work and mass
Laws of thermodynamics
Entropy
Refrigeration Techniques
Properties and nature of pure substances
Applications
Engineering : Predicting and analysing the performance of machines
Thermodynamics
Thermodynamics is the science of energy conversion involving heat and other forms of energy, most notably mechanical work. It studies and interrelates the macroscopic variables, such as temperature, volume and pressure, which describe physical, thermodynamic systems.
Fluid mechanics
Fluid mechanics is the study of the physical forces at work during fluid flow. Fluid mechanics can be divided into fluid kinematics, the study of fluid motion, and fluid kinetics, the study of the effect of forces on fluid motion. Fluid mechanics can further be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Some of its more interesting concepts include momentum and reactive forces in fluid flow and fluid machinery theory and performance.
Sections include:
Flu
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Engineering Equation Solver (EES) is a commercial software package used for the solution of systems of simultaneous non-linear equations. It provides many useful specialized functions and equations for the solution of thermodynamics and heat transfer problems, making it a useful and widely used program for mechanical engineers working in these fields. EES stores thermodynamic property data, which eliminates iterative problem solving by hand through the use of code that retrieves properties at specified thermodynamic states. EES performs the iterative solving itself, eliminating the tedious and time-consuming task of acquiring thermodynamic properties by hand.
EES also includes parametric tables that allow the user to compare a number of variables at a time. Parametric tables can also be used to generate plots. EES can also integrate, both as a command in code and in tables. EES also provides optimization tools that minimize or maximize a chosen variable by varying a number of other variables. Lookup tables can be created to store information that can be accessed by a call in the code. EES code allows the user to input equations in any order and obtain a solution, but also can contain if-then statements, which can also be nested within each other to create if-then-else statements. Users can write functions for use in their code, and also procedures, which are functions with multiple outputs.
Adjusting the preferences allows the user choose a unit system, specify stop criteria, including the number of iterations, and also enable/disable unit checking and recommending units, among other options. Users can also specify guess values and variable limits to aid the iterative solving process and help EES quickly and successfully find a solution.
The program is developed by F-Chart Software, a commercial spin-off of Prof Sanford A Klein from Department of Mechanical Engineering
University of Wisconsin-Madison.
EES is included as attached software for a number
Document 3:::
A continuous cooling transformation (CCT) phase diagram is often used when heat treating steel. These diagrams are used to represent which types of phase changes will occur in a material as it is cooled at different rates. These diagrams are often more useful than time-temperature-transformation diagrams because it is more convenient to cool materials at a certain rate (temperature-variable cooling), than to cool quickly and hold at a certain temperature (isothermal cooling).
Types of continuous cooling diagrams
There are two types of continuous cooling diagrams drawn for practical purposes.
Type 1: This is the plot beginning with the transformation start point, cooling with a specific transformation fraction and ending with a transformation finish temperature for all products against transformation time for each cooling curve.
Type 2: This is the plot beginning with the transformation start point, cooling with a specific transformation fraction and ending with a transformation finish temperature for all products against cooling rate or bar diameter of the specimen for each type of cooling medium.
See also
Isothermal transformation
Phase diagram
Document 4:::
A phase-change material (PCM) is a substance which releases/absorbs sufficient energy at phase transition to provide useful heat or cooling. Generally the transition will be from one of the first two fundamental states of matter - solid and liquid - to the other. The phase transition may also be between non-classical states of matter, such as the conformity of crystals, where the material goes from conforming to one crystalline structure to conforming to another, which may be a higher or lower energy state.
The energy released/absorbed by phase transition from solid to liquid, or vice versa, the heat of fusion is generally much higher than the sensible heat. Ice, for example, requires 333.55 J/g to melt, but then water will rise one degree further with the addition of just 4.18 J/g. Water/ice is therefore a very useful phase change material and has been used to store winter cold to cool buildings in summer since at least the time of the Achaemenid Empire.
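The latent-versus-sensible comparison above can be checked directly from the quoted figures (333.55 J/g to melt ice; 4.18 J/g·K to warm liquid water). A minimal sketch:

```python
# Sketch comparing latent heat of fusion vs sensible heat for water/ice,
# using the figures quoted above.

HEAT_OF_FUSION = 333.55   # J/g, ice -> water at 0 degrees C
SPECIFIC_HEAT = 4.18      # J/(g*K), liquid water

def energy_to_melt(mass_g: float) -> float:
    """Latent heat absorbed melting `mass_g` grams of ice at 0 degrees C."""
    return mass_g * HEAT_OF_FUSION

def energy_to_warm(mass_g: float, delta_t: float) -> float:
    """Sensible heat to raise `mass_g` grams of water by `delta_t` kelvin."""
    return mass_g * SPECIFIC_HEAT * delta_t

# Melting 1 g of ice stores as much energy as warming that gram by ~80 K.
equivalent_rise = HEAT_OF_FUSION / SPECIFIC_HEAT
print(energy_to_melt(1.0), equivalent_rise)
```

The roughly 80:1 ratio is what makes water/ice such an effective phase-change material for latent heat storage.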
By melting and solidifying at the phase-change temperature (PCT), a PCM is capable of storing and releasing large amounts of energy compared to sensible heat storage. Heat is absorbed or released when the material changes from solid to liquid and vice versa or when the internal structure of the material changes; PCMs are accordingly referred to as latent heat storage (LHS) materials.
There are two principal classes of phase-change material: organic (carbon-containing) materials derived either from petroleum, from plants or from animals; and salt hydrates, which generally either use natural salts from the sea or from mineral deposits or are by-products of other processes. A third class is solid to solid phase change.
PCMs are used in many different commercial applications where energy storage and/or stable temperatures are required, including, among others, heating pads, cooling for telephone switching boxes, and clothing.
By far the biggest potential market is for building heating and cooling. In this ap
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In what form is the heat absorbed when you heat ice and it reaches a temperature of 0 c?
A. mechanical energy
B. geothermal energy
C. potential energy
D. radiation energy
Answer:
|
|
sciq-3769
|
multiple_choice
|
About 50% of all animal species died off between the mesozoic and which other era?
|
[
"precambrian",
"jurassic",
"cretaceous",
"cenozoic"
] |
D
|
Relevant Documents:
Document 0:::
Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology.
Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago.
Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad
Document 1:::
Timeline
Paleontology
Paleontology timelines
Document 2:::
This article is a list of biological species, subspecies, and evolutionary significant units that are known to have become extinct during the Holocene, the current geologic epoch, ordered by their known or approximate date of disappearance from oldest to most recent.
The Holocene is considered to have started with the Holocene glacial retreat around 11650 years Before Present ( BC). It is characterized by a general trend towards global warming, the expansion of anatomically modern humans (Homo sapiens) to all emerged land masses, the appearance of agriculture and animal husbandry, and a reduction in global biodiversity. The latter, dubbed the sixth mass extinction in Earth history, is largely attributed to increased human population and activity, and may have started already during the preceding Pleistocene epoch with the demise of the Pleistocene megafauna.
The following list is incomplete by necessity, since the majority of extinctions are thought to be undocumented, and for many others there isn't a definitive, widely accepted last, or most recent record. According to the species-area theory, the present rate of extinction may be up to 140,000 species per year.
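Extinction-rate estimates of this kind rest on the species-area relation, S = c·A^z. The sketch below illustrates the idea; the constants c and z are illustrative assumptions, not fitted values from any survey.

```python
# Hedged sketch of the species-area relation, S = c * A**z, which underlies
# extinction-rate estimates like the one quoted above. The constants below
# are illustrative, not fitted values.

def species_count(area: float, c: float = 20.0, z: float = 0.25) -> float:
    """Species supported by habitat of size `area` under S = c * A**z."""
    return c * area ** z

def fraction_lost(area_fraction_remaining: float, z: float = 0.25) -> float:
    """Fraction of species lost when only this fraction of habitat remains."""
    return 1 - area_fraction_remaining ** z

# Halving the habitat loses roughly 16% of species at z = 0.25.
print(fraction_lost(0.5))
```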
10th millennium BC
9th millennium BC
8th millennium BC
7th millennium BC
6th millennium BC
5th millennium BC
4th millennium BC
3rd millennium BC
2nd millennium BC
1st millennium BC
1st millennium CE
1st–5th centuries
6th–10th centuries
2nd millennium CE
11th-12th century
13th-14th century
15th-16th century
17th century
18th century
19th century
1800s-1820s
1830s-1840s
1850s-1860s
1870s
1880s
1890s
20th century
1900s
1910s
1920s
1930s
1940s
1950s
1960s
1970s
1980s
1990s
3rd millennium CE
21st century
2000s
2010s
See also
List of extinct animals
Extinction event
Quaternary extinction event
Holocene extinction
Timeline of the evolutionary history of life
Timeline of environmental history
Index of environmental articles
List of environmental issues
Document 3:::
The Mesozoic–Cenozoic Radiation is the third major extended increase of biodiversity in the Phanerozoic, after the Cambrian Explosion and the Great Ordovician Biodiversification Event, and it appeared to exceed the equilibrium reached after the Ordovician radiation. Made known by its identification in marine invertebrates, this evolutionary radiation began in the Mesozoic, after the Permian extinctions, and continues to this date. This spectacular radiation affected both terrestrial and marine flora and fauna, during which the "modern" fauna came to replace much of the Paleozoic fauna. Notably, this radiation event was marked by the rise of angiosperms during the mid-Cretaceous, and the K-Pg extinction, which initiated the rapid increase in mammalian biodiversity.
Causes and significance
The exact causes of this extended increase in biodiversity are still being debated, however, the Mesozoic-Cenozoic radiation has often been related to large-scale paleogeographical changes. The fragmentation of the supercontinent Pangaea has been related to an increase in both marine and terrestrial biodiversity. The link between the fragmentation of supercontinents and biodiversity was first proposed by Valentine and Moores in 1972. They hypothesized that the isolation of terrestrial environments and the partitioning of oceanic water masses, as a result of the breaking up of Pangaea, resulted in an increase in allopatric speciation, which led to an increased biodiversity. These smaller landmasses, while individually being less diverse than a supercontinent, contain a high degree of endemic species, resulting in an overall higher biodiversity than a single landmass of equivalent size. It is therefore argued that, similarly to the Ordovician bio-diversification, the differentiation of biotas along environmental gradients caused by the fragmentation of a supercontinent, was a driving force behind the Mesozoic-Cenozoic radiation.
Part of the dramatic increase in biodiversity during
Document 4:::
The history of life on Earth is closely associated with environmental change on multiple spatial and temporal scales. Climate change is a long-term change in the average weather patterns that have come to define Earth’s local, regional and global climates. These changes have a broad range of observed effects that are synonymous with the term. Climate change is any significant long term change in the expected pattern, whether due to natural variability or as a result of human activity. Predicting the effects that climate change will have on plant biodiversity can be achieved using various models, however bioclimatic models are most commonly used.
Environmental conditions play a key role in defining the function and geographic distributions of plants, in combination with other factors, thereby modifying patterns of biodiversity. Changes in long term environmental conditions that can be collectively coined climate change are known to have had enormous impacts on current plant diversity patterns; further impacts are expected in the future. It is predicted that climate change will remain one of the major drivers of biodiversity patterns in the future. Climate change is thought to be one of several factors causing the currently ongoing human-triggered mass extinction, which is changing the distribution and abundance of many plants.
Palaeo context
The Earth has experienced a constantly changing climate in the time since plants first evolved. In comparison to the present day, this history has seen Earth as cooler, warmer, drier and wetter, and (carbon dioxide) concentrations have been both higher and lower. These changes have been reflected by constantly shifting vegetation, for example forest communities dominating most areas in interglacial periods, and herbaceous communities dominating during glacial periods. It has been shown through fossil records that past climatic change has been a major driver of the processes of speciation and extinction. The best known example
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
About 50% of all animal species died off between the mesozoic and which other era?
A. precambrian
B. jurassic
C. cretaceous
D. cenozoic
Answer:
|
|
ai2_arc-532
|
multiple_choice
|
Which of these is only found outside the solar system?
|
[
"planets",
"moons",
"nebulae",
"comets"
] |
C
|
Relevant Documents:
Document 0:::
This is a list of potentially habitable exoplanets. The list is mostly based on estimates of habitability by the Habitable Exoplanets Catalog (HEC), and data from the NASA Exoplanet Archive. The HEC is maintained by the Planetary Habitability Laboratory at the University of Puerto Rico at Arecibo. There is also a speculative list being developed of superhabitable planets.
Surface planetary habitability is thought to require orbiting at the right distance from the host star for liquid surface water to be present, in addition to various geophysical and geodynamical aspects, atmospheric density, radiation type and intensity, and the host star's plasma environment.
List
This is a list of exoplanets within the circumstellar habitable zone that are under 10 Earth masses and smaller than 2.5 Earth radii, and thus have a chance of being rocky. Note that inclusion on this list does not guarantee habitability, and in particular the larger planets are unlikely to have a rocky composition. Earth is included for comparison.
Note that mass and radius values prefixed with "~" have not been measured, but are estimated from a mass-radius relationship.
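A mass-radius relation of the kind mentioned above can be sketched as a simple power law. The exponent below (0.27 in Earth units, roughly appropriate for rocky planets) is an illustrative assumption; real catalogs use more detailed fitted models.

```python
# Illustrative sketch of estimating a radius from a mass-radius relation.
# The power law R = M**0.27 (Earth units) is an assumed stand-in for the
# fitted relations actually used by exoplanet catalogs.

def radius_from_mass(mass_earth: float, exponent: float = 0.27) -> float:
    """Rough rocky-planet radius (Earth radii) from mass (Earth masses)."""
    return mass_earth ** exponent

print(radius_from_mass(1.0))   # 1.0 by construction
print(radius_from_mass(5.0))   # ~1.54 Earth radii for a 5 Earth-mass planet
```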
Previous candidates
Some exoplanet candidates detected by radial velocity that were originally thought to be potentially habitable were later found to most likely be artifacts of stellar activity. These include Gliese 581 d & g, Gliese 667 Ce & f, Gliese 682 b & c, Kapteyn b, and Gliese 832 c.
HD 85512 b was initially estimated to be potentially habitable, but updated models for the boundaries of the habitable zone placed the planet interior to the HZ, and it is now considered non-habitable. Kepler-69c has gone through a similar process; though initially estimated to be potentially habitable, it was quickly realized that the planet is more likely to be similar to Venus, and is thus no longer considered habitable. Several other planets, such as Gliese 180 b, also appear to be examples of planets once considered potentially habit
Document 1:::
This article is a list of notable unsolved problems in astronomy. Some of these problems are theoretical, meaning that existing theories may be incapable of explaining certain observed phenomena or experimental results. Others are experimental, meaning that experiments necessary to test proposed theory or investigate a phenomenon in greater detail have not yet been performed. Some pertain to unique events or occurrences that have not repeated themselves and whose causes remain unclear.
Planetary astronomy
Our solar system
Orbiting bodies and rotation:
Are there any non-dwarf planets beyond Neptune?
Why do extreme trans-Neptunian objects have elongated orbits?
Rotation rate of Saturn:
Why does the magnetosphere of Saturn rotate at a rate close to that at which the planet's clouds rotate?
What is the rotation rate of Saturn's deep interior?
Satellite geomorphology:
What is the origin of the chain of high mountains that closely follows the equator of Saturn's moon, Iapetus?
Are the mountains the remnant of hot and fast-rotating young Iapetus?
Are the mountains the result of material (either from the rings of Saturn or its own ring) that over time collected upon the surface?
Extra-solar
How common are Solar System-like planetary systems? Some observed planetary systems contain Super-Earths and Hot Jupiters that orbit very close to their stars. Systems with Jupiter-like planets in Jupiter-like orbits appear to be rare. There are several possibilities why Jupiter-like orbits are rare, including that data is lacking or the grand tack hypothesis.
Stellar astronomy and astrophysics
Solar cycle:
How does the Sun generate its periodically reversing large-scale magnetic field?
How do other Sol-like stars generate their magnetic fields, and what are the similarities and differences between stellar activity cycles and that of the Sun?
What caused the Maunder Minimum and other grand minima, and how does the solar cycle recover from a minimum state?
Coronal heat
Document 2:::
Planetary oceanography, also called astro-oceanography or exo-oceanography, is the study of oceans on planets and moons other than Earth. Unlike other planetary sciences like astrobiology, astrochemistry and planetary geology, it only began after the discovery of underground oceans in Saturn's moon Titan and Jupiter's moon Europa. This field remains speculative until further missions reach the oceans beneath the rock or ice layer of the moons. There are many theories about oceans or even ocean worlds of celestial bodies in the Solar System, from oceans made of diamond in Neptune to a gigantic ocean of liquid hydrogen that may exist underneath Jupiter's surface.
Early in their geologic histories, Mars and Venus are theorized to have had large water oceans. The Mars ocean hypothesis suggests that nearly a third of the surface of Mars was once covered by water, and a runaway greenhouse effect may have boiled away the global ocean of Venus. Compounds such as salts and ammonia dissolved in water lower its freezing point so that water might exist in large quantities in extraterrestrial environments as brine or convecting ice. Unconfirmed oceans are speculated beneath the surface of many dwarf planets and natural satellites; notably, the ocean of the moon Europa is estimated to have over twice the water volume of Earth's. The Solar System's giant planets are also thought to have liquid atmospheric layers of yet to be confirmed compositions. Oceans may also exist on exoplanets and exomoons, including surface oceans of liquid water within a circumstellar habitable zone. Ocean planets are a hypothetical type of planet with a surface completely covered with liquid.
Extraterrestrial oceans may be composed of water or other elements and compounds. The only confirmed large stable bodies of extraterrestrial surface liquids are the lakes of Titan, which are made of hydrocarbons instead of water. However, there is strong evidence for subsurface water oceans' existence elsewhere in t
Document 3:::
The Living Cosmos: Our Search for Life in the Universe is a non-fiction book by the astronomer Chris Impey that discusses the subject of astrobiology and efforts to discover life beyond Earth. It was published as a hardcover by Random House in 2007 and as a paperback by Cambridge University Press in 2011.
Summary
The Living Cosmos is a non-fiction book by University of Arizona professor of astronomy Chris Impey on the status of astrobiology. It summarizes the state of research as scientists trying to address one of the most profound questions we can ask about nature: Is there life in the universe beyond the Earth? The author interviewed dozens of leading researchers, and he includes material from the interviews and vignettes of the researchers in the book. The companion web site to the book contains articles and video clips on astrobiology produced by the author, as well as a glossary and links to other relevant sites.
The book begins with a review of the cosmic setting for life and reviews the insights of astronomy since Copernicus. The discovery that we live in a "biological universe" would be a continuation of the progression where there is nothing exceptional about the setting of the Earth and the events that have occurred on this planet.
Subsequent chapters consider the origin of life on Earth, and the physical extremes to which life as adapted. In astrobiology, it pays to think "outside the box" and imagine how strange life might be or whether post-biological evolution is possible, where the basis is mechanical or computational. A chapter on evolution shows how it is affected by the cosmic environment.
Possibilities of life in the Solar System are considered next, with emphasis on Mars, Titan, and outer moons harboring liquid water. Next is a summary of the rapidly changing state of play in the search for extrasolar planets or exoplanets. After centuries of speculation and decades of futile searching, planets around other stars were first discovered in 19
Document 4:::
The following tables list all minor planets and comets that have been visited by robotic spacecraft.
List of minor planets visited by spacecraft
A total of 17 minor planets (asteroids, dwarf planets, and Kuiper belt objects) have been visited by space probes. Moons (not directly orbiting the Sun) and planets are not minor planets and thus are not included in the table below.
Incidental flybys
In addition to the above listed objects, four asteroids have been imaged by spacecraft at distances too large to resolve features (over 100,000 km), and are labeled as such.
List of comets visited by spacecraft
Comet: 21P/Giacobini–Zinner (dimensions 2 km, discovered 1900). Space probe: ICE, closest approach in 1985 at 7,800 km (7,800 nucleus radii). Remarks: first flyby of a comet.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which of these is only found outside the solar system?
A. planets
B. moons
C. nebulae
D. comets
Answer:
|
|
sciq-8003
|
multiple_choice
|
A voltage source and a conductor are common to all what?
|
[
"cooling circuits",
"magnets",
"thermometers",
"electric circuits"
] |
D
|
Relevant Documents:
Document 0:::
Mathematical methods are integral to the study of electronics.
Mathematics in electronics
Electronics engineering careers usually include courses in calculus (single and multivariable), complex analysis, differential equations (both ordinary and partial), linear algebra and probability. Fourier analysis and Z-transforms are also subjects which are usually included in electrical engineering programs. Laplace transform can simplify computing RLC circuit behaviour.
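As an example of the Laplace-transform shortcut for circuit behaviour, the step response of a first-order RC circuit follows from V_c(s) = V / (s(1 + sRC)), which inverts to v_c(t) = V(1 - e^(-t/RC)). A minimal numeric sketch, with illustrative component values:

```python
# Step response of an RC circuit, whose closed form
# v_c(t) = V * (1 - exp(-t / RC)) drops out of the Laplace-domain
# expression V_c(s) = V / (s * (1 + s*R*C)). Values are illustrative.
import math

def rc_step_response(v: float, r: float, c: float, t: float) -> float:
    """Capacitor voltage at time t after a voltage step of height v."""
    tau = r * c  # time constant, from the pole at s = -1/(R*C)
    return v * (1 - math.exp(-t / tau))

# R = 1 kOhm, C = 1 uF -> tau = 1 ms; after one tau the capacitor
# has reached about 63% of the step height.
print(rc_step_response(5.0, 1000.0, 1e-6, 0.001))  # ~3.16 V
```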
Basic applications
A number of electrical laws apply to all electrical networks. These include
Faraday's law of induction: Any change in the magnetic environment of a coil of wire will cause a voltage (emf) to be "induced" in the coil.
Gauss's Law: The total of the electric flux out of a closed surface is equal to the charge enclosed divided by the permittivity.
Kirchhoff's current law: the sum of all currents entering a node is equal to the sum of all currents leaving the node or the sum of total current at a junction is zero
Kirchhoff's voltage law: the directed sum of the electrical potential differences around a circuit must be zero.
Ohm's law: the voltage across a resistor is the product of its resistance and the current flowing through it, at constant temperature.
Norton's theorem: any two-terminal collection of voltage sources and resistors is electrically equivalent to an ideal current source in parallel with a single resistor.
Thévenin's theorem: any two-terminal combination of voltage sources and resistors is electrically equivalent to a single voltage source in series with a single resistor.
Millman's theorem: the voltage on the ends of branches in parallel is equal to the sum of the currents flowing in every branch divided by the total equivalent conductance.
See also Analysis of resistive circuits.
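Several of the laws above combine in the classic two-resistor voltage divider: Kirchhoff's voltage law around the loop plus Ohm's law gives the loop current, and Ohm's law again gives the output voltage. The component values below are hypothetical, chosen only for illustration.

```python
# Minimal sketch: KVL + Ohm's law applied to a two-resistor voltage divider.

def divider_voltage(v_source: float, r1: float, r2: float) -> float:
    """Voltage across r2 in a series divider driven by v_source.

    KVL around the loop with Ohm's law gives i = V / (R1 + R2),
    so the drop across R2 is V * R2 / (R1 + R2)."""
    i = v_source / (r1 + r2)  # loop current from KVL + Ohm's law
    return i * r2             # Ohm's law across R2

# With a 9 V source, 1 kOhm and 2 kOhm: two thirds of the source
# drops across R2.
print(divider_voltage(9.0, 1000.0, 2000.0))  # ~6.0 V
```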
Circuit analysis is the study of methods to solve linear systems for an unknown variable.
Circuit analysis
Components
There are many electronic components currently used and they all have thei
Document 1:::
In electrical engineering, electrical terms are associated into pairs called duals. A dual of a relationship is formed by interchanging voltage and current in an expression. The dual expression thus produced is of the same form, and the reason that the dual is always a valid statement can be traced to the duality of electricity and magnetism.
Here is a partial list of electrical dualities:
voltage – current
parallel – serial (circuits)
resistance – conductance
voltage division – current division
impedance – admittance
capacitance – inductance
reactance – susceptance
short circuit – open circuit
Kirchhoff's current law – Kirchhoff's voltage law.
Thévenin's theorem – Norton's theorem
History
The use of duality in circuit theory is due to Alexander Russell who published his ideas in 1904.
Examples
Constitutive relations
Resistor and conductor (Ohm's law)
Capacitor and inductor – differential form
Capacitor and inductor – integral form
Voltage division — current division
Impedance and admittance
Resistor and conductor
Capacitor and inductor
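The duality pairs listed above can be made concrete with impedance and admittance: the series-RLC impedance and the parallel-RLC admittance have exactly the same algebraic form once R↔G and L↔C are swapped. The numeric values below are illustrative, not from the text.

```python
# Sketch of voltage-current duality: the series-RLC impedance and the
# parallel-RLC admittance share one functional form under R<->G, L<->C.

def series_impedance(r, l, c, w):
    """Z = R + j*w*L + 1/(j*w*C) for a series RLC branch."""
    return r + 1j * w * l + 1 / (1j * w * c)

def parallel_admittance(g, c, l, w):
    """Dual expression: Y = G + j*w*C + 1/(j*w*L) for a parallel RLC."""
    return g + 1j * w * c + 1 / (1j * w * l)

w = 1000.0  # rad/s, illustrative frequency
z = series_impedance(50.0, 0.1, 1e-6, w)
y = parallel_admittance(50.0, 0.1, 1e-6, w)
print(z, y)  # numerically identical: the dual swaps roles, not the algebra
```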
See also
Duality (electricity and magnetism)
Duality (mechanical engineering)
Dual impedance
Dual graph
Mechanical–electrical analogies
List of dualities
Document 2:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
Document 3:::
The Bernard Price Memorial Lecture is the premier annual lecture of the South African Institute of Electrical Engineers. It is of general scientific or engineering interest and is given by an invited guest, often from overseas, at several of the major centres on South Africa. The main lecture and accompanying dinner are usually held at the University of Witwatersrand and it is also presented in the space of one week at other centres, typically Cape Town, Durban, East London and Port Elizabeth.
The Lecture is named in memory of the eminent electrical engineer Bernard Price. The first Lecture was held in 1951 and it has occurred as an annual event ever since.
Lecturers
1951 Basil Schonland
1952 A M Jacobs
1953 H J Van Eck
1954 J M Meek
1955 Frank Nabarro
1956 A L Hales
1957 P G Game
1958 Colin Cherry
1959 Thomas Allibone
1960 M G Say
1961 Willis Jackson
1963 W R Stevens
1964 William Pickering
1965 G H Rawcliffe
1966 Harold Bishop
1967 Eric Eastwood
1968 F J Lane
1969 A H Reeves
1970 Andrew R Cooper
1971 Herbert Haslegrave
1972 W J Bray
1973 R Noser
1974 D Kind
1975 L Kirchmayer
1976 S Jones
1977 J Johnson
1978 T G E Cockbain
1979 A R Hileman
1980 James Redmond
1981 L M Muntzing
1982 K F Raby
1983 R Isermann
1984 M N John
1985 J W L de Villiers
1986 Derek Roberts
1987 Wolfram Boeck
1988 Karl Gehring
1989 Leonard Sagan
1990 GKF Heyner
1991 P S Blythin
1992 P M Neches
1993 P Radley
1994 P R Rosen
1995 F P Sioshansi
1996 J Taylor
1997 M Chamia
1998 C Gellings
1999 M W Kennedy
2000 John Midwinter
2001 Pragasen Pillay
2002 Polina Bayvel
2003 Case Rijsdijk
2004 Frank Larkins
2005 Igor Aleksander
2006 Kevin Warwick
2007 Skip Hatfield
2008 Sami Solanki
2009 William Gruver
2010 Glenn Ricart
2011 Philippe Paelinck
2012 Nick Frydas
2013 Vint Cerf
2014 Ian Jandrell
2015 Saurabh Sinha
2016 Tshilidzi Marwala
2017 Fulufhelo Nelwamondo
2018 Ian Craig
2019 Robert Metcalfe
2020 Roger Price
Document 4:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
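For reference, the correct choice ("decreases") follows from the ideal-gas adiabatic relation, a standard result supplied here rather than stated in the text:

```latex
T V^{\gamma-1} = \text{const}
\;\Rightarrow\;
\frac{T_2}{T_1} = \left(\frac{V_1}{V_2}\right)^{\gamma-1} < 1
\qquad (\gamma > 1,\; V_2 > V_1)
```

Since the heat capacity ratio \(\gamma\) exceeds 1 and the volume grows during expansion, the temperature ratio is below 1, so the gas cools.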
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A voltage source and a conductor are common to all what?
A. cooling circuits
B. magnets
C. thermometers
D. electric circuits
Answer:
|
|
sciq-4863
|
multiple_choice
|
The uppermost mantle is part of which sphere?
|
[
"lithosphere",
"isosphere",
"thermosphere",
"troposphere"
] |
A
|
Relevant Documents:
Document 0:::
A lithosphere () is the rigid, outermost rocky shell of a terrestrial planet or natural satellite. On Earth, it is composed of the crust and the lithospheric mantle, the topmost portion of the upper mantle that behaves elastically on time scales of up to thousands of years or more. The crust and upper mantle are distinguished on the basis of chemistry and mineralogy.
Earth's lithosphere
Earth's lithosphere, which constitutes the hard and rigid outer vertical layer of the Earth, includes the crust and the lithospheric mantle (or mantle lithosphere), the uppermost part of the mantle that is not convecting. The lithosphere is underlain by the asthenosphere which is the weaker, hotter, and deeper part of the upper mantle that is able to convect. The lithosphere–asthenosphere boundary is defined by a difference in response to stress. The lithosphere remains rigid for very long periods of geologic time in which it deforms elastically and through brittle failure, while the asthenosphere deforms viscously and accommodates strain through plastic deformation.
The thickness of the lithosphere is thus considered to be the depth to the isotherm associated with the transition between brittle and viscous behavior. The temperature at which olivine becomes ductile (~) is often used to set this isotherm because olivine is generally the weakest mineral in the upper mantle.
The lithosphere is subdivided horizontally into tectonic plates, which often include terranes accreted from other plates.
History of the concept
The concept of the lithosphere as Earth's strong outer layer was described by the English mathematician A. E. H. Love in his 1911 monograph "Some problems of Geodynamics" and further developed by the American geologist Joseph Barrell, who wrote a series of papers about the concept and introduced the term "lithosphere". The concept was based on the presence of significant gravity anomalies over continental crust, from which he inferred that there must exist a strong, s
Document 1:::
The core–mantle boundary (CMB) of Earth lies between the planet's silicate mantle and its liquid iron–nickel outer core, at a depth of below Earth's surface. The boundary is observed via the discontinuity in seismic wave velocities at that depth due to the differences between the acoustic impedances of the solid mantle and the molten outer core. P-wave velocities are much slower in the outer core than in the deep mantle while S-waves do not exist at all in the liquid portion of the core. Recent evidence suggests a distinct boundary layer directly above the CMB possibly made of a novel phase of the basic perovskite mineralogy of the deep mantle named post-perovskite. Seismic tomography studies have shown significant irregularities within the boundary zone and appear to be dominated by the African and Pacific Large Low-Shear-Velocity Provinces (LLSVP).
The uppermost section of the outer core is thought to be about 500–1,800 K hotter than the overlying mantle, creating a thermal boundary layer. The boundary is thought to harbor topography, much like Earth's surface, that is supported by solid-state convection within the overlying mantle. Variations in the thermal properties of the core-mantle boundary may affect how the outer core's iron-rich fluids flow, which are ultimately responsible for Earth's magnetic field.
The D″ region
The approx. 200 km thick layer of the lower mantle directly above the boundary is referred to as the D″ region ("D double-prime" or "D prime prime") and is sometimes included in discussions regarding the core–mantle boundary zone. The D″ name originates from geophysicist Keith Bullen's designations for the Earth's layers. His system was to label each layer alphabetically, A through G, with the crust as 'A' and the inner core as 'G'. In his 1942 publication of his model, the entire lower mantle was the D layer. In 1949, Bullen found his 'D' layer to actually be two different layers. The upper part of the D layer, about 1800 km thick, was r
Document 2:::
Aeronomy is the scientific study of the upper atmosphere of the Earth and corresponding regions of the atmospheres of other planets. It is a branch of both atmospheric chemistry and atmospheric physics. Scientists specializing in aeronomy, known as aeronomers, study the motions and chemical composition and properties of the Earth's upper atmosphere and regions of the atmospheres of other planets that correspond to it, as well as the interaction between upper atmospheres and the space environment. In atmospheric regions aeronomers study, chemical dissociation and ionization are important phenomena.
History
The mathematician Sydney Chapman introduced the term aeronomy to describe the study of the Earth's upper atmosphere in 1946 in a letter to the editor of Nature entitled "Some Thoughts on Nomenclature." The term became official in 1954 when the International Union of Geodesy and Geophysics adopted it. "Aeronomy" later also began to refer to the study of the corresponding regions of the atmospheres of other planets.
Branches
Aeronomy can be divided into three main branches: terrestrial aeronomy, planetary aeronomy, and comparative aeronomy.
Terrestrial aeronomy
Terrestrial aeronomy focuses on the Earth's upper atmosphere, which extends from the stratopause to the atmosphere's boundary with outer space and is defined as consisting of the mesosphere, thermosphere, and exosphere and their ionized component, the ionosphere. Terrestrial aeronomy contrasts with meteorology, which is the scientific study of the Earth's lower atmosphere, defined as the troposphere and stratosphere. Although terrestrial aeronomy and meteorology once were completely separate fields of scientific study, cooperation between terrestrial aeronomers and meteorologists has grown as discoveries made since the early 1990s have demonstrated that the upper and lower atmospheres have an impact on one another's physics, chemistry, and biology.
Terrestrial aeronomers study atmospheric tides and upper-
Document 3:::
The thermal history of Earth involves the study of the cooling history of Earth's interior. It is a sub-field of geophysics. (Thermal histories are also computed for the internal cooling of other planetary and stellar bodies.) The study of the thermal evolution of Earth's interior is uncertain and controversial in all aspects, from the interpretation of petrologic observations used to infer the temperature of the interior, to the fluid dynamics responsible for heat loss, to material properties that determine the efficiency of heat transport.
Overview
Observations that can be used to infer the temperature of Earth's interior range from the oldest rocks on Earth to modern seismic images of the inner core size. Ancient volcanic rocks can be associated with a depth and temperature of melting through their geochemical composition. Using this technique and some geological inferences about the conditions under which the rock is preserved, the temperature of the mantle can be inferred. The mantle itself is fully convective, so that the temperature in the mantle is basically constant with depth outside the top and bottom thermal boundary layers. This is not quite true because the temperature in any convective body under pressure must increase along an adiabat, but the adiabatic temperature gradient is usually much smaller than the temperature jumps at the boundaries. Therefore, the mantle is usually associated with a single or potential temperature that refers to the mid-mantle temperature extrapolated along the adiabat to the surface. The potential temperature of the mantle is estimated to be about 1350 °C today. There is an analogous potential temperature of the core but since there are no samples from the core its present-day temperature relies on extrapolating the temperature along an adiabat from the inner core boundary, where the iron solidus is somewhat constrained.
Thermodynamics
The simplest mathematical formulation of the thermal history of Earth's interior i
Document 4:::
Atmospheric temperature is a measure of temperature at different levels of the Earth's atmosphere. It is governed by many factors, including incoming solar radiation, humidity and altitude. When discussing surface air temperature, the annual atmospheric temperature range at any geographical location depends largely upon the type of biome, as measured by the Köppen climate classification
Temperature versus altitude
Temperature varies greatly at different heights relative to Earth's surface and this variation in temperature characterizes the four layers that exist in the atmosphere. These layers include the troposphere, stratosphere, mesosphere, and thermosphere.
The troposphere is the lowest of the four layers, extending from the surface of the Earth to about into the atmosphere where the tropopause (the boundary between the troposphere and stratosphere) is located. The width of the troposphere can vary depending on latitude, for example, the troposphere is thicker in the tropics (about ) because the tropics are generally warmer, and thinner at the poles (about ) because the poles are colder. Temperatures in the atmosphere decrease with height at an average rate of 6.5°C (11.7°F) per kilometer. Because the troposphere experiences its warmest temperatures closer to Earth's surface, there is great vertical movement of heat and water vapour, causing turbulence. This turbulence, in conjunction with the presence of water vapour, is the reason that weather occurs within the troposphere.
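The average lapse rate quoted above (6.5 °C per km) gives a simple linear temperature model for the troposphere. The sketch below is illustrative only; real profiles vary with weather, season, and latitude.

```python
# Estimate tropospheric air temperature from the average environmental
# lapse rate of 6.5 degrees C per kilometer cited in the text.

LAPSE_RATE_C_PER_KM = 6.5

def troposphere_temperature(surface_temp_c: float, altitude_km: float) -> float:
    """Linear estimate: temperature drops 6.5 C for each km of altitude."""
    return surface_temp_c - LAPSE_RATE_C_PER_KM * altitude_km

# Starting from a 15 C surface, the air near an 11 km tropopause
# is roughly -56.5 C.
print(troposphere_temperature(15.0, 11.0))
```

The model stops being valid at the tropopause, above which (as the next paragraph notes) temperature stops falling and eventually inverts.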
Following the tropopause is the stratosphere. This layer extends from the tropopause to the stratopause which is located at an altitude of about . Temperatures remain constant with height from the tropopause to an altitude of , after which they start to increase with height. This is referred to as an inversion, and it is because of this inversion that the stratosphere is not characterised as turbulent. The stratosphere receives its warmth from the sun and the ozone layer which ab
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The uppermost mantle is part of which sphere?
A. lithosphere
B. isosphere
C. thermosphere
D. troposphere
Answer:
|
|
ai2_arc-855
|
multiple_choice
|
The liver converts glucose to glycogen for storage. Why is this function considered a chemical change?
|
[
"because the conversion transforms solids to liquids",
"because the conversion allows for less glucose in the liver",
"because the conversion changes one substance to a new one",
"because the conversion changes the shape of the liver cells"
] |
C
|
Relevant Documents:
Document 0:::
Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs, as well as organism structure and function. Biochemistry is closely related to molecular biology, which is the study of the molecular mechanisms of biological phenomena.
Much of biochemistry deals with the structures, bonding, functions, and interactions of biological macromolecules, such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers, with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles a
Document 1:::
Physical biochemistry is a branch of biochemistry that deals with the theory, techniques, and methodology used to study the physical chemistry of biomolecules.
It also deals with the mathematical approaches for the analysis of biochemical reaction and the modelling of biological systems. It provides insight into the structure of macromolecules, and how chemical structure influences the physical properties of a biological substance.
It involves the use of physics, physical chemistry principles, and methodology to study biological systems. It employs various physical chemistry techniques such as chromatography, spectroscopy, Electrophoresis, X-ray crystallography, electron microscopy, and hydrodynamics.
See also
Physical chemistry
Document 2:::
The metabolome refers to the complete set of small-molecule chemicals found within a biological sample. The biological sample can be a cell, a cellular organelle, an organ, a tissue, a tissue extract, a biofluid or an entire organism. The small molecule chemicals found in a given metabolome may include both endogenous metabolites that are naturally produced by an organism (such as amino acids, organic acids, nucleic acids, fatty acids, amines, sugars, vitamins, co-factors, pigments, antibiotics, etc.) as well as exogenous chemicals (such as drugs, environmental contaminants, food additives, toxins and other xenobiotics) that are not naturally produced by an organism.
In other words, there is both an endogenous metabolome and an exogenous metabolome. The endogenous metabolome can be further subdivided to include a "primary" and a "secondary" metabolome (particularly when referring to plant or microbial metabolomes). A primary metabolite is directly involved in the normal growth, development, and reproduction. A secondary metabolite is not directly involved in those processes, but usually has important ecological function. Secondary metabolites may include pigments, antibiotics or waste products derived from partially metabolized xenobiotics. The study of the metabolome is called metabolomics.
Origins
The word metabolome appears to be a blending of the words "metabolite" and "chromosome". It was constructed to imply that metabolites are indirectly encoded by genes or act on genes and gene products. The term "metabolome" was first used in 1998 and was likely coined to match with existing biological terms referring to the complete set of genes (the genome), the complete set of proteins (the proteome) and the complete set of transcripts (the transcriptome). The first book on metabolomics was published in 2003. The first journal dedicated to metabolomics (titled simply "Metabolomics") was launched in 2005 and is currently edited by Prof. Roy Goodacre. Some of the m
Document 3:::
Glycogen is a multibranched polysaccharide of glucose that serves as a form of energy storage in animals, fungi, and bacteria. It is the main storage form of glucose in the human body.
Glycogen functions as one of three regularly used forms of energy reserves, creatine phosphate being for very short-term, glycogen being for short-term and the triglyceride stores in adipose tissue (i.e., body fat) being for long-term storage. Protein, broken down into amino acids, is seldom used as a main energy source except during starvation and glycolytic crisis (see bioenergetic systems).
In humans, glycogen is made and stored primarily in the cells of the liver and skeletal muscle. In the liver, glycogen can make up 5–6% of the organ's fresh weight: the liver of an adult, weighing 1.5 kg, can store roughly 100–120 grams of glycogen. In skeletal muscle, glycogen is found in a low concentration (1–2% of the muscle mass): the skeletal muscle of an adult weighing 70 kg stores roughly 400 grams of glycogen. Small amounts of glycogen are also found in other tissues and cells, including the kidneys, red blood cells, white blood cells, and glial cells in the brain. The uterus also stores glycogen during pregnancy to nourish the embryo.
The amount of glycogen stored in the body mostly depends on oxidative type 1 fibres, physical training, basal metabolic rate, and eating habits. Different levels of resting muscle glycogen are reached by changing the number of glycogen particles, rather than increasing the size of existing particles though most glycogen particles at rest are smaller than their theoretical maximum.
Approximately 4 grams of glucose are present in the blood of humans at all times; in fasting individuals, blood glucose is maintained constant at this level at the expense of glycogen stores in the liver and skeletal muscle. Glycogen stores in skeletal muscle serve as a form of energy storage for the muscle itself; however, the breakdown of muscle glycogen impedes muscle
Document 4:::
Conversion and its related terms yield and selectivity are important terms in chemical reaction engineering. They are described as ratios of how much of a reactant has reacted (X — conversion, normally between zero and one), how much of a desired product was formed (Y — yield, normally also between zero and one) and how much desired product was formed in ratio to the undesired product(s) (S — selectivity).
There are conflicting definitions in the literature for selectivity and yield, so each author's intended definition should be verified.
Conversion can be defined for (semi-)batch and continuous reactors and as instantaneous and overall conversion.
Assumptions
The following assumptions are made:
The following chemical reaction takes place:
a A + b B → c C + d D,
where a and b are the stoichiometric coefficients of the reactants. For multiple parallel reactions, the definitions can also be applied, either per reaction or using the limiting reaction.
Batch reaction assumes all reactants are added at the beginning.
Semi-Batch reaction assumes some reactants are added at the beginning and the rest fed during the batch.
Continuous reaction assumes reactants are fed and products leave the reactor continuously and in steady state.
Conversion
Conversion can be separated into instantaneous conversion and overall conversion. For continuous processes the two are the same, for batch and semi-batch there are important differences. Furthermore, for multiple reactants, conversion can be defined overall or per reactant.
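The mole-based definitions of X, Y, and S introduced above can be sketched in a few lines. As the text warns, definitions of yield and selectivity conflict between authors; the convention below (1:1 stoichiometry, desired product P, undesired product S) is one common choice, assumed for illustration.

```python
# Overall conversion, yield, and selectivity for a reaction A -> P + S,
# computed from mole counts. One common convention among several.

def conversion(n_a0: float, n_a: float) -> float:
    """Fraction of reactant A that has reacted: X = (n_A0 - n_A) / n_A0."""
    return (n_a0 - n_a) / n_a0

def yield_(n_p: float, n_a0: float) -> float:
    """Moles of desired product per mole of A fed (1:1 stoichiometry)."""
    return n_p / n_a0

def selectivity(n_p: float, n_s: float) -> float:
    """Ratio of desired to undesired product."""
    return n_p / n_s

# Feed 10 mol A; 4 mol remain, 5 mol P and 1 mol S formed.
print(conversion(10.0, 4.0))   # 0.6 -> 60 % of A consumed
print(yield_(5.0, 10.0))       # 0.5
print(selectivity(5.0, 1.0))   # 5.0
```

For a continuous reactor at steady state, the same formulas apply to molar flow rates instead of mole counts.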
Instantaneous conversion
Semi-batch
In this setting there are different definitions. One definition regards the instantaneous conversion as the ratio of the instantaneously converted amount to
the amount fed at any point in time:
$X_i(t) = \dfrac{-\,\mathrm{d}n_i/\mathrm{d}t}{\dot{n}_{i,\text{feed}}(t)}$,
with $\mathrm{d}n_i/\mathrm{d}t$ as the change of moles with time of species i.
This ratio can become larger than 1. It can be used to indicate whether reservoirs are built
up and it is ideally close to 1. When the feed stops, its value is not defined.
In semi-batch polymerisation,
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The liver converts glucose to glycogen for storage. Why is this function considered a chemical change?
A. because the conversion transforms solids to liquids
B. because the conversion allows for less glucose in the liver
C. because the conversion changes one substance to a new one
D. because the conversion changes the shape of the liver cells
Answer:
|
|
sciq-3725
|
multiple_choice
|
How many forces do objects on Earth have acting on them at all times?
|
[
"three",
"two",
"ten",
"four"
] |
B
|
Relevant Documents:
Document 0:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 1:::
As described by the third of Newton's laws of motion of classical mechanics, all forces occur in pairs such that if one object exerts a force on another object, then the second object exerts an equal and opposite reaction force on the first. The third law is also more generally stated as: "To every action there is always opposed an equal reaction: or the mutual actions of two bodies upon each other are always equal, and directed to contrary parts." The attribution of which of the two forces is the action and which is the reaction is arbitrary. Either of the two can be considered the action, while the other is its associated reaction.
Examples
Interaction with ground
When something is exerting force on the ground, the ground will push back with equal force in the opposite direction. In certain fields of applied physics, such as biomechanics, this force by the ground is called 'ground reaction force'; the force by the object on the ground is viewed as the 'action'.
When someone wants to jump, he or she exerts additional downward force on the ground ('action'). Simultaneously, the ground exerts upward force on the person ('reaction'). If this upward force is greater than the person's weight, this will result in upward acceleration. When these forces are perpendicular to the ground, they are also called a normal force.
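The jump example above can be quantified with Newton's second law: the net upward acceleration depends on how much the ground reaction force exceeds the jumper's weight. The numbers below are illustrative assumptions.

```python
# Net upward acceleration during a jump: a = (N - m*g) / m, where N is
# the ground reaction force and m*g the jumper's weight. Positive a
# means the person accelerates upward.

G = 9.81  # standard gravity, m/s^2 (assumed value)

def upward_acceleration(ground_force_n: float, mass_kg: float) -> float:
    """Newton's second law applied to the vertical forces on the jumper."""
    return (ground_force_n - mass_kg * G) / mass_kg

# A 70 kg person pushing with 1200 N accelerates upward at about 7.3 m/s^2.
print(upward_acceleration(1200.0, 70.0))
```

If the ground force exactly equals the weight (N = mg), the acceleration is zero, which is the standing-still case described above.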
Likewise, the spinning wheels of a vehicle attempt to slide backward across the ground. If the ground is not too slippery, this results in a pair of friction forces: the 'action' by the wheel on the ground in backward direction, and the 'reaction' by the ground on the wheel in forward direction. This forward force propels the vehicle.
Gravitational forces
The Earth, among other planets, orbits the Sun because the Sun exerts a gravitational pull that acts as a centripetal force, holding the Earth to it, which would otherwise go shooting off into space. If the Sun's pull is considered an action, then Earth simultaneously exerts a reaction as a gravi
Document 2:::
The gravity of Earth, denoted by , is the net acceleration that is imparted to objects due to the combined effect of gravitation (from mass distribution within Earth) and the centrifugal force (from the Earth's rotation).
It is a vector quantity, whose direction coincides with a plumb bob and strength or magnitude is given by the norm .
In SI units this acceleration is expressed in metres per second squared (in symbols, m/s2 or m·s−2) or equivalently in newtons per kilogram (N/kg or N·kg−1). Near Earth's surface, the acceleration due to gravity, accurate to 2 significant figures, is . This means that, ignoring the effects of air resistance, the speed of an object falling freely will increase by about per second every second. This quantity is sometimes referred to informally as little (in contrast, the gravitational constant is referred to as big ).
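The constant-acceleration free-fall described above (speed growing by g each second, air resistance ignored) can be sketched directly. The numeric value g = 9.81 m/s² is supplied here as an assumption, since the figure was lost from the text.

```python
# Free fall under constant gravitational acceleration, ignoring air
# resistance: v = g*t and d = g*t^2 / 2.

G = 9.81  # near-surface gravitational acceleration, m/s^2 (assumed)

def fall_speed(t: float) -> float:
    """Speed after t seconds of free fall from rest."""
    return G * t

def fall_distance(t: float) -> float:
    """Distance fallen after t seconds of free fall from rest."""
    return 0.5 * G * t ** 2

print(fall_speed(3.0))     # 29.43 m/s after 3 s
print(fall_distance(3.0))  # 44.145 m fallen
```

The linear growth of `fall_speed` with time is exactly the "increases by about g per second every second" statement in the text.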
The precise strength of Earth's gravity varies with location. The agreed upon value for is by definition. This quantity is denoted variously as , (though this sometimes means the normal gravity at the equator, ), , or simply (which is also used for the variable local value).
The weight of an object on Earth's surface is the downwards force on that object, given by Newton's second law of motion, or (). Gravitational acceleration contributes to the total gravity acceleration, but other factors, such as the rotation of Earth, also contribute, and, therefore, affect the weight of the object. Gravity does not normally include the gravitational pull of the Moon and Sun, which are accounted for in terms of tidal effects.
Variation in magnitude
A non-rotating perfect sphere of uniform mass density, or whose density varies solely with distance from the centre (spherical symmetry), would produce a gravitational field of uniform magnitude at all points on its surface. The Earth is rotating and is also not spherically symmetric; rather, it is slightly flatter at the poles while bulging at the Equator: an oblate spheroid.
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 4:::
Physics education research (PER) is a form of discipline-based education research specifically related to the study of the teaching and learning of physics, often with the aim of improving the effectiveness of student learning. PER draws from other disciplines, such as sociology, cognitive science, education and linguistics, and complements them by reflecting the disciplinary knowledge and practices of physics. Approximately eighty-five institutions in the United States conduct research in science and physics education.
Goals
One primary goal of PER is to develop pedagogical techniques and strategies that will help students learn physics more effectively and help instructors to implement these techniques. Because even basic ideas in physics can be confusing, together with the possibility of scientific misconceptions formed from teaching through analogies, lecturing often does not erase common misconceptions about physics that students acquire before they are taught physics. Research often focuses on learning more about common misconceptions that students bring to the physics classroom so that techniques can be devised to help students overcome these misconceptions.
In most introductory physics courses, mechanics is usually the first area of physics that is taught. Newton's laws of motion about interactions between forces and objects are central to the study of mechanics. Many students hold the Aristotelian misconception that a net force is required to keep a body moving; instead, motion is modeled in modern physics with Newton's first law of inertia, stating that a body will keep its state of rest or movement unless a net force acts on the body. Like students who hold this misconception, Newton arrived at his three laws of motion through empirical analysis, although he did it with an extensive study of data that included astronomical observations. Students can erase such a misconception in a nearly frictionless environment, where they find that
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How many forces do objects on earth have acting on them at all times?
A. three
B. two
C. ten
D. four
Answer:
|
|
sciq-10924
|
multiple_choice
|
Streamlines are smooth and continuous when flow is laminar, but break up and mix when flow is what?
|
[
"atmospheric",
"turbulent",
"volcanic",
"slow"
] |
B
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In scientific visualization skin friction lines are used to visualize flows on 3D-surfaces. They are obtained by calculating the streamlines of a derived vector field on the surface, the wall shear stress. Skin friction arises from the friction of the fluid against the "skin" of the object that is moving through it and forms a vector at each point on the surface. A skin friction line is a curve on the surface tangent to skin friction vectors. A limit streamline is a streamline where the distance normal to the surface tends to zero. Limit streamlines and skin friction lines coincide.
The lines can be visualized by placing a viscous film on the surface.
The skin friction lines may exhibit a number of different types of singularities: attachment nodes, detachment nodes, isotropic nodes, saddle points, and foci.
Document 2:::
Streamlines, streaklines and pathlines are field lines in a fluid flow.
They differ only when the flow changes with time, that is, when the flow is not steady.
Considering a velocity vector field in three-dimensional space in the framework of continuum mechanics, we have that:
Streamlines are a family of curves whose tangent vectors constitute the velocity vector field of the flow. These show the direction in which a massless fluid element will travel at any point in time.
Streaklines are the loci of points of all the fluid particles that have passed continuously through a particular spatial point in the past. Dye steadily injected into the fluid at a fixed point (as in dye tracing) extends along a streakline.
Pathlines are the trajectories that individual fluid particles follow. These can be thought of as "recording" the path of a fluid element in the flow over a certain period. The direction the path takes will be determined by the streamlines of the fluid at each moment in time.
Timelines are the lines formed by a set of fluid particles that were marked at a previous instant in time, creating a line or a curve that is displaced in time as the particles move.
By definition, different streamlines at the same instant in a flow do not intersect, because a fluid particle cannot have two different velocities at the same point. However, pathlines are allowed to intersect themselves or other pathlines (except the starting and end points of the different pathlines, which need to be distinct). Streaklines can also intersect themselves and other streaklines.
Streamlines and timelines provide a snapshot of some flowfield characteristics, whereas streaklines and pathlines depend on the time-history of the flow. However, often sequences of timelines (and streaklines) at different instants—being presented either in a single image or with a video stream—may be used to provide insight into the flow and its history.
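The definition of a streamline as a curve everywhere tangent to the velocity field suggests a simple numerical sketch: step along the field with small Euler increments. The solid-body-rotation field v = (-y, x) below is an illustrative choice whose exact streamlines are circles about the origin; forward Euler drifts slightly outward, which the radius check exposes.

```python
# Sketch: tracing a streamline by integrating dx/ds = v(x) with small
# forward-Euler steps. The field v = (-y, x) (solid-body rotation) is
# an illustrative choice; its exact streamlines are circles.

def velocity(x: float, y: float) -> tuple[float, float]:
    return -y, x

def trace_streamline(x0, y0, ds=1e-3, steps=10_000):
    pts = [(x0, y0)]
    x, y = x0, y0
    for _ in range(steps):
        u, v = velocity(x, y)
        x, y = x + u * ds, y + v * ds   # step tangent to the field
        pts.append((x, y))
    return pts

pts = trace_streamline(1.0, 0.0)
# For this field the traced curve stays approximately on the unit circle;
# the small spread in radius is forward-Euler integration error.
radii = [(x * x + y * y) ** 0.5 for x, y in pts]
print(min(radii), max(radii))
```

Because this field is steady, the traced streamline coincides with the pathline and streakline through the same point, as the text above notes.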
If a line, curve or closed curve is used as start point for
Document 3:::
Large woody debris (LWD) are the logs, sticks, branches, and other wood that falls into streams and rivers. This debris can influence the flow and the shape of the stream channel. Large woody debris, grains, and the shape of the bed of the stream are the three main providers of flow resistance, and are thus, a major influence on the shape of the stream channel. Some stream channels have less LWD than they would naturally because of removal by watershed managers for flood control and aesthetic reasons.
The study of woody debris is important for its forestry management implications. Plantation thinning can reduce the potential for recruitment of LWD into proximal streams. The presence of large woody debris is important in the formation of pools which serve as salmon habitat in the Pacific Northwest. Entrainment of the large woody debris in a stream can also cause erosion and scouring around and under the LWD. The amount of scouring and erosion is determined by the ratio of the diameter of the piece, to the depth of the stream, and the embedding and orientation of the piece.
Influence on stream flow around bends
Large woody debris slow the flow through a bend in the stream, while accelerating flow in the constricted area downstream of the obstruction.
See also
Beaver dam
Coarse woody debris
Driftwood
Log jam
Stream restoration
Document 4:::
A grassed waterway is a to 48-metre-wide (157 ft) native grassland strip of green belt. It is generally installed in the thalweg, the deepest continuous line along a valley or watercourse, of a cultivated dry valley in order to control erosion. A study carried out on a grassed waterway during 8 years in Bavaria showed that it can lead to several other types of positive impacts, e.g. on biodiversity.
Distinctions
Confusion between "grassed waterway" and "vegetative filter strips" should be avoided. The latter are generally narrower (only a few metres wide) and rather installed along rivers as well as along or within cultivated fields. However, buffer strip can be a synonym, with shrubs and trees added to the plant component, as does a riparian zone.
Runoff and erosion mitigation
Runoff generated on cropland during storms or long winter rains concentrates in the thalweg where it can lead to rill or gully erosion.
Rills and gullies further concentrate runoff and speed up its transfer, which can worsen damage occurring downstream. This can result in a muddy flood.
In this context, a grassed waterway allows increasing soil cohesion and roughness. It also prevents the formation of rills and gullies. Furthermore, it can slow down runoff and allow its re-infiltration during long winter rains. In contrast, its infiltration capacity is generally not sufficient to reinfiltrate runoff produced by heavy spring and summer storms. It can therefore be useful to combine it with extra measures, like the installation of earthen dams across the grassed waterway, in order to buffer runoff temporarily.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Streamlines are smooth and continuous when flow is laminar, but break up and mix when flow is what?
A. atmospheric
B. turbulent
C. volcanic
D. slow
Answer:
|
|
sciq-6311
|
multiple_choice
|
The ostrich, kiwi, rhea, cassowary, and moa are examples of what kind of birds?
|
[
"flightless",
"raptors",
"predators",
"prehistoric"
] |
A
|
Relavent Documents:
Document 0:::
The difficulty of defining or measuring intelligence in non-human animals makes the subject difficult to study scientifically in birds. In general, birds have relatively large brains compared to their head size. The visual and auditory senses are well developed in most species, though the tactile and olfactory senses are well realized only in a few groups. Birds communicate using visual signals as well as through the use of calls and song. The testing of intelligence in birds is therefore usually based on studying responses to sensory stimuli.
The corvids (ravens, crows, jays, magpies, etc.) and psittacines (parrots, macaws, and cockatoos) are often considered the most intelligent birds, and are among the most intelligent animals in general. Pigeons, finches, domestic fowl, and birds of prey have also been common subjects of intelligence studies.
Studies
Bird intelligence has been studied through several attributes and abilities. Many of these studies have been on birds such as quail, domestic fowl, and pigeons kept under captive conditions. It has, however, been noted that field studies have been limited, unlike those of the apes. Birds in the crow family (corvids) as well as parrots (psittacines) have been shown to live socially, have long developmental periods, and possess large forebrains, all of which have been hypothesized to allow for greater cognitive abilities.
Counting has traditionally been considered an ability that shows intelligence. Anecdotal evidence from the 1960s has suggested that crows can count up to 3. Researchers need to be cautious, however, and ensure that birds are not merely demonstrating the ability to subitize, or count a small number of items quickly. Some studies have suggested that crows may indeed have a true numerical ability. It has been shown that parrots can count up to 6.
Cormorants used by Chinese fishermen were given every eighth fish as a reward, and found to be able to keep count up to 7. E.H. Hoh wrote in Natural Histo
Document 1:::
The following is a glossary of common English language terms used in the description of birds—warm-blooded vertebrates of the class Aves and the only living dinosaurs, characterized by feathers, the ability to fly in all but the approximately 60 extant species of flightless birds, toothless beaked jaws, the laying of hard-shelled eggs, a high metabolic rate, a four-chambered heart and a strong yet lightweight skeleton.
Among other details such as size, proportions and shape, terms defining bird features developed and are used to describe features unique to the class—especially evolutionary adaptations that developed to aid flight. There are, for example, numerous terms describing the complex structural makeup of feathers, the types of feathers, and their growth and loss.
There are thousands of terms that are unique to the study of birds. This glossary makes no attempt to cover them all, concentrating on terms that might be found across descriptions of multiple bird species by bird enthusiasts and ornithologists. Though words that are not unique to birds are also covered, such as or , they are defined in relation to other unique features of external bird anatomy, sometimes called . As a rule, this glossary does not contain individual entries on any of the approximately 9,700 recognized living individual bird species of the world.
A
B
C
D
{| border="1"
|-
|carnivores (sometimes called faunivores): birds that predominantly forage for the meat of vertebrates—generally hunters as in certain birds of prey—including eagles, owls and shrikes, though piscivores, insectivores and crustacivores may be called specialized types of carnivores.
|-
|crustacivores: birds that forage for and eat crustaceans, such as crab-plovers and some rails.
|-
|detritivores: birds that forage for and eat decomposing material, such as vultures. It is usually used as a more general term than "saprovore" (defined below), which often connotes the eating of de
Document 2:::
Significant work has gone into analyzing the effects of climate change on birds. Like other animal groups, birds are affected by anthropogenic (human-caused) climate change. The research includes tracking the changes in species' life cycles over decades in response to the changing world, evaluating the role of differing evolutionary pressures and even comparing museum specimens with modern birds to track changes in appearance and body structure. Predictions of range shifts caused by the direct and indirect impacts of climate change on bird species are amongst the most important, as they are crucial for informing animal conservation work, required to minimize extinction risk from climate change.
Climate change mitigation options can also have varying impacts on birds. However, even the environmental impact of wind power is estimated to be much less threatening to birds than the continuing effects of climate change.
Causes
Climate change has raised the temperature of the Earth by about since the Industrial Revolution. As the extent of future greenhouse gas emissions and mitigation actions determines the climate change scenario taken, warming may increase from present levels by less than with rapid and comprehensive mitigation (the Paris Agreement goal) to around ( from the preindustrial) by the end of the century with very high and continually increasing greenhouse gas emissions.
Effects
Physical changes
Birds are a group of warm-blooded vertebrates constituting the class Aves, characterized by feathers, toothless beaked jaws, the laying of hard-shelled eggs, a high metabolic rate, a four-chambered heart, and a strong yet lightweight skeleton.
Climate change has already altered the appearance of some birds by facilitating changes to their feathers. A comparison of museum specimens of juvenile passerines from 1800s with juveniles of the same species today had shown that these birds now complete the switch from their nesting feathers to adult feathers ea
Document 3:::
This is a list of the fastest flying birds in the world. A bird's velocity is necessarily variable; a hunting bird will reach much greater speeds while diving to catch prey than when flying horizontally. The bird that can achieve the greatest airspeed is the peregrine falcon, able to exceed in its dives. A close relative of the common swift, the white-throated needletail (Hirundapus caudacutus), is commonly reported as the fastest bird in level flight with a reported top speed of . This record remains unconfirmed as the measurement methods have never been published or verified. The record for the fastest confirmed level flight by a bird is held by the common swift.
Birds by flying speed
See also
List of birds by flight heights
Note
Document 4:::
Around 350 BCE, Aristotle and other philosophers of the time attempted to explain the aerodynamics of avian flight. Even after the discovery of the ancestral bird Archaeopteryx which lived over 150 million years ago, debates still persist regarding the evolution of flight. There are three leading hypotheses pertaining to avian flight: Pouncing Proavis model, Cursorial model, and Arboreal model.
In March 2018, scientists reported that Archaeopteryx was likely capable of flight, but in a manner substantially different from that of modern birds.
Flight characteristics
For flight to occur, four physical forces (thrust and drag, lift and weight) must be favorably combined. In order for birds to balance these forces, certain physical characteristics are required. Asymmetrical wing feathers, found on all flying birds with the exception of hummingbirds, help in the production of thrust and lift. Anything that moves through the air produces drag due to friction. The aerodynamic body of a bird can reduce drag, but when stopping or slowing down a bird will use its tail and feet to increase drag. Weight is the largest obstacle birds must overcome in order to fly. An animal can more easily attain flight by reducing its absolute weight. Birds evolved from other theropod dinosaurs that had already gone through a phase of size reduction during the Middle Jurassic, combined with rapid evolutionary changes. Flying birds during their evolution further reduced relative weight through several characteristics such as the loss of teeth, shrinkage of the gonads out of mating season, and fusion of bones. Teeth were replaced by a lightweight bill made of keratin, the food being processed by the bird's gizzard. Other advanced physical characteristics evolved for flight are a keel for the attachment of flight muscles and an enlarged cerebellum for fine motor coordination. These were gradual changes, though, and not strict conditions for flight: the first birds had teeth, at best a small keel
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The ostrich, kiwi, rhea, cassowary, and moa are examples of what kind of birds?
A. flightless
B. raptors
C. predators
D. prehistoric
Answer:
|
|
sciq-2977
|
multiple_choice
|
Which members of the food chain break down remains of plants and other organisms when they die?
|
[
"Respiration",
"nematodes",
"decomposers",
"fluxes"
] |
C
|
Relavent Documents:
Document 0:::
The trophic level of an organism is the position it occupies in a food web. A food chain is a succession of organisms that eat other organisms and may, in turn, be eaten themselves. The trophic level of an organism is the number of steps it is from the start of the chain. A food web starts at trophic level 1 with primary producers such as plants, can move to herbivores at level 2, carnivores at level 3 or higher, and typically finish with apex predators at level 4 or 5. The path along the chain can form either a one-way flow or a food "web". Ecological communities with higher biodiversity form more complex trophic paths.
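A common quantitative convention for the trophic-level counting described above assigns producers level 1 and each consumer one plus the mean trophic level of its prey. The food web below is made up for illustration, not taken from the source.

```python
# Sketch: trophic levels in a small, made-up food web, using the
# convention TL(producer) = 1 and
# TL(consumer) = 1 + mean TL of its prey.

diet = {                      # species -> list of things it eats
    "grass": [],              # producer (trophic level 1)
    "algae": [],              # producer (trophic level 1)
    "rabbit": ["grass"],      # herbivore
    "insect": ["grass", "algae"],
    "fox": ["rabbit", "insect"],  # carnivore
}

def trophic_level(species, diet, cache=None):
    cache = {} if cache is None else cache
    if species not in cache:
        prey = diet[species]
        if not prey:
            cache[species] = 1.0          # producer
        else:
            cache[species] = 1.0 + sum(
                trophic_level(p, diet, cache) for p in prey) / len(prey)
    return cache[species]

for sp in diet:
    print(sp, trophic_level(sp, diet))
```

With this convention a predator of herbivores sits at level 3, matching the plant → herbivore → carnivore ladder in the paragraph above; real webs with omnivory yield fractional levels.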
The word trophic derives from the Greek τροφή (trophē) referring to food or nourishment.
History
The concept of trophic level was developed by Raymond Lindeman (1942), based on the terminology of August Thienemann (1926): "producers", "consumers", and "reducers" (modified to "decomposers" by Lindeman).
Overview
The three basic ways in which organisms get food are as producers, consumers, and decomposers.
Producers (autotrophs) are typically plants or algae. Plants and algae do not usually eat other organisms, but pull nutrients from the soil or the ocean and manufacture their own food using photosynthesis. For this reason, they are called primary producers. In this way, it is energy from the sun that usually powers the base of the food chain. An exception occurs in deep-sea hydrothermal ecosystems, where there is no sunlight. Here primary producers manufacture food through a process called chemosynthesis.
Consumers (heterotrophs) are species that cannot manufacture their own food and need to consume other organisms. Animals that eat primary producers (like plants) are called herbivores. Animals that eat other animals are called carnivores, and animals that eat both plants and other animals are called omnivores.
Decomposers (detritivores) break down dead plant and animal material and wastes and release it again as energy and nutrients into
Document 1:::
Consumer–resource interactions are the core motif of ecological food chains or food webs, and are an umbrella term for a variety of more specialized types of biological species interactions including prey-predator (see predation), host-parasite (see parasitism), plant-herbivore and victim-exploiter systems. These kinds of interactions have been studied and modeled by population ecologists for nearly a century. Species at the bottom of the food chain, such as algae and other autotrophs, consume non-biological resources, such as minerals and nutrients of various kinds, and they derive their energy from light (photons) or chemical sources. Species higher up in the food chain survive by consuming other species and can be classified by what they eat and how they obtain or find their food.
Classification of consumer types
The standard categorization
Various terms have arisen to define consumers by what they eat, such as meat-eating carnivores, fish-eating piscivores, insect-eating insectivores, plant-eating herbivores, seed-eating granivores, and fruit-eating frugivores and omnivores are meat eaters and plant eaters. An extensive classification of consumer categories based on a list of feeding behaviors exists.
The Getz categorization
Another way of categorizing consumers, proposed by South African American ecologist Wayne Getz, is based on a biomass transformation web (BTW) formulation that organizes resources into five components: live and dead animal, live and dead plant, and particulate (i.e. broken down plant and animal) matter. It also distinguishes between consumers that gather their resources by moving across landscapes from those that mine their resources by becoming sessile once they have located a stock of resources large enough for them to feed on during completion of a full life history stage.
In Getz's scheme, words for miners are of Greek etymology and words for gatherers are of Latin etymology. Thus a bestivore, such as a cat, preys on live animal
Document 2:::
Decomposers are organisms that break down dead or decaying organisms; they carry out decomposition, a process that only certain kingdoms, such as fungi, can perform. Like herbivores and predators, decomposers are heterotrophic, meaning that they use organic substrates to get their energy, carbon and nutrients for growth and development. While the terms decomposer and detritivore are often used interchangeably, detritivores ingest and digest dead matter internally, while decomposers directly absorb nutrients through external chemical and biological processes. Thus, invertebrates such as earthworms, woodlice, and sea cucumbers are technically detritivores, not decomposers, since they are unable to absorb nutrients without ingesting them.
Fungi
The primary decomposer of litter in many ecosystems is fungi. Unlike bacteria, which are unicellular organisms and are decomposers as well, most saprotrophic fungi grow as a branching network of hyphae. While bacteria are restricted to growing and feeding on the exposed surfaces of organic matter, fungi can use their hyphae to penetrate larger pieces of organic matter, below the surface. Additionally, only wood-decay fungi have evolved the enzymes necessary to decompose lignin, a chemically complex substance found in wood. These two factors make fungi the primary decomposers in forests, where litter has high concentrations of lignin and often occurs in large pieces. Fungi decompose organic matter by releasing enzymes to break down the decaying material, after which they absorb the nutrients in the decaying material. Hyphae are used to break down matter and absorb nutrients and are also used in reproduction. When two compatible fungi hyphae grow close to each other, they will then fuse together for reproduction, and form another fungus.
See also
Chemotroph
Micro-animals
Microorganism
Document 3:::
The soil food web is the community of organisms living all or part of their lives in the soil. It describes a complex living system in the soil and how it interacts with the environment, plants, and animals.
Food webs describe the transfer of energy between species in an ecosystem. While a food chain examines one, linear, energy pathway through an ecosystem, a food web is more complex and illustrates all of the potential pathways. Much of this transferred energy comes from the sun. Plants use the sun’s energy to convert inorganic compounds into energy-rich, organic compounds, turning carbon dioxide and minerals into plant material by photosynthesis. Plant flowers exude energy-rich nectar above ground and plant roots exude acids, sugars, and ectoenzymes into the rhizosphere, adjusting the pH and feeding the food web underground.
Plants are called autotrophs because they make their own energy; they are also called producers because they produce energy available for other organisms to eat. Heterotrophs are consumers that cannot make their own food. In order to obtain energy they eat plants or other heterotrophs.
Above ground food webs
In above ground food webs, energy moves from producers (plants) to primary consumers (herbivores) and then to secondary consumers (predators). The phrase, trophic level, refers to the different levels or steps in the energy pathway. In other words, the producers, consumers, and decomposers are the main trophic levels. This chain of energy transferring from one species to another can continue several more times, but eventually ends. At the end of the food chain, decomposers such as bacteria and fungi break down dead plant and animal material into simple nutrients.
Methodology
The nature of soil makes direct observation of food webs difficult. Since soil organisms range in size from less than 0.1 mm (nematodes) to greater than 2 mm (earthworms) there are many different ways to extract them. Soil samples are often taken using a metal
Document 4:::
In biology, detritus is dead particulate organic material, as distinguished from dissolved organic material. Detritus typically includes the bodies or fragments of bodies of dead organisms, and fecal material. Detritus typically hosts communities of microorganisms that colonize and decompose (i.e. remineralize) it. In terrestrial ecosystems it is present as leaf litter and other organic matter that is intermixed with soil, which is denominated "soil organic matter". The detritus of aquatic ecosystems is organic material that is suspended in the water and accumulates in depositions on the floor of the body of water; when this floor is a seabed, such a deposition is denominated "marine snow".
Theory
The corpses of dead plants or animals, material derived from animal tissues (e.g. molted skin), and fecal matter gradually lose their form due to physical processes and the action of decomposers, including grazers, bacteria, and fungi. Decomposition, the process by which organic matter is decomposed, occurs in several phases. Micro- and macro-organisms that feed on it rapidly consume and absorb materials such as proteins, lipids, and sugars that are low in molecular weight, while other compounds such as complex carbohydrates are decomposed more slowly. The decomposing microorganisms degrade the organic materials so as to gain the resources they require for their survival and reproduction. Accordingly, simultaneous to microorganisms' decomposition of the materials of dead plants and animals is their assimilation of decomposed compounds to construct more of their biomass (i.e. to grow their own bodies). When microorganisms die, fine organic particles are produced, and if small animals that feed on microorganisms eat these particles they collect inside the intestines of the consumers, and change shape into large pellets of dung. As a result of this process, most of the materials of dead organisms disappear and are not visible and recognizable in any form, but are pres
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which members of the food chain break down remains of plants and other organisms when they die?
A. Respiration
B. nematodes
C. decomposers
D. fluxes
Answer:
|
|
sciq-8161
|
multiple_choice
|
At any specific time, the rate at which a reaction is proceeding is known as its what?
|
[
"instantaneous rate",
"immediate rate",
"emitted rate",
"spontaneous rate"
] |
A
|
Relavent Documents:
Document 0:::
An elementary reaction is a chemical reaction in which one or more chemical species react directly to form products in a single reaction step and with a single transition state. In practice, a reaction is assumed to be elementary if no reaction intermediates have been detected or need to be postulated to describe the reaction on a molecular scale. An apparently elementary reaction may be in fact a stepwise reaction, i.e. a complicated sequence of chemical reactions, with reaction intermediates of variable lifetimes.
In a unimolecular elementary reaction, a molecule dissociates or isomerises to form the product(s)
At constant temperature, the rate of such a reaction is proportional to the concentration of the species
In a bimolecular elementary reaction, two atoms, molecules, ions or radicals, and , react together to form the product(s)
The rate of such a reaction, at constant temperature, is proportional to the product of the concentrations of the species and
The rate expression for an elementary bimolecular reaction is sometimes referred to as the Law of Mass Action as it was first proposed by Guldberg and Waage in 1864. An example of this type of reaction is a cycloaddition reaction.
This rate expression can be derived from first principles by using collision theory for ideal gases. For the case of dilute fluids equivalent results have been obtained from simple probabilistic arguments.
According to collision theory the probability of three chemical species reacting simultaneously with each other in a termolecular elementary reaction is negligible. Hence such termolecular reactions are commonly referred as non-elementary reactions and can be broken down into a more fundamental set of bimolecular reactions, in agreement with the law of mass action. It is not always possible to derive overall reaction schemes, but solutions based on rate equations are often possible in terms of steady-state or Michaelis-Menten approximations.
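The rate proportionalities above can be illustrated with a short sketch (the rate constant and concentrations below are arbitrary illustrative values, not taken from the text):

```python
def unimolecular_rate(k, conc_a):
    """Rate of an elementary unimolecular reaction A -> products: r = k[A]."""
    return k * conc_a

def bimolecular_rate(k, conc_a, conc_b):
    """Rate of an elementary bimolecular reaction A + B -> products,
    per the law of mass action: r = k[A][B]."""
    return k * conc_a * conc_b

k = 0.5  # illustrative rate constant, L/(mol*s)
r1 = bimolecular_rate(k, 0.2, 0.1)
r2 = bimolecular_rate(k, 0.4, 0.1)
print(r2 / r1)  # doubling [A] doubles the rate: prints 2.0
```
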
Document 1:::
Activation energy asymptotics (AEA), also known as large activation energy asymptotics, is an asymptotic analysis used in the combustion field utilizing the fact that the reaction rate is extremely sensitive to temperature changes due to the large activation energy of the chemical reaction.
History
The techniques were pioneered by the Russian scientists Yakov Borisovich Zel'dovich, David A. Frank-Kamenetskii and co-workers in the 1930s, in their study of premixed flames and thermal explosions (Frank-Kamenetskii theory), but did not become popular among Western scientists until the 1970s. In the early 1970s, due to the pioneering work of William B. Bush, Francis E. Fendell, Forman A. Williams, Amable Liñán and John F. Clarke, it became popular in the Western community, and since then it has been widely used to explain more complicated problems in combustion.
Method overview
In combustion processes, the reaction rate is dependent on temperature in the following form (Arrhenius law),
ω ∝ exp(−E_a / (R T)),
where E_a is the activation energy and R is the universal gas constant. In general, the condition E_a / (R T_b) ≫ 1 is satisfied, where T_b is the burnt gas temperature. This condition forms the basis for activation energy asymptotics. Denoting T_u for the unburnt gas temperature, one can define the Zel'dovich number and heat release parameter as follows:
β = E_a (T_b − T_u) / (R T_b²),  α = (T_b − T_u) / T_b.
In addition, if we define a non-dimensional temperature
θ = (T − T_u) / (T_b − T_u),
such that θ approaches zero in the unburnt region and approaches unity in the burnt gas region (in other words, 0 ≤ θ ≤ 1), then the ratio of the reaction rate at any temperature to the reaction rate at the burnt gas temperature is given by
ω / ω_b = exp(−β (1 − θ) / (1 − α (1 − θ))).
Now in the limit of β → ∞ (large activation energy) with α fixed, the reaction rate is exponentially small, i.e., of order e^(−β), and negligible everywhere, but non-negligible when 1 − θ = O(1/β). In other words, the reaction rate is negligible everywhere except in a small region very close to the burnt gas temperature, where 1 − θ = O(1/β). Thus, in solving the conservation equations, one identifies two different regimes, at leading order,
Outer convective-diffusive zone
I
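The extreme temperature sensitivity that underlies AEA can be sketched numerically. Assuming the standard non-dimensionalization named in the text — Zel'dovich number β, heat-release parameter α, and reduced temperature θ — the ratio of the local reaction rate to the burnt-gas rate takes the form exp(−β(1 − θ)/(1 − α(1 − θ))); the values of β and α below are merely representative:

```python
import math

def rate_ratio(theta, beta, alpha):
    """Ratio of the Arrhenius reaction rate at reduced temperature theta
    to the rate at the burnt-gas temperature (theta = 1)."""
    return math.exp(-beta * (1 - theta) / (1 - alpha * (1 - theta)))

beta, alpha = 10.0, 0.85  # representative values for a premixed flame
for theta in (0.5, 0.9, 0.99, 1.0):
    print(f"theta={theta}: ratio={rate_ratio(theta, beta, alpha):.3e}")
# The rate is exponentially small except in a thin zone where 1 - theta = O(1/beta),
# which is exactly the two-zone (outer/inner) structure described above.
```
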
Document 2:::
Grote–Hynes theory is a theory of reaction rate in a solution phase. This rate theory was developed by James T. Hynes with his graduate student Richard F. Grote in 1980.
The theory is based on the generalized Langevin equation (GLE). It introduced the concept of frequency-dependent friction for chemical rate processes in the solution phase. Because it includes frequency-dependent friction instead of constant friction, the theory successfully predicts the rate constant even where the reaction barrier is large and of high frequency, where diffusion over the barrier starts decoupling from the viscosity of the medium. This was the weakness of Kramers' rate theory, which underestimated the reaction rate for a large, high-frequency barrier.
Document 3:::
The Hatta number (Ha) was developed by Shirôji Hatta, who taught at Tohoku University. It is a dimensionless parameter that compares the rate of reaction in a liquid film to the rate of diffusion through the film. For a second-order reaction (A + B → products), the maximum rate of reaction assumes that the liquid film is saturated with gas at the interfacial concentration C_{A,i}; thus, the maximum rate of reaction is k₂ C_{A,i} C_B, and Ha = √(k₂ C_B D_A) / k_L.
For a reaction of order m in A and order n in B:
Ha = √( (2 / (m + 1)) k_{mn} C_{A,i}^(m−1) C_B^n D_A ) / k_L.
It is an important parameter used in chemical reaction engineering.
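As a numerical illustration — assuming the common textbook form Ha = √(k₂ C_B D_A)/k_L for a reaction that is second order overall, with illustrative (not sourced) parameter values:

```python
import math

def hatta_second_order(k2, c_b, d_a, k_l):
    """Hatta number for a gas-liquid reaction A + B -> products that is
    second order overall, assuming Ha = sqrt(k2 * C_B * D_A) / k_L."""
    return math.sqrt(k2 * c_b * d_a) / k_l

# Illustrative SI values: k2 in m^3/(mol*s), C_B in mol/m^3,
# D_A in m^2/s, k_L in m/s.
ha = hatta_second_order(k2=10.0, c_b=100.0, d_a=1e-9, k_l=1e-4)
print(ha)  # Ha ~ 10: the reaction is fast, occurring mostly within the liquid film
```
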
Document 4:::
Activation, in chemistry and biology, is the process whereby something is prepared or excited for a subsequent reaction.
Chemistry
In chemistry, "activation" refers to the reversible transition of a molecule into a nearly identical chemical or physical state, with the defining characteristic being that this resultant state exhibits an increased propensity to undergo a specified chemical reaction. Thus, activation is conceptually the opposite of protection, in which the resulting state exhibits a decreased propensity to undergo a certain reaction.
The energy of activation specifies the amount of free energy the reactants must possess (in addition to their rest energy) in order to initiate their conversion into corresponding products—that is, in order to reach the transition state for the reaction. The energy needed for activation can be quite small, and often it is provided by the natural random thermal fluctuations of the molecules themselves (i.e. without any external sources of energy).
The branch of chemistry that deals with this topic is called chemical kinetics.
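The exponential dependence of reaction likelihood on activation energy can be made concrete with a small sketch; the 50 kJ/mol activation energy below is an illustrative value, not one from the text:

```python
import math

R = 8.314  # molar gas constant, J/(mol*K)

def arrhenius_factor(e_a, temp):
    """Boltzmann/Arrhenius factor exp(-Ea / RT), proportional to the
    fraction of molecules energetic enough to reach the transition state."""
    return math.exp(-e_a / (R * temp))

e_a = 50e3  # illustrative activation energy, 50 kJ/mol
print(arrhenius_factor(e_a, 298.0))  # near room temperature: ~1.7e-9
print(arrhenius_factor(e_a, 350.0))  # modest heating raises it roughly 20-fold
```
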
Biology
Biochemistry
In biochemistry, activation, specifically called bioactivation, is where enzymes or other biologically active molecules acquire the ability to perform their biological function, such as inactive proenzymes being converted into active enzymes that are able to catalyze their substrates' reactions into products. Bioactivation may also refer to the process where inactive prodrugs are converted into their active metabolites, or the toxication of protoxins into actual toxins.
An enzyme may be reversibly or irreversibly bioactivated. A major mechanism of irreversible bioactivation is where a piece of a protein is cut off by cleavage, producing an enzyme that will then stay active. A major mechanism of reversible bioactivation is substrate presentation where an enzyme translocates near its substrate. Another reversible reaction is where a cofactor binds to an enzyme, which then rem
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
At any specific time, the rate at which a reaction is proceeding is known as its what?
A. instantaneous rate
B. immediate rate
C. emitted rate
D. spontaneous rate
Answer:
|
|
ai2_arc-462
|
multiple_choice
|
A group of students plans to build a model of a local pond habitat. Which model best represents an environment similar to a pond?
|
[
"a sealed plastic bottle containing insects and algae from a pond",
"a classroom aquarium containing plants and animals bought from a store",
"a classroom aquarium containing freshwater, non-native plants, and non-native animals",
"a small plastic outdoor pool containing freshwater, native plants, and native animals"
] |
D
|
Relevant Documents:
Document 0:::
The University of Michigan Biological Station (UMBS) is a research and teaching facility operated by the University of Michigan. It is located on the south shore of Douglas Lake in Cheboygan County, Michigan. The station consists of 10,000 acres (40 km2) of land near Pellston, Michigan in the northern Lower Peninsula of Michigan and 3,200 acres (13 km2) on Sugar Island in the St. Mary's River near Sault Ste. Marie, in the Upper Peninsula. It is one of only 28 Biosphere Reserves in the United States.
Overview
Founded in 1909, it has grown to include approximately 150 buildings, including classrooms, student cabins, dormitories, a dining hall, and research facilities. Undergraduate and graduate courses are available in the spring and summer terms. It has a full-time staff of 15.
In the 2000s, UMBS is increasingly focusing on the measurement of climate change. Its field researchers are gauging the impact of global warming and increased levels of atmospheric carbon dioxide on the ecosystem of the upper Great Lakes region, and are using field data to improve the computer models used to forecast further change. Several archaeological digs have been conducted at the station as well.
UMBS field researchers sometimes call the station "bug camp" amongst themselves. This is believed to be due to the number of mosquitoes and other insects present. It is also known as "The Bio-Station".
The UMBS is also home to Michigan's most endangered species and one of the most endangered species in the world: the Hungerford's Crawling Water Beetle. The species lives in only five locations in the world, two of which are in Emmet County. One of these, a two and a half mile stretch downstream from the Douglas Road crossing of the East Branch of the Maple River supports the only stable population of the Hungerford's Crawling Water Beetle, with roughly 1000 specimens. This area, though technically not part of the UMBS is largely within and along the boundary of the University of Michigan
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
Abernathy Field Station is a outdoor ecology classroom serving Washington & Jefferson College (W&J College).
The facility, located southeast of the campus in Washington, Pennsylvania, is home to several different ecosystems, including mixed deciduous forest, conifers, several spring seeps, two perennial streams, wetlands, and a mowed field. These ecosystems support a diverse slate of wildlife, including birds, salamanders, fish, small mammals, white-tailed deer, various insects, and over 100 trees. The facility is equipped with a NexSens-brand real-time weather station and stream monitoring system to provide background data for research.
The Abernathy Field Station is operated by W&J College, allowing faculty and students to study the structure and function of the ecosystems and wildlife in it through coursework and independent research projects. Students may not conduct research at the facility without faculty supervision. Access to the land has been provided to the college by Dr. Ernest and Janet Abernathy, and the college has committed itself to preserving the ecological integrity of the land while utilizing it as an outdoor classroom.
In 2008, W&J College received a $1 million grant from the Howard Hughes Medical Institute which would provide funding for long-term ecological monitoring at the Field Station.
Gallery
Document 3:::
The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591.
On January 19, 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 4:::
The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas.
The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014.
The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools.
See also
Marine Science
Ministry of Fisheries and Aquatic Resources Development
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A group of students plans to build a model of a local pond habitat. Which model best represents an environment similar to a pond?
A. a sealed plastic bottle containing insects and algae from a pond
B. a classroom aquarium containing plants and animals bought from a store
C. a classroom aquarium containing freshwater, non-native plants, and non-native animals
D. a small plastic outdoor pool containing freshwater, native plants, and native animals
Answer:
|
|
sciq-7694
|
multiple_choice
|
What do we call the region on the lung root formed by the entrance of the nerves at the hilum?
|
[
"brain plexus",
"heart plexus",
"pulmonary plexus",
"renal plexus"
] |
C
|
Relevant Documents:
Document 0:::
In human anatomy, the hilum (plural: hila), sometimes formerly called a hilus (plural: hili), is a depression or fissure where structures such as blood vessels and nerves enter an organ. Examples include:
Hilum of kidney, admits the renal artery, vein, ureter, and nerves
Splenic hilum, on the surface of the spleen, admits the splenic artery, vein, lymph vessels, and nerves
Hilum of lung, a triangular depression where the structures which form the root of the lung enter and leave the viscus
Hilum of lymph node, the portion of a lymph node where the efferent vessels exit
Hilus of dentate gyrus, part of hippocampus that contains the mossy cells.
Anatomy
Document 1:::
The pulmonary branches of the vagus nerve can be divided into two groups: anterior and posterior.
Anterior
The Anterior Bronchial Branches (rami bronchiales anteriores; anterior or ventral pulmonary branches), two or three in number, and of small size, are distributed on the anterior surface of the root of the lung.
They join with filaments from the sympathetic, and form the anterior pulmonary plexus.
Posterior
The Posterior Bronchial Branches (rami bronchiales posteriores; posterior or dorsal pulmonary branches), more numerous and larger than the anterior, are distributed on the posterior surface of the root of the lung; they are joined by filaments from the third and fourth (sometimes also from the first and second) thoracic ganglia of the sympathetic trunk, and form the posterior pulmonary plexus.
Branches from this plexus accompany the ramifications of the bronchi through the substance of the lung.
Document 2:::
The sulcus limitans is a shallow, longitudinal groove separating the developing gray matter into basal and alar plates along the length of the neural tube. It extends the length of the spinal cord and through the mesencephalon.
Document 3:::
The splenic plexus (lienal plexus in older texts) is formed by branches from the celiac plexus, the left celiac ganglion, and from the right vagus nerve.
It accompanies the lienal artery to the spleen, giving off, in its course, subsidiary plexuses along the various branches of the artery.
Document 4:::
The suprarenal plexus is formed by branches from the celiac plexus, from the celiac ganglion, and from the phrenic and greater splanchnic nerves, a ganglion being formed at the point of junction with the latter nerve.
The plexus supplies the suprarenal gland, being distributed chiefly to its medullary portion; its branches are remarkable for their large size in comparison with that of the organ they supply.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do we call the region on the lung root formed by the entrance of the nerves at the hilum?
A. brain plexus
B. heart plexus
C. pulmonary plexus
D. renal plexus
Answer:
|
|
sciq-992
|
multiple_choice
|
Because their embryos are surrounded by a thin membrane, reptiles are considered what?
|
[
"vertebrates",
"amniotes",
"lineages",
"carnivorous"
] |
B
|
Relevant Documents:
Document 0:::
Reptiles arose about 320 million years ago during the Carboniferous period. Reptiles, in the traditional sense of the term, are defined as animals that have scales or scutes, lay land-based hard-shelled eggs, and possess ectothermic metabolisms. So defined, the group is paraphyletic, excluding endothermic animals like birds that are descended from early traditionally-defined reptiles. A definition in accordance with phylogenetic nomenclature, which rejects paraphyletic groups, includes birds while excluding mammals and their synapsid ancestors. So defined, Reptilia is identical to Sauropsida.
Though few reptiles today are apex predators, many examples of apex reptiles have existed in the past. Reptiles have an extremely diverse evolutionary history that has led to biological successes, such as dinosaurs, pterosaurs, plesiosaurs, mosasaurs, and ichthyosaurs.
First reptiles
Rise from water
Reptiles first arose from earlier tetrapods in the swamps of the late Carboniferous (Early Pennsylvanian - Bashkirian). Increasing evolutionary pressure and the vast untouched niches of the land powered evolutionary changes that made amphibians gradually more and more land-based. Environmental selection favored certain traits, such as a stronger skeletal structure, stronger muscles, and a more protective coating (scales); thus the basic foundation of reptiles was laid. The evolution of lungs and legs marks the main transitional steps towards reptiles, but the development of hard-shelled external eggs replacing the amphibious water-bound eggs is the defining feature of the class Reptilia and is what allowed these amphibians to fully leave water. Another major difference from amphibians is the increased brain size, more specifically, the enlarged cerebrum and cerebellum. Although their brain size is small when compared to birds and mammals, these enhancements prove vital in the hunting strategies of reptiles. The increased size of these two regions
Document 1:::
The "Standard Event System" (SES) to Study Vertebrate Embryos was developed in 2009 to establish a common language in comparative embryology. Homologous developmental characters are defined therein and should be recognisable in all vertebrate embryos. The SES includes a protocol on how to describe and depict vertebrate embryonic characters. The SES was initially developed for external developmental characters of organogenesis, particularly for turtle embryos. However, it is expandable both taxonomically and in regard to anatomical or molecular characters. This article should act as an overview on the species staged with SES and document the expansions of this system. New entries need to be validated based on the citation of scientific publications. The guideline on how to establish new SES-characters and to describe species can be found in the original paper of Werneburg (2009).
SES-characters are used to reconstruct ancestral developmental sequences in evolution such as that of the last common ancestor of placental mammals. Also the plasticity of developmental characters can be documented and analysed.
SES-staged species
Overview on the vertebrate species staged with SES.
SES-characters
New SES-characters are continuously described in new publications. Currently, characters of organogenesis are described for Vertebrata (V), Gnathostomata (G), Tetrapoda (T), Amniota (A), Sauropsida (S), Squamata (SQ), Mammalia (M), and Monotremata (MO). In total, 166 SES-characters are currently defined.
Document 2:::
Iguania is an infraorder of squamate reptiles that includes iguanas, chameleons, agamids, and New World lizards like anoles and phrynosomatids. Using morphological features as a guide to evolutionary relationships, the Iguania are believed to form the sister group to the remainder of the Squamata, which comprise nearly 11,000 named species, roughly 2000 of which are iguanians. However, molecular information has placed Iguania well within the Squamata as sister taxa to the Anguimorpha and closely related to snakes. The order has been under debate and revisions after being classified by Charles Lewis Camp in 1923 due to difficulties finding adequate synapomorphic morphological characteristics. Most Iguanias are arboreal but there are several terrestrial groups. They usually have primitive fleshy, non-prehensile tongues, although the tongue is highly modified in chameleons. The group has a fossil record that extends back to the Early Jurassic (the oldest known member is Bharatagama, which lived about 190 million years ago in what is now India). Today they are scattered occurring in Madagascar, the Fiji and Friendly Islands and Western Hemisphere.
Classification
The Iguania currently include these extant families:
Clade Acrodonta
Family Agamidae – agamid lizards, Old World arboreal lizards
Family Chamaeleonidae – chameleons
Clade Pleurodonta – American arboreal lizards, chuckwallas, iguanas
Family Leiocephalidae
Genus Leiocephalus: curly-tailed lizards
Family Corytophanidae – helmet lizards
Family Crotaphytidae – collared lizards, leopard lizards
Family Hoplocercidae – dwarf and spinytail iguanas
Family Iguanidae – marine, Fijian, Galapagos land, spinytail, rock, desert, green, and chuckwalla iguanas
Family Tropiduridae – tropidurine lizards
subclade of Tropiduridae Tropidurini – neotropical ground lizards
Family Dactyloidae – anoles
Family Polychrotidae
subclade of Polychrotidae Polychrus
Family Phrynosomatidae – North American spiny lizards
Family Liolaem
Document 3:::
A herpetarium is a zoological exhibition space for reptiles and amphibians, most commonly a dedicated area of a larger zoo. A herpetarium which specializes in snakes is an ophidiarium or serpentarium, which are more common as stand-alone entities also known as snake farms. Many snake farms milk snakes for venom for medical and scientific research.
Notable herpetariums
Alice Springs Reptile Centre in Alice Springs, Australia
Armadale Reptile Centre in Perth, Australia
Australian Reptile Park in Somersby, Australia
Chennai Snake Park Trust in Chennai, India
Crocodiles of the World in Brize Norton, UK
Crocosaurus Cove in Darwin, Australia
Clyde Peeling's Reptiland in Allenwood, Pennsylvania
Kentucky Reptile Zoo in Slade, Kentucky
The LAIR at the Los Angeles Zoo in Los Angeles, California
Serpent Safari in Gurnee, Illinois
Saint Louis Zoo Herpetarium in St. Louis, Missouri
Staten Island Zoo Serpentarium in New York City, New York
World of Reptiles at the Bronx Zoo in New York City, New York
See also
Herpetoculture
Bill Haast (founder of Miami Serpentarium)
Document 4:::
Osteoderms are bony deposits forming scales, plates, or other structures based in the dermis. Osteoderms are found in many groups of extant and extinct reptiles and amphibians, including lizards, crocodilians, frogs, temnospondyls (extinct amphibians), various groups of dinosaurs (most notably ankylosaurs and stegosaurians), phytosaurs, aetosaurs, placodonts, and hupehsuchians (marine reptiles with possible ichthyosaur affinities).
Osteoderms are uncommon in mammals, although they have occurred in many xenarthrans (armadillos and the extinct glyptodonts and mylodontid ground sloths). The heavy, bony osteoderms have evolved independently in many different lineages. The armadillo osteoderm is believed to develop in subcutaneous dermal tissues. These varied structures should be thought of as anatomical analogues, not homologues, and do not necessarily indicate monophyly. The structures are however derived from scutes, common to all classes of amniotes and are an example of what has been termed deep homology. In many cases, osteoderms may function as defensive armor. Osteoderms are composed of bone tissue, and are derived from a scleroblast neural crest cell population during embryonic development of the organism. The scleroblastic neural crest cell population shares some homologous characteristics associated with the dermis. Neural crest cells, through epithelial-to-mesenchymal transition, are thought to contribute to osteoderm development.
The osteoderms of modern crocodilians are heavily vascularized, and can function as both armor and as heat-exchangers, allowing these large reptiles to rapidly raise or lower their temperature. Another function is to neutralize acidosis, caused by being submerged under water for longer periods of time and leading to the accumulation of carbon dioxide in the blood. The calcium and magnesium in the dermal bone will release alkaline ions into the bloodstream, acting as a buffer against acidification of the body fluids.
See also
Ex
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Because their embryos are surrounded by a thin membrane, reptiles are considered what?
A. vertebrates
B. amniotes
C. lineages
D. carnivorous
Answer:
|
|
sciq-6663
|
multiple_choice
|
When different types of tissues work together to perform a unique function, what do they form?
|
[
"Brian",
"organ",
"organs",
"produce"
] |
B
|
Relevant Documents:
Document 0:::
In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from cells of the same type that act together in a function. Tissues of different types combine to form an organ, which has a specific function. The intestinal wall, for example, is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system.
An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs.
The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body.
Animals
Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. The sam
Document 1:::
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
H2.00.05.2.00001: Striated muscle tissue
H2.00.06.0.00001: Nerve tissue
H2.00.06.1.00001: Neuron
H2.00.06.2.00001: Synapse
H2.00.06.2.00001: Neuroglia
h3.01: Bones
h3.02: Joints
h3.03: Muscles
h3.04: Alimentary system
h3.05: Respiratory system
h3.06: Urinary system
h3.07: Genital system
h3.08:
Document 2:::
Splanchnology is the study of the visceral organs, i.e. digestive, urinary, reproductive and respiratory systems.
The term derives from the Neo-Latin splanchno-, from the Greek σπλάγχνα, meaning "viscera". More broadly, splanchnology includes all the components of the Neuro-Endo-Immune (NEI) Supersystem. An organ (or viscus) is a collection of tissues joined in a structural unit to serve a common function. In anatomy, a viscus is an internal organ, and viscera is the plural form. Organs consist of different tissues, one or more of which prevail and determine its specific structure and function. Functionally related organs often cooperate to form whole organ systems.
Viscera are the soft organs of the body. There are organs and systems of organs that differ in structure and development but they are united for the performance of a common function. Such functional collection of mixed organs, form an organ system. These organs are always made up of special cells that support its specific function. The normal position and function of each visceral organ must be known before the abnormal can be ascertained.
Healthy organs all work together cohesively, and a better understanding of how they do so helps to maintain a healthy lifestyle. Some functions cannot be accomplished by one organ alone, which is why organs form complex systems. A system of organs is a collection of homogeneous organs that share a common plan of structure, function, and development, and that are connected to each other anatomically and communicate through the NEI supersystem.
Document 3:::
Outline
h1.00: Cytology
h2.00: General histology
H2.00.01.0.00001: Stem cells
H2.00.02.0.00001: Epithelial tissue
H2.00.02.0.01001: Epithelial cell
H2.00.02.0.02001: Surface epithelium
H2.00.02.0.03001: Glandular epithelium
H2.00.03.0.00001: Connective and supportive tissues
H2.00.03.0.01001: Connective tissue cells
H2.00.03.0.02001: Extracellular matrix
H2.00.03.0.03001: Fibres of connective tissues
H2.00.03.1.00001: Connective tissue proper
H2.00.03.1.01001: Ligaments
H2.00.03.2.00001: Mucoid connective tissue; Gelatinous connective tissue
H2.00.03.3.00001: Reticular tissue
H2.00.03.4.00001: Adipose tissue
H2.00.03.5.00001: Cartilage tissue
H2.00.03.6.00001: Chondroid tissue
H2.00.03.7.00001: Bone tissue; Osseous tissue
H2.00.04.0.00001: Haemotolymphoid complex
H2.00.04.1.00001: Blood cells
H2.00.04.1.01001: Erythrocyte; Red blood cell
H2.00.04.1.02001: Leucocyte; White blood cell
H2.00.04.1.03001: Platelet; Thrombocyte
H2.00.04.2.00001: Plasma
H2.00.04.3.00001: Blood cell production
H2.00.04.4.00001: Postnatal sites of haematopoiesis
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
Document 4:::
A cell type is a classification used to identify cells that share morphological or phenotypical features. A multicellular organism may contain cells of a number of widely differing and specialized cell types, such as muscle cells and skin cells, that differ both in appearance and function yet have identical genomic sequences. Cells may have the same genotype but belong to different cell types due to the differential regulation of the genes they contain. Classification of a specific cell type is often done through the use of microscopy together with cell-surface markers (such as those from the cluster of differentiation family that are commonly used for this purpose in immunology). Recent developments in single-cell RNA sequencing have facilitated classification of cell types based on shared gene expression patterns. This has led to the discovery of many new cell types in, e.g., mouse cortex, hippocampus, dorsal root ganglion, and spinal cord.
Animals have evolved a greater diversity of cell types in a multicellular body (100–150 different cell types), compared
with 10–20 in plants, fungi, and protists. The exact number of cell types is, however, undefined, and the Cell Ontology, as of 2021, lists over 2,300 different cell types.
Multicellular organisms
All higher multicellular organisms contain cells specialised for different functions. Most distinct cell types arise from a single totipotent cell that differentiates into hundreds of different cell types during the course of development. Differentiation of cells is driven by different environmental cues (such as cell–cell interaction) and intrinsic differences (such as those caused by the uneven distribution of molecules during division). Multicellular organisms are composed of cells that fall into two fundamental types: germ cells and somatic cells. During development, somatic cells will become more specialized and form the three primary germ layers: ectoderm, mesoderm, and endoderm. After formation of the three germ layers, cells will continue to special
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
When different types of tissues work together to perform a unique function, what do they form?
A. Brain
B. organ
C. organs
D. produce
Answer:
|
|
sciq-7982
|
multiple_choice
|
The heart contracts rhythmically to pump what to the lungs and the rest of the body?
|
[
"Chyle",
"Bile",
"blood",
"Cerumen"
] |
C
|
Relavent Documents:
Document 0:::
The cardiac cycle is the performance of the human heart from the beginning of one heartbeat to the beginning of the next. It consists of two periods: one during which the heart muscle relaxes and refills with blood, called diastole, following a period of robust contraction and pumping of blood, called systole. After emptying, the heart relaxes and expands to receive another influx of blood returning from the lungs and other systems of the body, before again contracting to pump blood to the lungs and those systems. A normally performing heart must be fully expanded before it can efficiently pump again. Assuming a healthy heart and a typical rate of 70 to 75 beats per minute, each cardiac cycle, or heartbeat, takes about 0.8 second to complete the cycle.
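The timing figure quoted above is simple arithmetic; as a quick sanity check (a sketch, not part of the source), the cycle duration at a given heart rate is just 60 divided by beats per minute:

```python
def cycle_duration(bpm: float) -> float:
    """Seconds per heartbeat at the given heart rate (beats/min)."""
    return 60.0 / bpm

# At the text's typical rate of 75 beats/min, each cycle takes 0.8 s.
print(cycle_duration(75))
```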
There are two atrial and two ventricle chambers of the heart; they are paired as the left heart and the right heart—that is, the left atrium with the left ventricle, the right atrium with the right ventricle—and they work in concert to repeat the cardiac cycle continuously (see cycle diagram at right margin). At the start of the cycle, during ventricular diastole–early, the heart relaxes and expands while receiving blood into both ventricles through both atria; then, near the end of ventricular diastole–late, the two atria begin to contract (atrial systole), and each atrium pumps blood into the ventricle below it. During ventricular systole the ventricles are contracting and vigorously pulsing (or ejecting) two separated blood supplies from the heart—one to the lungs and one to all other body organs and systems—while the two atria are relaxed (atrial diastole). This precise coordination ensures that blood is efficiently collected and circulated throughout the body.
The mitral and tricuspid valves, also known as the atrioventricular, or AV valves, open during ventricular diastole to permit filling. Late in the filling period the atria begin to contract (atrial systole) forcing a final crop of blood into the ventric
Document 1:::
Cardiophysics is an interdisciplinary science that stands at the junction of cardiology and medical physics, with researchers using the methods of, and theories from, physics to study the cardiovascular system at different levels of its organisation, from the molecular scale to whole organisms. Having formed historically as part of systems biology, cardiophysics is designed to reveal connections between the physical mechanisms underlying the organization of the cardiovascular system and the biological features of its functioning.
Zbigniew R. Struzik appears to be the first author to have used the term in a scientific publication, in 2004.
The term cardiovascular physics is also used interchangeably.
See also
Medical physics
Important publications in medical physics
Biomedicine
Biomedical engineering
Physiome
Nanomedicine
Document 2:::
Cardiac muscle (also called heart muscle or myocardium) is one of three types of vertebrate muscle tissues, with the other two being skeletal muscle and smooth muscle. It is an involuntary, striated muscle that constitutes the main tissue of the wall of the heart. The cardiac muscle (myocardium) forms a thick middle layer between the outer layer of the heart wall (the pericardium) and the inner layer (the endocardium), with blood supplied via the coronary circulation. It is composed of individual cardiac muscle cells joined by intercalated discs, and encased by collagen fibers and other substances that form the extracellular matrix.
Cardiac muscle contracts in a similar manner to skeletal muscle, although with some important differences. Electrical stimulation in the form of a cardiac action potential triggers the release of calcium from the cell's internal calcium store, the sarcoplasmic reticulum. The rise in calcium causes the cell's myofilaments to slide past each other in a process called excitation-contraction coupling.
Diseases of the heart muscle known as cardiomyopathies are of major importance. These include ischemic conditions caused by a restricted blood supply to the muscle such as angina, and myocardial infarction.
Structure
Gross anatomy
Cardiac muscle tissue or myocardium forms the bulk of the heart. The heart wall is a three-layered structure with a thick layer of myocardium sandwiched between the inner endocardium and the outer epicardium (also known as the visceral pericardium). The inner endocardium lines the cardiac chambers, covers the cardiac valves, and joins with the endothelium that lines the blood vessels that connect to the heart. On the outer aspect of the myocardium is the epicardium which forms part of the pericardial sac that surrounds, protects, and lubricates the heart.
Within the myocardium, there are several sheets of cardiac muscle cells or cardiomyocytes. The sheets of muscle that wrap around the left ventricle clos
Document 3:::
Merry L. Lindsey is an American cardiac physiologist. She is the Stokes-Shackleford Professor and Chair of the University of Nebraska Medical Center Department of Cellular and Integrative Physiology and the director of the Center for Heart and Vascular Research. In 2021, Lindsey was appointed editor-in-chief of the American Journal of Physiology. Heart and Circulatory Physiology.
Early life and education
Lindsey was born Stuart, Florida in 1970 and raised in South Florida, where she attended South Fork High School. Following high school, Lindsey earned her undergraduate degree in biology from Boston University and her PhD in cardiovascular sciences from Baylor College of Medicine.
Career
Upon completing her PhD, Lindsey worked at the Medical University of South Carolina as an assistant professor before joining the faculty at the University of Texas Health Science Center. In 2019, she left the Mississippi Center for Heart Research to accept an appointment as the Stokes-Shackleford Professor and Chair of the Department of Cellular and Integrative Physiology at the University of Nebraska Medical Center. Upon joining the department, Lindsey also became the founding director of the Center for Heart and Vascular Research. She joined Meharry Medical College as the dean of the School of Graduate Studies and Research.
In 2021, Lindsey was appointed editor-in-chief of the American Journal of Physiology. Heart and Circulatory Physiology, a journal published by the American Physiological Society. She received the Vincenzo Panagia Distinguished Lecture Award from the Institute of Cardiovascular Sciences at St-Boniface Hospital Research in 2021, and the Distinguished Investigator Award from the British Society for Cardiovascular Research in 2022.
Document 4:::
A cardiac function curve is a graph showing the relationship between right atrial pressure (x-axis) and cardiac output (y-axis). Superimposition of the cardiac function curve and the venous return curve is used in one hemodynamic model.
Shape of curve
It shows a steep relationship at relatively low filling pressures and a plateau, where further stretch is not possible and so increases in pressure have little effect on output. The pressures where there is a steep relationship lie within the normal range of right atrial pressure (RAP) found in the healthy human during life. This range is about -1 to +2 mmHg. The higher pressures normally occur only in disease, in conditions such as heart failure, where the heart is unable to pump forward all the blood returning to it and so the pressure builds up in the right atrium and the great veins. Swollen neck veins are often an indicator of this type of heart failure.
At low right atrial pressures this graph serves as a graphic demonstration of the Frank–Starling mechanism, that is as more blood is returned to the heart, more blood is pumped from it without extrinsic signals.
Changes in the cardiac function curve
In vivo, however, extrinsic factors such as an increase in activity of the sympathetic nerves and a decrease in vagal tone cause the heart to beat more frequently and more forcefully. This alters the cardiac function curve, shifting it upwards, which allows the heart to cope with the required cardiac output at a relatively low right atrial pressure. The result is what is known as a family of cardiac function curves: the heart rate increases before the plateau is reached, without the RAP having to rise dramatically to stretch the heart further and invoke the Starling effect.
In vivo sympathetic outflow within the myocardium is probably best described by the time-honored description of the sinoatrial tree branching out to Purkinje fibers. Parasympathetic inflow within the myocardium is probably best described by influ
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The heart contracts rhythmically to pump what to the lungs and the rest of the body?
A. Chyle
B. Bile
C. blood
D. Cerumen
Answer:
|
|
sciq-268
|
multiple_choice
|
What is the second most abundant element in the earth's crust?
|
[
"nitrogen",
"helium",
"silicon",
"carbon"
] |
C
|
Relavent Documents:
Document 0:::
Carbon is a primary component of all known life on Earth, representing approximately 45–50% of all dry biomass. Carbon compounds occur naturally in great abundance on Earth. Complex biological molecules consist of carbon atoms bonded with other elements, especially oxygen and hydrogen and frequently also nitrogen, phosphorus, and sulfur (collectively known as CHNOPS).
Because it is lightweight and relatively small in size, carbon molecules are easy for enzymes to manipulate. It is frequently assumed in astrobiology that if life exists elsewhere in the Universe, it will also be carbon-based. Critics refer to this assumption as carbon chauvinism.
Characteristics
Carbon is capable of forming a vast number of compounds, more than any other element, with almost ten million compounds described to date, and yet that number is but a fraction of the number of theoretically possible compounds under standard conditions. The enormous diversity of carbon-containing compounds, known as organic compounds, has led to a distinction between them and compounds that do not contain carbon, known as inorganic compounds. The branch of chemistry that studies organic compounds is known as organic chemistry.
Carbon is the 15th most abundant element in the Earth's crust, and the fourth most abundant element in the universe by mass, after hydrogen, helium, and oxygen. Carbon's widespread abundance, its ability to form stable bonds with numerous other elements, and its unusual ability to form polymers at the temperatures commonly encountered on Earth enables it to serve as a common element of all known living organisms. In a 2018 study, carbon was found to compose approximately 550 billion tons of all life on Earth. It is the second most abundant element in the human body by mass (about 18.5%) after oxygen.
The most important characteristics of carbon as a basis for the chemistry of life are that each carbon atom is capable of forming up to four valence bonds with other atoms simultaneously
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
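The correct choice ("decreases") can also be checked numerically. The sketch below assumes a reversible adiabatic process and a monatomic ideal gas (γ = 5/3), using the relation T·V^(γ−1) = const; the function name is mine, not from the source:

```python
GAMMA = 5.0 / 3.0  # heat capacity ratio for a monatomic ideal gas (assumption)

def adiabatic_final_T(T1: float, V1: float, V2: float) -> float:
    """Final temperature after a reversible adiabatic volume change V1 -> V2."""
    return T1 * (V1 / V2) ** (GAMMA - 1.0)

# Doubling the volume from 300 K: the temperature drops, so "decreases" is correct.
print(adiabatic_final_T(300.0, 1.0, 2.0) < 300.0)  # True
```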
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
See also
List of minerals
Document 3:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 4:::
An ecosphere is a planetary closed ecological system. In this global ecosystem, the various forms of energy and matter that constitute a given planet interact on a continual basis. The forces of the four Fundamental interactions cause the various forms of matter to settle into identifiable layers. These layers are referred to as component spheres with the type and extent of each component sphere varying significantly from one particular ecosphere to another. Component spheres that represent a significant portion of an ecosphere are referred to as a primary component spheres. For instance, Earth's ecosphere consists of five primary component spheres which are the Geosphere, Hydrosphere, Biosphere, Atmosphere, and Magnetosphere.
Types of component spheres
Geosphere
The layer of an ecosphere that exists at a terrestrial planet's center of mass and which extends radially outward until ending in a solid and spherical layer known as the crust.
This includes rocks and minerals that are present on the Earth, as well as parts of soil and skeletal remains of animals that have become fossilized over the years. This is the domain of the rock cycle: rocks metamorphose, weather, wash away, and are buried again before being resurrected as new rock. The primary agent driving these processes is the movement of Earth's tectonic plates, which creates mountains, volcanoes, and ocean basins. The outer core of the Earth contains liquid iron, which is an important factor in the geosphere as well as the magnetosphere.
Hydrosphere
The total mass of water, regardless of phase (e.g. liquid, solid, gas), that exists within an ecosphere. It's possible for the hydrosphere to be highly distributed throughout other component spheres such as the geosphere and atmosphere.
There are about 1.4 billion km³ of water on Earth. That includes liquid water in the ocean, lakes, and rivers. It includes frozen water in snow, ice, and glaciers, and water that's underground in soils and rocks
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the second most abundant element in the earth's crust?
A. nitrogen
B. helium
C. silicon
D. carbon
Answer:
|
|
sciq-166
|
multiple_choice
|
What distinguishing characteristic of annelid anatomy shows specialization and adaptation?
|
[
"compression",
"beautiful",
"asymmetry",
"segmentation"
] |
D
|
Relavent Documents:
Document 0:::
The protocerebrum is the first segment of the panarthropod brain.
Recent studies suggest that it comprises two regions.
Region associated with the expression of six3
six3 is a transcription factor that marks the anteriormost part of the developing body in a whole host of Metazoa.
In the panarthropod brain, the anteriormost (rostralmost) part of the germband expresses six3. This region is described as medial, and corresponds to the annelid prostomium.
In arthropods, it contains the pars intercerebralis and pars lateralis.
six3 is associated with the euarthropod labrum and the onychophoran frontal appendages (antennae).
Region associated with the expression of orthodenticle
The other region expresses homologues of orthodenticle, Otx or otd. This region is more caudal and lateral, and bears the eyes.
Orthodenticle is associated with the protocerebral bridge, part of the central complex, traditionally a marker of the prosocerebrum.
In the annelid brain, Otx expression characterises the peristomium, but also creeps forwards into the regions of the prostomium that bear the larval eyes.
Names of regions
Inconsistent use of the terms archicerebrum and the prosocerebrum makes them confusing.
The regions were defined by Siewing (1963): the archicerebrum as containing the ocular lobes and the mushroom bodies (= corpora pedunculata), and the prosocerebrum as comprising the central complex.
The archicerebrum has traditionally been equated with the anteriormost, 'non-segmental' part of the protocerebrum, equivalent to the acron in older terminology.
The prosocerebrum is then equivalent to the 'segmental' part of the protocerebrum, bordered by segment polarity genes such as engrailed, and (on one interpretation) bearing modified segmental appendages (= camera-type eyes).
But Urbach and Technau (2003) complicate the matter by seeing the prosocerebrum (central complex) + labrum as the anteriormost region
Strausfeld 2016 identifies the anteriormost part of the b
Document 1:::
Polydactyly in stem-tetrapods should here be understood as having more than five digits to the finger or foot, a condition that was the natural state of affairs in the earliest stegocephalians during the evolution of terrestriality. The polydactyly in these largely aquatic animals is not to be confused with polydactyly in the medical sense, i.e. it was not an anomaly in the sense it was not a congenital condition of having more than the typical number of digits for a given taxon. Rather, it appears to be a result of the early evolution from a limb with a fin rather than digits.
"Living tetrapods, such as the frogs, turtles, birds and mammals, are a subgroup of the tetrapod lineage. The lineage also includes finned and limbed tetrapods that are more closely related to living tetrapods than to living lungfishes." Tetrapods evolved from animals with fins such as found in lobe-finned fishes. From this condition a new pattern of limb formation evolved, where the development axis of the limb rotated to sprout secondary axes along the lower margin, giving rise to a variable number of very stout skeletal supports for a paddle-like foot. The condition is thought to have arisen from the loss of the fin ray-forming proteins actinodin 1 and actinodin 2 or modification of the expression of HOXD13. It is still unknown why exactly this happens. "SHH is produced by the mesenchymal cells of the zone of polarizing activity (ZPA) found at the posterior margin of the limbs of all vertebrates with paired appendages, including the most primitive chondrichthyan fishes. Its expression is driven by a well-conserved limb-specific enhancer called the ZRS (zone of polarizing region activity regulatory sequence) that is located approximately 1 Mb upstream of the coding sequence of Shh."
Devonian taxa were polydactylous. Acanthostega had eight digits on both the hindlimbs and forelimbs. Ichthyostega, which was both more derived and more specialized, had seven digits on the hindlimb, though th
Document 2:::
A cnidariologist is a zoologist specializing in Cnidaria, a group of freshwater and marine aquatic animals that include the sea anemones, corals, and jellyfish.
Examples
Edward Thomas Browne (1866-1937)
Henry Bryant Bigelow (1879-1967)
Randolph Kirkpatrick (1863–1950)
Kamakichi Kishinouye (1867-1929)
Paul Lassenius Kramp (1887-1975)
Alfred G. Mayer (1868-1922)
See also
Document 3:::
Myomeres are blocks of skeletal muscle tissue arranged in sequence, commonly found in aquatic chordates. Myomeres are separated from adjacent myomeres by connective fascia (myosepta) and most easily seen in larval fishes or in the olm. Myomere counts are sometimes used for identifying specimens, since their number corresponds to the number of vertebrae in the adults. Location varies, with some species containing these only near the tails, while some have them located near the scapular or pelvic girdles. Depending on the species, myomeres could be arranged in an epaxial or hypaxial manner. Hypaxial refers to ventral muscles and related structures while epaxial refers to more dorsal muscles. The horizontal septum divides these two regions in vertebrates from cyclostomes to gnathostomes. In terrestrial chordates, the myomeres become fused as well as indistinct, due to the disappearance of myosepta.
Shape
The shape of myomeres varies by species. Myomeres are commonly zig-zag, "V" (lancelets), "W" (fishes), or straight (tetrapods)–shaped muscle fibers. Generally, cyclostome myomeres are arranged in vertical strips, while those of jawed fishes are folded in a complex manner due to the evolution of swimming capability. Specifically, myomeres of elasmobranchs and eels are "W"-shaped. Contrastingly, myomeres of tetrapods run vertically and do not display complex folding. Another species with simply-lain myomeres is the mudpuppy. Myomeres overlap each other in succession, meaning myomere activation also allows neighboring myomeres to activate.
Myomeres are made up of myoglobin-rich dark muscle as well as white muscle. Dark muscle, generally, functions as slow-twitch muscle fibers while white muscle is composed of fast-twitch fibers.
Function
Specifically, three types of myomeres in fish-like chordates include amphioxine (lancelet), cyclostomine (jawless fish), and gnathostomine (jawed fish). A common function shared by all of these is that they function to flex the body lateral
Document 4:::
Meristics is an area of zoology and botany which relates to counting quantitative features of animals and plants, such as the number of fins or scales in fish. A meristic (countable trait) can be used to describe a particular species, or used to identify an unknown species. Meristic traits are often described in a shorthand notation called a meristic formula.
Meristic characters are the countable structures occurring in series (e.g. myomeres, vertebrae, fin rays). These characters are among the characters most commonly used for differentiation of species and populations. In the salmonids, scale counts have been most widely used for the differentiation of populations within species. In rainbow and steelhead trout the most notable differences among populations occur in counts of scales. Meristic comparison is used in phenetic and cladistic analysis.
Meristic analysis
A meristic study is often a difficult task. For example, counting the features of a fish is not as easy as it may appear. Many meristic analyses are performed on dead fish that have been preserved in alcohol. Meristic traits are less easily observed on living fish, though it is possible. On very small fish, a microscope may be required.
Ichthyologists follow a basic set of rules when performing a meristic analysis, to remove as much ambiguity as possible. The specific practice, however, may vary depending on the type of fish. The methodology for counting meristic traits should be described by the specialist who performs the analysis.
Meristic formula
A meristic formula is a shorthand method of describing the way the bones (rays) of a bony fish's fins are arranged. It is comparable to the floral formula for flowers.
Spine counts are given in Roman numerals, e.g. XI-XIV. Ray counts are given in Arabic numerals, e.g. 11–14.
The meristic formula of the dusky spinefoot (Siganus luridus) is: D, XIV+10; A, VII+8-9; P, 16–17; V, I+3+I; GR, 18-22
This means the fish has 14 spiny rays (bones) in the first p
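The shorthand convention described above — Roman numerals for spiny rays, Arabic numerals for soft rays — is mechanical enough to parse programmatically. The sketch below is illustrative only (the function names are mine, not from the source) and handles a single fin entry such as "D, XIV+10":

```python
ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50}

def roman_to_int(numeral: str) -> int:
    """Convert a Roman-numeral spine count (e.g. 'XIV') to an integer."""
    total = 0
    for cur, nxt in zip(numeral, numeral[1:] + " "):
        value = ROMAN[cur]
        # Subtractive notation: a smaller numeral before a larger one is negative.
        total += -value if nxt in ROMAN and ROMAN[nxt] > value else value
    return total

def parse_fin(entry: str):
    """Split one fin entry like 'D, XIV+10' into (fin, spines, rays)."""
    fin, counts = (p.strip() for p in entry.split(",", 1))
    spines, rays = (p.strip() for p in counts.split("+", 1))
    return fin, roman_to_int(spines), rays  # rays kept as text: may be a range like '8-9'

print(parse_fin("D, XIV+10"))  # ('D', 14, '10')
```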
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What distinguishing characteristic of annelid anatomy shows specialization and adaptation?
A. compression
B. beautiful
C. asymmetry
D. segmentation
Answer:
|
|
sciq-8870
|
multiple_choice
|
What do electrons lose during their transfer from organic compounds to oxygen?
|
[
"potential energy",
"actual energy",
"thermal energy",
"mechanical energy"
] |
A
|
Relavent Documents:
Document 0:::
Outer sphere refers to an electron transfer (ET) event that occurs between chemical species that remain separate and intact before, during, and after the ET event. In contrast, for inner sphere electron transfer the participating redox sites undergoing ET become connected by a chemical bridge. Because the ET in outer sphere electron transfer occurs between two non-connected species, the electron is forced to move through space from one redox center to the other.
Marcus theory
The main theory that describes the rates of outer sphere electron transfer was developed by Rudolph A. Marcus in the 1950s. A major aspect of Marcus theory is the dependence of the electron transfer rate on the thermodynamic driving force (difference in the redox potentials of the electron-exchanging sites). For most reactions, the rates increase with increased driving force. A second aspect is that the rate of outer sphere electron-transfer depends inversely on the "reorganizational energy." Reorganization energy describes the changes in bond lengths and angles that are required for the oxidant and reductant to switch their oxidation states. This energy is assessed by measurements of the self-exchange rates (see below).
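The two dependencies described above — on the driving force ΔG° and on the reorganization energy λ — can be illustrated with the classical Marcus rate expression, k ∝ exp(−(ΔG° + λ)² / (4λk_BT)). The sketch below uses an arbitrary prefactor (an assumption, since only relative rates are meaningful here), with energies in eV:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def marcus_rate(dG: float, lam: float, T: float = 298.0, prefactor: float = 1.0) -> float:
    """Relative outer-sphere ET rate from classical Marcus theory (energies in eV)."""
    return prefactor * math.exp(-((dG + lam) ** 2) / (4.0 * lam * K_B * T))

# Normal region: a larger driving force (more negative dG) gives a faster rate...
print(marcus_rate(-0.5, 1.0) > marcus_rate(-0.2, 1.0))  # True
# ...until -dG exceeds lambda, after which the rate falls (the inverted region).
print(marcus_rate(-1.5, 1.0) < marcus_rate(-1.0, 1.0))  # True
```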
Outer sphere electron transfer is the most common type of electron transfer, especially in biochemistry, where redox centers are separated by several (up to about 11) angstroms by intervening protein. In biochemistry, there are two main types of outer sphere ET: ET between two biological molecules or fixed distance electron transfer, in which the electron transfers within a single biomolecule (e.g., intraprotein).
Examples
Self-exchange
Outer sphere electron transfer can occur between chemical species that are identical except for their oxidation state. This process is termed self-exchange. An example is the degenerate reaction between the tetrahedral ions permanganate and manganate:
[MnO4]− + [Mn*O4]2− → [MnO4]2− + [Mn*O4]−
For octahedral metal complexes, the rate co
Document 1:::
Electron transfer (ET) occurs when an electron relocates from an atom or molecule to another such chemical entity. ET is a mechanistic description of certain kinds of redox reactions involving transfer of electrons.
Electrochemical processes are ET reactions. ET reactions are relevant to photosynthesis and respiration and commonly involve transition metal complexes. In organic chemistry ET is a step in some commercial polymerization reactions. It is foundational to photoredox catalysis.
Classes of electron transfer
Inner-sphere electron transfer
In inner-sphere ET, the two redox centers are covalently linked during the ET. This bridge can be permanent, in which case the electron transfer event is termed intramolecular electron transfer. More commonly, however, the covalent linkage is transitory, forming just prior to the ET and then disconnecting following the ET event. In such cases, the electron transfer is termed intermolecular electron transfer. A famous example of an inner sphere ET process that proceeds via a transitory bridged intermediate is the reduction of [CoCl(NH3)5]2+ by [Cr(H2O)6]2+. In this case, the chloride ligand is the bridging ligand that covalently connects the redox partners.
Outer-sphere electron transfer
In outer-sphere ET reactions, the participating redox centers are not linked via any bridge during the ET event. Instead, the electron "hops" through space from the reducing center to the acceptor. Outer sphere electron transfer can occur between different chemical species or between identical chemical species that differ only in their oxidation state. The latter process is termed self-exchange. As an example, self-exchange describes the degenerate reaction between permanganate and its one-electron reduced relative manganate:
[MnO4]− + [Mn*O4]2− → [MnO4]2− + [Mn*O4]−
In general, if electron transfer is faster than ligand substitution, the reaction will follow the outer-sphere electron transfer.
Often occurs when one/both re
Document 2:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
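The physics behind this question can be checked directly: for a reversible adiabatic process in an ideal gas, T·V^(γ−1) is constant, so expansion lowers the temperature. A minimal numeric sketch, assuming a monatomic gas (γ = 5/3) and made-up initial conditions:

```python
def adiabatic_final_temp(t1, v1, v2, gamma=5.0 / 3.0):
    """Reversible adiabatic ideal-gas process: T * V**(gamma - 1) is constant,
    hence T2 = T1 * (V1 / V2)**(gamma - 1)."""
    return t1 * (v1 / v2) ** (gamma - 1)

# Doubling the volume of a monatomic gas that starts at 300 K:
t2 = adiabatic_final_temp(t1=300.0, v1=1.0, v2=2.0)  # about 189 K
```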
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 3:::
Adiabatic electron-transfer is a type of oxidation-reduction process. The mechanism is ubiquitous in nature in both the inorganic and biological spheres. Adiabatic electron-transfers proceed without making or breaking chemical bonds. Adiabatic electron-transfer can occur by either optical or thermal mechanisms. Electron transfer during a collision between an oxidant and a reductant occurs adiabatically on a continuous potential-energy surface.
History
Noel Hush is often credited with formulation of the theory of adiabatic electron-transfer.
Figure 1 sketches the basic elements of adiabatic electron-transfer theory. Two chemical species (ions, molecules, polymers, protein cofactors, etc.) labelled D (for “donor”) and A (for “acceptor”) become a distance R apart, either through collisions, covalent bonding, location in a material, protein or polymer structure, etc. A and D have different chemical environments. Each polarizes their surrounding condensed media. Electron-transfer theories describe the influence of a variety of parameters on the rate of electron-transfer. All electrochemical reactions occur by this mechanism. Adiabatic electron-transfer theory stresses that intricately coupled to such charge transfer is the ability of any D-A system to absorb or emit light. Hence fundamental understanding of any electrochemical process demands simultaneous understanding of the optical processes that the system can undergo.
Figure 2 sketches what happens if light is absorbed by just one of the chemical species, taken to be the charge donor. This produces an excited state of the donor. As the donor and acceptor are close to each other and surrounding matter, they experience a coupling . If the free energy change is favorable, this coupling facilitates primary charge separation to produce D+-A− , producing charged species. In this way, solar energy is captured and converted to electrical energy. This process is typical of natural photosynthesis as well as modern o
Document 4:::
Electron-rich is jargon that is used in multiple related meanings with either or both kinetic and thermodynamic implications:
with regards to electron-transfer, electron-rich species have low ionization energy and/or are reducing agents. Tetrakis(dimethylamino)ethylene is an electron-rich alkene because, unlike ethylene, it forms an isolable radical cation. In contrast, the electron-poor alkene tetracyanoethylene is an electron acceptor, forming isolable anions.
with regards to acid-base reactions, electron-rich species have high pKa's and react with weak Lewis acids.
with regards to nucleophilic substitution reactions, electron-rich species are relatively strong nucleophiles, as judged by rates of attack by electrophiles. For example, compared to benzene, pyrrole is more rapidly attacked by electrophiles. Pyrrole is therefore considered to be an electron-rich aromatic ring. Similarly, benzene derivatives with electron-donating groups (EDGs) are attacked by electrophiles faster than in benzene. The electron-donating vs electron-withdrawing influence of various functional groups have been extensively parameterized in linear free energy relationships.
with regards to Lewis acidity, electron-rich species are strong Lewis bases.
See also
Electron-withdrawing group
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do electrons lose during their transfer from organic compounds to oxygen?
A. potential energy
B. actual energy
C. thermal energy
D. mechanical energy
Answer:
|
|
sciq-6860
|
multiple_choice
|
Representing a leap in scientific understanding, Einstein described what as a dent in the fabric of space and time?
|
[
"motion",
"light",
"energy",
"gravity"
] |
D
|
Relevant Documents:
Document 0:::
The Meaning of Relativity: Four Lectures Delivered at Princeton University, May 1921 is a book published by Princeton University Press in 1922 that compiled the 1921 Stafford Little Lectures at Princeton University, given by Albert Einstein. The lectures were translated into English by Edwin Plimpton Adams. The lectures and the subsequent book were Einstein's last attempt to provide a comprehensive overview of his theory of relativity and is his only book that provides an accessible overview of the physics and mathematics of general relativity. Einstein explained his goal in the preface of the book's German edition by stating he "wanted to summarize the principal thoughts and mathematical methods of relativity theory" and that his "principal aim was to let the fundamentals in the entire train of thought of the theory emerge clearly". Among other reviews, the lectures were the subject of the 2017 book The Formative Years of Relativity: The History and Meaning of Einstein's Princeton Lectures by Hanoch Gutfreund and Jürgen Renn.
Background
The book contains four of Einstein's Stafford Little Lectures that were given at Princeton University in 1921. The lectures follow a series of 1915 publications by Einstein developing the theory of general relativity. During this time, there were still many controversial issues surrounding the theories and he was still defending several of his views. The lectures and the subsequent book were Einstein's last attempt to provide a comprehensive overview of his theory of relativity. It is also his only book that provides an overview of the physics and mathematics of general relativity in a comprehensive manner that was accessible to non-specialists. Einstein explained his goal in the preface of the book's German edition by stating he "wanted to summarize the principal thoughts and mathematical methods of relativity theory" and that his "principal aim was to let the fundamentals in the entire train of thought of the theory emerge clea
Document 1:::
Beyond Einstein: The Cosmic Quest for the Theory of the Universe is a book by Michio Kaku, a theoretical physicist from the City College of New York, and Jennifer Trainer Thompson. It focuses on the development of superstring theory, which might become the unified field theory of the strong force, the weak force, electromagnetism and gravity. The book was initially published on February 1, 1987, by Bantam Books.
Overview
Beyond Einstein tries to explain the basics of superstring theory. Michio Kaku analyzes the history of theoretical physics and the struggle to formulate a unified field theory. He posits that the superstring theory might be the only theory that can unite quantum mechanics and general relativity in one theory.
Document 2:::
The Einstein Papers Project (EPP) produces the historical edition of the writings and correspondence of Albert Einstein. The EPP collects, transcribes, translates, annotates, and publishes materials from Einstein's literary estate and a multitude of other repositories, which hold Einstein-related historical sources. The staff of the project is an international collaborative group of scholars, editors, researchers, and administrators working on the ongoing authoritative edition, The Collected Papers of Albert Einstein (CPAE).
The EPP was established by Princeton University Press (PUP) in 1977 at the Institute for Advanced Study. The founding editor of the project was professor of physics John Stachel. In 1984, the project moved from Princeton to Stachel's home institution, Boston University. The first volume of the CPAE was published by PUP in 1987. The following year, historian of science Martin J. Klein of Yale University was appointed senior editor of the project. Volumes 1-6 and 8 of the series were completed during the project's time in Boston.
In 2000, professor of history Diana Kormos-Buchwald was appointed general editor and director of the EPP and established offices for the project at the California Institute of Technology (Caltech) In Pasadena, California. Volumes 7 and 9-16 of the CPAE have been completed since the project's move to Caltech. (Volume 11 in the series is a comprehensive index and bibliography to Volumes 1-10).
The CPAE volumes include Einstein's books, his published and unpublished scientific and non-scientific articles, his lecture and research notebooks, travel diaries, book reviews, appeals, and reliable records of his lectures, speeches, interviews with the press, and other oral statements. The volumes also include his professional, personal, and political correspondence. Each annotated volume, referred to as the documentary edition, presents full text documents in their original language, primarily German. Introductions, endnotes, t
Document 3:::
In philosophy, philosophy of physics deals with conceptual and interpretational issues in modern physics, many of which overlap with research done by certain kinds of theoretical physicists. Philosophy of physics can be broadly divided into three areas:
interpretations of quantum mechanics: mainly concerning issues with how to formulate an adequate response to the measurement problem and understand what the theory says about reality.
the nature of space and time: Are space and time substances, or purely relational? Is simultaneity conventional or only relative? Is temporal asymmetry purely reducible to thermodynamic asymmetry?
inter-theoretic relations: the relationship between various physical theories, such as thermodynamics and statistical mechanics. This overlaps with the issue of scientific reduction.
Philosophy of space and time
The existence and nature of space and time (or space-time) are central topics in the philosophy of physics.
Time
Time is often thought to be a fundamental quantity (that is, a quantity which cannot be defined in terms of other quantities), because time seems like a fundamentally basic concept, such that one cannot define it in terms of anything simpler. However, certain theories such as loop quantum gravity claim that spacetime is emergent. As Carlo Rovelli, one of the founders of loop quantum gravity has said: "No more fields on spacetime: just fields on fields". Time is defined via measurement—by its standard time interval. Currently, the standard time interval (called "conventional second", or simply "second") is defined as 9,192,631,770 oscillations of a hyperfine transition in the 133 caesium atom. (ISO 31-1). What time is and how it works follows from the above definition. Time then can be combined mathematically with the fundamental quantities of space and mass to define concepts such as velocity, momentum, energy, and fields.
Both Newton and Galileo,
as well as most people up until the 20th century, thought that time wa
Document 4:::
The idea of a fourth dimension has been a factor in the evolution of modern art, but use of concepts relating to higher dimensions has been little discussed by academics in the literary world. From the late 19th century onwards, many writers began to make use of possibilities opened up by the exploration of such concepts as hypercube geometry. While many writers took the fourth dimension to be one of time (as it is commonly considered today), others preferred to think of it in spatial terms, and some associated the new mathematics with wider changes in modern culture.
In science fiction, a higher "dimension" often refers to parallel or alternate universes or other imagined planes of existence. This usage is derived from the idea that to travel to parallel/alternate universes/planes of existence one must travel in a direction/dimension besides the standard ones. In effect, the other universes/planes are just a small distance away from our own, but the distance is in a fourth (or higher) spatial (or non-spatial) dimension, not the standard ones. Fifth and higher dimensions are used in the same way; for example; the Superman foe Mister Mxyzptlk comes from the fifth dimension.
Early influence
Edgar Allan Poe wrote an essay on cosmology titled Eureka (1848) which said that "space and duration are one". This is the first known instance of suggesting space and time to be different perceptions of one thing. Poe arrived at this conclusion after approximately 90 pages of reasoning but employed no mathematics.
Theoretical physicist James Clerk Maxwell is best known for his work in formulating the equations of electromagnetism. He was also a prize-winning poet, and in his last poem Paradoxical Ode; Maxwell muses on connections between science, religion and nature, touching upon higher-dimensions along the way:
Since all the tools for my untying
In four-dimensioned space are lying,
Where playful fancy intersperses
Whole avenues of universes...
Excerpt from Maxwell's Parado
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Representing a leap in scientific understanding, Einstein described what as a dent in the fabric of space and time?
A. motion
B. light
C. energy
D. gravity
Answer:
|
|
sciq-4728
|
multiple_choice
|
How do amphibians reproduce?
|
[
"biologically",
"they don't",
"sexually",
"asexually"
] |
C
|
Relevant Documents:
Document 0:::
AmphibiaWeb is an American non-profit website that provides information about amphibians. It is run by a group of universities working with the California Academy of Sciences: San Francisco State University, the University of California at Berkeley, University of Florida at Gainesville, and University of Texas at Austin.
AmphibiaWeb's goal is to provide a single page for every species of amphibian in the world so research scientists, citizen scientists and conservationists can collaborate. It added its 7000th animal in 2012, a glass frog from Peru. As of 2022, it hosted more than 8,400 species located worldwide.
Beginning
Scientist David Wake founded AmphibiaWeb in 2000. Wake had been inspired by the decline of amphibian populations across the world. He founded it at the Digital Library Project at the University of California at Berkeley in 2000. Wake came to consider AmphibiaWeb part of his legacy.
Uses
AmphibiaWeb provides information to the IUCN, CalPhotos, Encyclopedia of Life and iNaturalist, and the database is cited in scientific publications.
Document 1:::
Roshd Biological Education is a quarterly science educational magazine covering recent developments in biology and biology education for a Persian-speaking audience of biology teachers. Founded in 1985, it is published by The Teaching Aids Publication Bureau, Organization for Educational Planning and Research, Ministry of Education, Iran. Roshd Biological Education has an editorial board composed of Iranian biologists, experts in biology education, science journalists and biology teachers.
It is read by both biology teachers and students, as a way of launching innovations and new trends in biology education, and helping biology teachers to teach biology in better and more effective ways.
Magazine layout
As of Autumn 2012, the magazine is laid out as follows:
Editorial—often offering a view of point from editor in chief on an educational and/or biological topics.
Explore— New research methods and results on biology and/or education.
World— Reports and explores on biological education worldwide.
In Brief—Summaries of research news and discoveries.
Trends—showing how new technology is altering the way we live our lives.
Point of View—Offering personal commentaries on contemporary topics.
Essay or Interview—often with a pioneer of a biological and/or educational researcher or an influential scientific educational leader.
Muslim Biologists—Short histories of Muslim Biologists.
Environment—An article on Iranian environment and its problems.
News and Reports—Offering short news and reports events on biology education.
In Brief—Short articles explaining interesting facts.
Questions and Answers—Questions about biology concepts and their answers.
Book and periodical Reviews—About new publication on biology and/or education.
Reactions—Letters to the editor.
Editorial staff
Mohammad Karamudini, editor in chief
History
Roshd Biological Education started in 1985 together with many other magazines in other science and art. The first editor was Dr. Nouri-Dalooi, th
Document 2:::
An associated reproductive pattern is a seasonal change in reproduction which is highly correlated with a change in gonad and associated hormone.
Notable Model Organisms
Parthenogenic Whiptail Lizards
Document 3:::
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices".
This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions.
Topic outline
The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area:
The course is based on and tests six skills, called scientific practices which include:
In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.
Exam
Students are allowed to use a four-function, scientific, or graphing calculator.
The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score.
Score distribution
Commonly used textbooks
Biology, AP Edition by Sylvia Mader (2012, hardcover)
Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis)
Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson)
See also
Glossary of biology
A.P Bio (TV Show)
Document 4:::
Sexual characteristics are physical traits of an organism (typically of a sexually dimorphic organism) which are indicative of or resultant from biological sexual factors. These include both primary sex characteristics, such as gonads, and secondary sex characteristics.
Humans
In humans, sex organs or primary sexual characteristics, which are those a person is born with, can be distinguished from secondary sex characteristics, which develop later in life, usually during puberty. The development of both is controlled by sex hormones produced by the body after the initial fetal stage where the presence or absence of the Y-chromosome and/or the SRY gene determine development.
Male primary sex characteristics are the penis, the scrotum and the ability to ejaculate when matured. Female primary sex characteristics are the vagina, uterus, fallopian tubes, clitoris, cervix, and the ability to give birth and menstruate when matured.
Hormones that express sexual differentiation in humans include:
estrogens
progesterone
androgens such as testosterone
The following table lists the typical sexual characteristics in humans (though some of these also appear in other animals):
Other organisms
In invertebrates and plants, hermaphrodites (which have both male and female reproductive organs either at the same time or during their life cycle) are common, and in many cases, the norm.
In other varieties of multicellular life (e.g. the fungi division, Basidiomycota) sexual characteristics can be much more complex, and may involve many more than two sexes. For details on the sexual characteristics of fungi, see: Hypha and Plasmogamy.
Secondary sex characteristics in non-human animals include manes of male lions, long tail feathers of male peafowl, the tusks of male narwhals, enlarged proboscises in male elephant seals and proboscis monkeys, the bright facial and rump coloration of male mandrills, and horns in many goats and antelopes.
See also
Mammalian gesta
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How do amphibians reproduce?
A. biologically
B. they don't
C. sexually
D. asexually
Answer:
|
|
ai2_arc-440
|
multiple_choice
|
What circuit does not allow an electrical current to flow through it?
|
[
"closed",
"open",
"parallel",
"series"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 2:::
Mathematical methods are integral to the study of electronics.
Mathematics in electronics
Electronics engineering programs usually include courses in calculus (single and multivariable), complex analysis, differential equations (both ordinary and partial), linear algebra and probability. Fourier analysis and Z-transforms are also subjects which are usually included in electrical engineering programs. The Laplace transform can simplify computing RLC circuit behaviour.
Basic applications
A number of electrical laws apply to all electrical networks. These include
Faraday's law of induction: Any change in the magnetic environment of a coil of wire will cause a voltage (emf) to be "induced" in the coil.
Gauss's Law: The total of the electric flux out of a closed surface is equal to the charge enclosed divided by the permittivity.
Kirchhoff's current law: the sum of all currents entering a node is equal to the sum of all currents leaving the node; equivalently, the total current at a junction is zero.
Kirchhoff's voltage law: the directed sum of the electrical potential differences around a circuit must be zero.
Ohm's law: the voltage across a resistor is the product of its resistance and the current flowing through it, at constant temperature.
Norton's theorem: any two-terminal collection of voltage sources and resistors is electrically equivalent to an ideal current source in parallel with a single resistor.
Thévenin's theorem: any two-terminal combination of voltage sources and resistors is electrically equivalent to a single voltage source in series with a single resistor.
Millman's theorem: the voltage on the ends of branches in parallel is equal to the sum of the currents flowing in every branch divided by the total equivalent conductance.
See also Analysis of resistive circuits.
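Several of the laws listed above can be verified numerically on the simplest possible network, a two-resistor series loop; the source and resistor values below are arbitrary illustrations.

```python
def series_divider(v_source, r1, r2):
    """Two resistors in series across an ideal source.

    Ohm's law gives the loop current; Kirchhoff's voltage law requires the
    individual drops to sum back to the source voltage.
    """
    i = v_source / (r1 + r2)   # Ohm's law applied to the whole loop
    v1, v2 = i * r1, i * r2    # drop across each resistor
    return i, v1, v2

i, v1, v2 = series_divider(v_source=9.0, r1=1000.0, r2=2000.0)
# KVL check: v1 + v2 equals v_source (up to floating-point rounding)
```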
Circuit analysis is the study of methods to solve linear systems for an unknown variable.
Circuit analysis
Components
There are many electronic components currently used and they all have thei
Document 3:::
The circuit topology of an electronic circuit is the form taken by the network of interconnections of the circuit components. Different specific values or ratings of the components are regarded as being the same topology. Topology is not concerned with the physical layout of components in a circuit, nor with their positions on a circuit diagram; similarly to the mathematical concept of topology, it is only concerned with what connections exist between the components. There may be numerous physical layouts and circuit diagrams that all amount to the same topology.
Strictly speaking, replacing a component with one of an entirely different type is still the same topology. In some contexts, however, these can loosely be described as different topologies. For instance, interchanging inductors and capacitors in a low-pass filter results in a high-pass filter. These might be described as high-pass and low-pass topologies even though the network topology is identical. A more correct term for these classes of object (that is, a network where the type of component is specified but not the absolute value) is prototype network.
Electronic network topology is related to mathematical topology. In particular, for networks which contain only two-terminal devices, circuit topology can be viewed as an application of graph theory. In a network analysis of such a circuit from a topological point of view, the network nodes are the vertices of graph theory, and the network branches are the edges of graph theory.
Standard graph theory can be extended to deal with active components and multi-terminal devices such as integrated circuits. Graphs can also be used in the analysis of infinite networks.
Circuit diagrams
The circuit diagrams in this article follow the usual conventions in electronics; lines represent conductors, filled small circles represent junctions of conductors, and open small circles represent terminals for connection to the outside world. In most cases, imped
Document 4:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge about Q is then a subset of Q; the set of
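Under the standard assumption that a knowledge space is a family of states over a finite domain that contains the empty state and the full domain and is closed under union, a feasibility check can be sketched as follows. The toy domain and states are invented for illustration:

```python
from itertools import combinations

def is_knowledge_space(states, domain):
    """Check the defining closure properties of a knowledge space:
    the family contains the empty state and the full domain, and the
    union of any two feasible states is again feasible."""
    states = {frozenset(s) for s in states}
    if frozenset() not in states or frozenset(domain) not in states:
        return False
    return all(a | b in states for a, b in combinations(states, 2))

# Toy domain of three skills; skill 'c' has 'a' as a prerequisite,
# so no feasible state contains 'c' without 'a'.
domain = {"a", "b", "c"}
feasible = [set(), {"a"}, {"b"}, {"a", "b"}, {"a", "c"}, {"a", "b", "c"}]
print(is_knowledge_space(feasible, domain))  # True
```

Dropping the state {"a", "b"} from the family would break union-closure (the union of {"a"} and {"b"} would no longer be feasible), and the check would return False.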
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What circuit does not allow an electrical current to flow through it?
A. closed
B. open
C. parallel
D. series
Answer:
|
|
sciq-4169
|
multiple_choice
|
The sequence of bases in a gene translates to the sequence of what protein components?
|
[
"protein acids",
"amino acids",
"rna acids",
"molecular acids"
] |
B
|
Relevant Documents:
Document 0:::
A nucleic acid sequence is a succession of bases within the nucleotides forming alleles within a DNA (using GACT) or RNA (GACU) molecule. This succession is denoted by a series of a set of five different letters that indicate the order of the nucleotides. By convention, sequences are usually presented from the 5' end to the 3' end. For DNA, with its double helix, there are two possible directions for the notated sequence; of these two, the sense strand is used. Because nucleic acids are normally linear (unbranched) polymers, specifying the sequence is equivalent to defining the covalent structure of the entire molecule. For this reason, the nucleic acid sequence is also termed the primary structure.
The sequence represents biological information. Biological deoxyribonucleic acid represents the information which directs the functions of an organism.
Nucleic acids also have a secondary structure and tertiary structure. Primary structure is sometimes mistakenly referred to as "primary sequence". However there is no parallel concept of secondary or tertiary sequence.
Nucleotides
Nucleic acids consist of a chain of linked units called nucleotides. Each nucleotide consists of three subunits: a phosphate group and a sugar (ribose in the case of RNA, deoxyribose in DNA) make up the backbone of the nucleic acid strand, and attached to the sugar is one of a set of nucleobases. The nucleobases are important in base pairing of strands to form higher-level secondary and tertiary structures such as the famed double helix.
The possible letters are A, C, G, and T, representing the four nucleotide bases of a DNA strand – adenine, cytosine, guanine, thymine – covalently linked to a phosphodiester backbone. In the typical case, the sequences are printed abutting one another without gaps, as in the sequence AAAGTCTGAC, read left to right in the 5' to 3' direction. With regards to transcription, a sequence is on the coding strand if it has the same order as the transcribed RNA.
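The 5'-to-3' convention and base pairing described above can be illustrated with a small reverse-complement sketch, reusing the AAAGTCTGAC example sequence from the text:

```python
# Watson-Crick pairing for DNA: A<->T, C<->G.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    """Return the complementary strand of a DNA sequence.

    Complement each base, then reverse the string so the result is
    again written in the conventional 5' to 3' direction."""
    return seq.translate(COMPLEMENT)[::-1]

print(reverse_complement("AAAGTCTGAC"))  # GTCAGACTTT
```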
Document 1:::
Biomolecular structure is the intricate folded, three-dimensional shape that is formed by a molecule of protein, DNA, or RNA, and that is important to its function. The structure of these molecules may be considered at any of several length scales ranging from the level of individual atoms to the relationships among entire protein subunits. This useful distinction among scales is often expressed as a decomposition of molecular structure into four levels: primary, secondary, tertiary, and quaternary. The scaffold for this multiscale organization of the molecule arises at the secondary level, where the fundamental structural elements are the molecule's various hydrogen bonds. This leads to several recognizable domains of protein structure and nucleic acid structure, including such secondary-structure features as alpha helixes and beta sheets for proteins, and hairpin loops, bulges, and internal loops for nucleic acids.
The terms primary, secondary, tertiary, and quaternary structure were introduced by Kaj Ulrik Linderstrøm-Lang in his 1951 Lane Medical Lectures at Stanford University.
Primary structure
The primary structure of a biopolymer is the exact specification of its atomic composition and the chemical bonds connecting those atoms (including stereochemistry). For a typical unbranched, un-crosslinked biopolymer (such as a molecule of a typical intracellular protein, or of DNA or RNA), the primary structure is equivalent to specifying the sequence of its monomeric subunits, such as amino acids or nucleotides.
The primary structure of a protein is reported starting from the amino N-terminus to the carboxyl C-terminus, while the primary structure of DNA or RNA molecule is known as the nucleic acid sequence reported from the 5' end to the 3' end.
The nucleic acid sequence refers to the exact sequence of nucleotides that comprise the whole molecule. Often, the primary structure encodes sequence motifs that are of functional importance. Some examples of such motif
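The correspondence described above — a coding-strand sequence read 5' to 3' yielding a protein sequence reported N-terminus to C-terminus — can be sketched with a codon-by-codon reader. The codon table here is a tiny illustrative subset of the standard genetic code, not a complete one:

```python
# Minimal subset of the standard codon table, for illustration only.
CODON_TABLE = {
    "ATG": "Met", "AAA": "Lys", "GAA": "Glu",
    "TTT": "Phe", "TAA": "Stop",
}

def translate(dna):
    """Read codons 5'->3' on the coding strand and report amino acid
    residues N-terminus to C-terminus, halting at a stop codon."""
    residues = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == "Stop":
            break
        residues.append(aa)
    return residues

print(translate("ATGAAAGAATAA"))  # ['Met', 'Lys', 'Glu']
```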
Document 2:::
A sequence in biology is the one-dimensional ordering of monomers, covalently linked within a biopolymer; it is also referred to as the primary structure of a biological macromolecule. While it can refer to many different molecules, the term sequence is most often used to refer to a DNA sequence.
See also
Protein sequence
DNA sequence
Genotype
Self-incompatibility in plants
List of geneticists
Human Genome Project
Dot plot (bioinformatics)
Multiplex Ligation-dependent Probe Amplification
Sequence analysis
Molecular biology
Document 3:::
In molecular biology, a library is a collection of DNA fragments that is stored and propagated in a population of micro-organisms through the process of molecular cloning. There are different types of DNA libraries, including cDNA libraries (formed from reverse-transcribed RNA), genomic libraries (formed from genomic DNA) and randomized mutant libraries (formed by de novo gene synthesis where alternative nucleotides or codons are incorporated). DNA library technology is a mainstay of current molecular biology, genetic engineering, and protein engineering, and the applications of these libraries depend on the source of the original DNA fragments. There are differences in the cloning vectors and techniques used in library preparation, but in general each DNA fragment is uniquely inserted into a cloning vector and the pool of recombinant DNA molecules is then transferred into a population of bacteria (a Bacterial Artificial Chromosome or BAC library) or yeast such that each organism contains on average one construct (vector + insert). As the population of organisms is grown in culture, the DNA molecules contained within them are copied and propagated (thus, "cloned").
Terminology
The term "library" can refer to a population of organisms, each of which carries a DNA molecule inserted into a cloning vector, or alternatively to the collection of all of the cloned vector molecules.
cDNA libraries
A cDNA library represents a sample of the mRNA purified from a particular source (either a collection of cells, a particular tissue, or an entire organism), which has been converted back to a DNA template by the use of the enzyme reverse transcriptase. It thus represents the genes that were being actively transcribed in that particular source under the physiological, developmental, or environmental conditions that existed when the mRNA was purified. cDNA libraries can be generated using techniques that promote "full-length" clones or under conditions that generate shorter f
Document 4:::
The nucleic acid notation currently in use was first formalized by the International Union of Pure and Applied Chemistry (IUPAC) in 1970. This universally accepted notation uses the Roman characters G, C, A, and T, to represent the four nucleotides commonly found in deoxyribonucleic acids (DNA).
Given the rapidly expanding role for genetic sequencing, synthesis, and analysis in biology, some researchers have developed alternate notations to further support the analysis and manipulation of genetic data. These notations generally exploit size, shape, and symmetry to accomplish these objectives.
IUPAC notation
Degenerate base symbols in biochemistry are an IUPAC representation for a position on a DNA sequence that can have multiple possible alternatives. These should not be confused with non-canonical bases because each particular sequence will have in fact one of the regular bases. These are used to encode the consensus sequence of a population of aligned sequences and are used for example in phylogenetic analysis to summarise into one multiple sequences or for BLAST searches, even though IUPAC degenerate symbols are masked (as they are not coded).
Under the commonly used IUPAC system, nucleobases are represented by the first letters of their chemical names: guanine, cytosine, adenine, and thymine. This shorthand also includes eleven "ambiguity" characters associated with every possible combination of the four DNA bases. The ambiguity characters were designed to encode positional variations in order to report DNA sequencing errors, consensus sequences, or single-nucleotide polymorphisms. The IUPAC notation, including ambiguity characters and suggested mnemonics, is shown in Table 1.
Despite its broad and nearly universal acceptance, the IUPAC system has a number of limitations, which stem from its reliance on the Roman alphabet. The poor legibility of upper-case Roman characters, which are generally used when displaying genetic data, may be chief among these limi
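The ambiguity characters described above can be expanded mechanically into the concrete sequences they match. The mapping below follows the standard IUPAC degenerate codes; the example sequence is invented:

```python
from itertools import product

# IUPAC degenerate (ambiguity) codes for DNA bases.
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "CG", "W": "AT",
    "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
    "H": "ACT", "V": "ACG", "N": "ACGT",
}

def expand(degenerate):
    """Enumerate every concrete sequence a degenerate sequence matches."""
    return ["".join(p) for p in product(*(IUPAC[b] for b in degenerate))]

# R = purine (A/G), Y = pyrimidine (C/T).
print(expand("ARY"))  # ['AAC', 'AAT', 'AGC', 'AGT']
```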
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The sequence of bases in a gene translates to the sequence of what protein components?
A. protein acids
B. amino acids
C. rna acids
D. molecular acids
Answer:
|
|
sciq-8070
|
multiple_choice
|
What type of organism causes many common diseases such as strep throat and food-borne illnesses?
|
[
"prion",
"virus",
"bacteria",
"parasite"
] |
C
|
Relevant Documents:
Document 0:::
In biology, a pathogen (, "suffering", "passion" and , "producer of"), in the oldest and broadest sense, is any organism or agent that can produce disease. A pathogen may also be referred to as an infectious agent, or simply a germ.
The term pathogen came into use in the 1880s. Typically, the term pathogen is used to describe an infectious microorganism or agent, such as a virus, bacterium, protozoan, prion, viroid, or fungus. Small animals, such as helminths and insects, can also cause or transmit disease. However, these animals are usually referred to as parasites rather than pathogens. The scientific study of microscopic organisms, including microscopic pathogenic organisms, is called microbiology, while parasitology refers to the scientific study of parasites and the organisms that host them.
There are several pathways through which pathogens can invade a host. The principal pathways have different episodic time frames, but soil has the longest or most persistent potential for harboring a pathogen.
Diseases in humans that are caused by infectious agents are known as pathogenic diseases. Not all diseases are caused by pathogens, such as black lung from exposure to the pollutant coal dust, genetic disorders like sickle cell disease, and autoimmune diseases like lupus.
Pathogenicity
Pathogenicity is the potential disease-causing capacity of pathogens, involving a combination of infectivity (pathogen's ability to infect hosts) and virulence (severity of host disease). Koch's postulates are used to establish causal relationships between microbial pathogens and diseases. Whereas meningitis can be caused by a variety of bacterial, viral, fungal, and parasitic pathogens, cholera is only caused by some strains of Vibrio cholerae. Additionally, some pathogens may only cause disease in hosts with an immunodeficiency. These opportunistic infections often involve hospital-acquired infections among patients already combating another condition.
Infectivity involves path
Document 1:::
The host–pathogen interaction is defined as how microbes or viruses sustain themselves within host organisms on a molecular, cellular, organismal or population level. This term is most commonly used to refer to disease-causing microorganisms although they may not cause illness in all hosts. Because of this, the definition has been expanded to how known pathogens survive within their host, whether they cause disease or not.
On the molecular and cellular level, microbes can infect the host and divide rapidly, causing disease by being there and causing a homeostatic imbalance in the body, or by secreting toxins which cause symptoms to appear. Viruses can also infect the host with virulent DNA, which can affect normal cell processes (transcription, translation, etc.), protein folding, or evading the immune response.
Pathogenicity
Pathogen history
One of the first pathogens observed by scientists was Vibrio cholerae, described in detail by Filippo Pacini in 1854. His initial findings were just drawings of the bacteria but, up until 1880, he published many other papers concerning the bacteria. He described how it causes diarrhea as well as developed effective treatments against it. Most of these findings went unnoticed until Robert Koch rediscovered the organism in 1884 and linked it to the disease.
was discovered by Leeuwenhoek in the 1600s, but was not found to be pathogenic until the 1970s, when an EPA-sponsored symposium was held following a large outbreak in Oregon involving the parasite. Since then, many other organisms have been identified as pathogens, such as H. pylori and E. coli, which have allowed scientists to develop antibiotics to combat these harmful microorganisms.
Types of pathogens
Pathogens include bacteria, fungi, protozoa, helminths, and viruses.
Each of these different types of organisms can then be further classified as a pathogen based on its mode of transmission. This includes the following: food borne, airborne, waterborne, blood-bor
Document 2:::
Freshers' flu is a name commonly given to a battery of illnesses contracted by new students (freshers) during the first few weeks at a university or college of further education; common symptoms include fever, sore throat, severe headache, coughing and general discomfort. The illness may or may not be actual flu and is often simply a bad cold.
Causes
The most likely cause is the convergence of large numbers of people arriving from all over the world; this is a particularly elevated risk due to the COVID-19 pandemic. The poor diet and heavy consumption of alcohol during freshers' week is also reported as a cause for many of the illnesses contracted during this time. "Stress, which may be induced by tiredness, combined with a poor diet, late nights and too much alcohol, can weaken the immune system and be a recipe for ill health. All this can make students and staff working with the students more susceptible to infections within their first weeks of term." In addition to this, nearly all university academic years in the UK commence around the end of September or beginning of October, which "marks the start of the annual flu season". The increased susceptibility to illness from late nights, heavy alcohol consumption and stress peaks 2–4 weeks after arrival at university and happens to coincide with the seasonal surge in the outbreaks of colds and flu in the Northern Hemisphere.
Other effects
As well as the usual viral effects, freshers' flu can also have some psychological effects. These effects arise where the stress of leaving home and other consequences of being independent, not to mention various levels of homesickness and the attempts at making new friends, can further weaken the immune system, increasing susceptibility to illness.
See also
Freshman 15
Document 3:::
Evolution of Infectious Disease is a 1993 book by the evolutionary biologist Paul W. Ewald. In this book, Ewald contests the traditional view that parasites should evolve toward benign coexistence with their hosts. He draws on various studies that contradict this dogma and asserts his theory based on fundamental evolutionary principles. This book provides one of the first in-depth presentations of insights from evolutionary biology on various fields in health science, including epidemiology and medicine.
Infectious diseases
Infectious disease are illnesses induced by another organism. Such diseases range from mild to severe cases. The onset of infectious disease can be induced by bacteria, viruses, fungi, and parasites. Several examples of infectious diseases are as follows: tuberculosis, chickenpox, mumps, meningitis, measles, and malaria. Infectious diseases can be obtained through many routes of transmission such as inhalation, open wounds, sores, ingestion, sexual intercourse, and insect bites. Author, Paul Ewald used his book to expound upon infectious diseases in humans and animals, explain various routes of transmission as well as epidemiology as a whole. Epidemiology is defined as the study of the onset, distribution, and control of diseases. Evolutionary epidemiology focuses on the distribution of infectious diseases whereas Darwinian epidemiology focuses on human beings as hosts of infectious diseases. To fully comprehend both aspects of epidemiology, it is necessary to understand how organisms induce these diseases as well as how infected organisms counteract.
Evolution
The extensive research about pathogens shows that they can evolve within a month, whereas animal hosts such as humans take centuries to make large evolutionary changes. Parasite virulence and host resistance are variables that strongly impact a pathogen's ability to replicate and be distributed to many hosts. Parasite virulence is the level of harm a host endures due to a virus, bact
Document 4:::
Sexually transmitted infections (STIs), also referred to as sexually transmitted diseases (STDs), are infections that are commonly spread by sexual activity, especially vaginal intercourse, anal sex and oral sex. The most prevalent STIs may be carried by a significant fraction of the human population.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of organism causes many common diseases such as strep throat and food-borne illnesses?
A. prion
B. virus
C. bacteria
D. parasite
Answer:
|
|
sciq-3756
|
multiple_choice
|
Recognition of pathogens is a function of what type of response?
|
[
"immune",
"hormones",
"inhalation",
"digestion"
] |
A
|
Relevant Documents:
Document 0:::
Immunopathology is a branch of medicine that deals with immune responses associated with disease. It includes the study of the pathology of an organism, organ system, or disease with respect to the immune system, immunity, and immune responses. In biology, it refers to damage caused to an organism by its own immune response, as a result of an infection. It could be due to mismatch between pathogen and host species, and often occurs when an animal pathogen infects a human (e.g. avian flu leads to a cytokine storm which contributes to the increased mortality rate).
Types of Immunity
In all vertebrates, there are two different kinds of immunities: Innate and Adaptive immunity. Innate immunity is used to fight off non-changing antigens and is therefore considered nonspecific. It is usually a more immediate response than the adaptive immune system, usually responding within minutes to hours. It is composed of physical blockades such as the skin, but also contains nonspecific immune cells such as dendritic cells, macrophages, and basophils. The second form of immunity is Adaptive immunity. This form of immunity requires recognition of the foreign antigen before a response is produced. Once the antigen is recognized, a specific response is produced in order to destroy the specific antigen. Because of its tailored response characteristic, adaptive immunity is considered to be specific immunity. A key part of adaptive immunity that separates it from innate is the use of memory to combat the antigen in the future. When the antigen is originally introduced, the organism does not have any receptors for the antigen so it must generate them from the first time the antigen is present. The immune system then builds a memory of that antigen, which enables it to recognize the antigen quicker in the future and be able to combat it quicker and more efficiently. The more the system is exposed to the antigen, the quicker it will build up its responsiveness. Nested within Adaptive immu
Document 1:::
MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States.
Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students; the remainder require a subscription. The service is suspended with the message:
"Please check back with us in 2017".
External links
MicrobeLibrary
Microbiology
Document 2:::
A microbiologist (from Greek ) is a scientist who studies microscopic life forms and processes. This includes study of the growth, interactions and characteristics of microscopic organisms such as bacteria, algae, fungi, and some types of parasites and their vectors. Most microbiologists work in offices and/or research facilities, both in private biotechnology companies and in academia. Most microbiologists specialize in a given topic within microbiology such as bacteriology, parasitology, virology, or immunology.
Duties
Microbiologists generally work in some way to increase scientific knowledge or to utilise that knowledge in a way that improves outcomes in medicine or some industry. For many microbiologists, this work includes planning and conducting experimental research projects in some kind of laboratory setting. Others may have a more administrative role, supervising scientists and evaluating their results. Microbiologists working in the medical field, such as clinical microbiologists, may see patients or patient samples and do various tests to detect disease-causing organisms.
For microbiologists working in academia, duties include performing research in an academic laboratory, writing grant proposals to fund research, as well as some amount of teaching and designing courses. Microbiologists in industry roles may have similar duties except research is performed in industrial labs in order to develop or improve commercial products and processes. Industry jobs may also not include some degree of sales and marketing work, as well as regulatory compliance duties. Microbiologists working in government may have a variety of duties, including laboratory research, writing and advising, developing and reviewing regulatory processes, and overseeing grants offered to outside institutions. Some microbiologists work in the field of patent law, either with national patent offices or private law practices. Her duties include research and navigation of intellectual proper
Document 3:::
This list of life sciences comprises the branches of science that involve the scientific study of life – such as microorganisms, plants, and animals including human beings. This science is one of the two major branches of natural science, the other being physical science, which is concerned with non-living matter. Biology is the overall natural science that studies life, with the other life sciences as its sub-disciplines.
Some life sciences focus on a specific type of organism. For example, zoology is the study of animals, while botany is the study of plants. Other life sciences focus on aspects common to all or many life forms, such as anatomy and genetics. Some focus on the micro-scale (e.g. molecular biology, biochemistry) other on larger scales (e.g. cytology, immunology, ethology, pharmacy, ecology). Another major branch of life sciences involves understanding the mindneuroscience. Life sciences discoveries are helpful in improving the quality and standard of life and have applications in health, agriculture, medicine, and the pharmaceutical and food science industries. For example, it has provided information on certain diseases which has overall aided in the understanding of human health.
Basic life science branches
Biology – scientific study of life
Anatomy – study of form and function, in plants, animals, and other organisms, or specifically in humans
Astrobiology – the study of the formation and presence of life in the universe
Bacteriology – study of bacteria
Biotechnology – study of combination of both the living organism and technology
Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level
Bioinformatics – developing of methods or software tools for storing, retrieving, organizing and analyzing biological data to generate useful biological knowledge
Biolinguistics – the study of the biology and evolution of language.
Biological anthropology – the study of humans, non-hum
Document 4:::
Long-term close-knit interactions between symbiotic microbes and their host can alter host immune system responses to other microorganisms, including pathogens, and are required to maintain proper homeostasis. The immune system is a host defense system consisting of anatomical physical barriers as well as physiological and cellular responses, which protect the host against harmful microorganisms while limiting host responses to harmless symbionts. Humans are home to 1013 to 1014 bacteria, roughly equivalent to the number of human cells, and while these bacteria can be pathogenic to their host most of them are mutually beneficial to both the host and bacteria.
The human immune system consists of two main types of immunity: innate and adaptive. The innate immune system is made of non-specific defensive mechanisms against foreign cells inside the host including skin as a physical barrier to entry, activation of the complement cascade to identify foreign bacteria and activate necessary cell responses, and white blood cells that remove foreign substances. The adaptive immune system, or acquired immune system, is a pathogen-specific immune response that is carried out by lymphocytes through antigen presentation on MHC molecules to distinguish between self and non-self antigens.
Microbes can promote the development of the host's immune system in the gut and skin, and may help to prevent pathogens from invading. Some release anti-inflammatory products, protecting against parasitic gut microbes. Commensals promote the development of B cells that produce a protective antibody, Immunoglobulin A (IgA). This can neutralize pathogens and exotoxins, and promote the development of immune cells and mucosal immune response. However, microbes have been implicated in human diseases including inflammatory bowel disease, obesity, and cancer.
General principles
Microbial symbiosis relies on interspecies communication.
between the host and microbial symbionts. Immunity has been histori
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Recognition of pathogens is a function of what type of response?
A. immune
B. hormones
C. inhalation
D. digestion
Answer:
|
|
sciq-2241
|
multiple_choice
|
What type of archaea live in salty environments?
|
[
"sporozoans",
"halophiles",
"arthropods",
"amphibians"
] |
B
|
Relavent Documents:
Document 0:::
Archaea ( ; : archaeon ) is a domain of single-celled organisms. These microorganisms lack cell nuclei and are therefore prokaryotes. Archaea were initially classified as bacteria, receiving the name archaebacteria (in the Archaebacteria kingdom), but this term has fallen out of use.
Archaeal cells have unique properties separating them from the other two domains, Bacteria and Eukaryota. Archaea are further divided into multiple recognized phyla. Classification is difficult because most have not been isolated in a laboratory and have been detected only by their gene sequences in environmental samples. It is unknown if these are able to produce endospores.
Archaea and bacteria are generally similar in size and shape, although a few archaea have very different shapes, such as the flat, square cells of Haloquadratum walsbyi. Despite this morphological similarity to bacteria, archaea possess genes and several metabolic pathways that are more closely related to those of eukaryotes, notably for the enzymes involved in transcription and translation. Other aspects of archaeal biochemistry are unique, such as their reliance on ether lipids in their cell membranes, including archaeols. Archaea use more diverse energy sources than eukaryotes, ranging from organic compounds such as sugars, to ammonia, metal ions or even hydrogen gas. The salt-tolerant Haloarchaea use sunlight as an energy source, and other species of archaea fix carbon (autotrophy), but unlike plants and cyanobacteria, no known species of archaea does both. Archaea reproduce asexually by binary fission, fragmentation, or budding; unlike bacteria, no known species of Archaea form endospores.
The first observed archaea were extremophiles, living in extreme environments such as hot springs and salt lakes with no other organisms. Improved molecular detection tools led to the discovery of archaea in almost every habitat, including soil, oceans, and marshlands. Archaea are particularly numerous in the oceans, and
Document 1:::
The following outline is provided as an overview of and topical guide to life forms:
A life form (also spelled life-form or lifeform) is an entity that is living, such as plants (flora), animals (fauna), and fungi (funga). It is estimated that more than 99% of all species that ever existed on Earth, amounting to over five billion species, are extinct.
Earth is the only celestial body known to harbor life forms. No form of extraterrestrial life has been discovered yet.
Archaea
Archaea – a domain of single-celled microorganisms, morphologically similar to bacteria, but they possess genes and several metabolic pathways that are more closely related to those of eukaryotes, notably the enzymes involved in transcription and translation. Many archaea are extremophiles, meaning they live in harsh environments, such as hot springs and salt lakes, but they have since been found in a broad range of habitats.
Thermoproteota – a phylum of the Archaea kingdom. Initially
Thermoprotei
Sulfolobales – grow in terrestrial volcanic hot springs with optimum growth occurring
Euryarchaeota – In the taxonomy of microorganisms
Haloarchaea
Halobacteriales – in taxonomy, the Halobacteriales are an order of the Halobacteria, found in water saturated or nearly saturated with salt.
Methanobacteria
Methanobacteriales – an order of methanogenic archaea in the class Methanobacteria.
Methanococci
Methanococcales aka Methanocaldococcus jannaschii – thermophilic methanogenic archaea, meaning that it thrives at high temperatures and produces methane
Methanomicrobia
Methanosarcinales – In taxonomy, the Methanosarcinales are an order of the Methanomicrobia
Methanopyri
Methanopyrales – In taxonomy, the Methanopyrales are an order of the methanopyri.
Thermococci
Thermococcales
Thermoplasmata
Thermoplasmatales – An order of aerobic, thermophilic archaea, in the kingdom
Halophiles – organisms that thrive in high salt concentrations
Ko
Document 2:::
See also
List of Archaea genera
Document 3:::
Morphology
All three species contain genes for urease, urea, and ammonia. Nitrososphaera have a cell membrane composed of crenarchaeol, its isomer, and a glycerol dialkyl glycerol tetraether (GDGT), all of which are used for identifying ammonia-oxidizing archaea. N. viennensis has a cell diameter of 0.6–0.9 µm and is an irregular spherical coccus. Ca. N. gargensis is non-pathogenic and presents a diameter of approximately 0.9 ± 0.3 μm, forming a relatively small coccus. Ca. N. evergladensis has yet to be properly analyzed and described for morphological characteristics.
Habitat
Document 4:::
Nanoarchaeum equitans is a species of marine archaea that was discovered in 2002 in a hydrothermal vent off the coast of Iceland on the Kolbeinsey Ridge by Karl Stetter. It has been proposed as the first species in a new phylum, and is the only species within the genus Nanoarchaeum. Strains of this microbe were also found on the Sub-polar Mid Oceanic Ridge, and in the Obsidian Pool in Yellowstone National Park. Since it grows in temperatures approaching boiling, at about 80 degrees Celsius, it is considered to be a thermophile. It grows best in environments with a pH of 6, and a salinity concentration of 2%. Nanoarchaeum appears to be an obligate symbiont on the archaeon Ignicoccus; it must be in contact with the host organism to survive. Nanoarchaeum equitans cannot synthesize lipids but obtains them from its host. Its cells are only 400 nm in diameter, making it the smallest known living organism, and the smallest known archaeon.
The N. equitans genome consists of a single circular chromosome with an average GC-content of 31.6%. It lacks almost all of the genes required for the synthesis of amino acids, nucleotides, cofactors, and lipids, but encodes everything needed for repair and replication. N. equitans contains several genes that encode proteins employed in recombination, suggesting that N. equitans can undergo homologous recombination. A total of 95% of its DNA encodes proteins or stable RNA molecules.
N. equitans has small appendages that come out of its circular structure. The cell surface is covered by a thin, lattice-shaped S-layer, which provides structure and protection for the entire cell.
Genome
Mycoplasma genitalium (580 Kbp in size, with 515 protein-coding genes) was regarded as a cellular unit with the smallest genome size until 2003 when Nanoarchaeum was sequenced (491 Kbp, with 536 protein-coding genes).
Genetically, Nanoarchaeum is peculiar in that its 16S RNA sequence is undetectable by the most common methods. Initial examination of
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of archaea live in salty environments?
A. sporozoans
B. halophiles
C. arthropods
D. amphibians
Answer:
|
|
sciq-11668
|
multiple_choice
|
What is caused by differences in density at the top and bottom of the ocean?
|
[
"deep currents",
"shallow currents",
"flat currents",
"still water"
] |
A
|
Relevant Documents:
Document 0:::
The borders of the oceans are the limits of Earth's oceanic waters. The definition and number of oceans can vary depending on the adopted criteria. The principal divisions (in descending order of area) of the five oceans are the Pacific Ocean, Atlantic Ocean, Indian Ocean, Southern (Antarctic) Ocean, and Arctic Ocean. Smaller regions of the oceans are called seas, gulfs, bays, straits, and other terms. Geologically, an ocean is an area of oceanic crust covered by water.
See also the list of seas article for the seas included in each ocean area.
Overview
Though generally described as several separate oceans, the world's oceanic waters constitute one global, interconnected body of salt water sometimes referred to as the World Ocean or Global Ocean. This concept of a continuous body of water with relatively free interchange among its parts is of fundamental importance to oceanography.
The major oceanic divisions are defined in part by the continents, various archipelagos, and other criteria. The principal divisions (in descending order of area) are the: Pacific Ocean, Atlantic Ocean, Indian Ocean, Southern (Antarctic) Ocean, and Arctic Ocean. Smaller regions of the oceans are called seas, gulfs, bays, straits, and other terms.
Geologically, an ocean is an area of oceanic crust covered by water. Oceanic crust is the thin layer of solidified volcanic basalt that covers the Earth's mantle. Continental crust is thicker but less dense. From this perspective, the Earth has three oceans: the World Ocean, the Caspian Sea, and the Black Sea. The latter two were formed by the collision of Cimmeria with Laurasia. The Mediterranean Sea is at times a discrete ocean because tectonic plate movement has repeatedly broken its connection to the World Ocean through the Strait of Gibraltar. The Black Sea is connected to the Mediterranean through the Bosporus, but the Bosporus is a natural canal cut through continental rock some 7,000 years ago, rather than a piece of oceanic sea floo
Document 1:::
Currentology is a science that studies the internal movements of water masses.
Description
In the study of fluid mechanics, researchers attempt to give a correct explanation of marine currents. Currents are caused by external driving forces such as wind, gravitational effects, Coriolis forces and physical differences between various water masses, the main parameter being the difference in density, which varies as a function of temperature and salinity.
The study of currents, combined with other factors such as tides and waves is relevant for understanding marine hydrodynamics and linked processes such as sediment transport and climate balance.
The measurement of maritime currents
The measurements of maritime currents can be made according to different techniques:
current meter
diversion buoys
See also
Document 2:::
The following outline is provided as an overview of and introduction to Oceanography.
Below is a structured list of topics on oceanography.
What type of thing is oceanography?
Oceanography can be described as all of the following:
The study of the physical and biological aspects of the ocean
An academic discipline – branch of knowledge that is taught and researched at the college or university level. Disciplines are defined (in part), and recognized by the academic journals in which research is published, and the learned societies and academic departments or faculties to which their practitioners belong.
A scientific field (a branch of science) – widely recognized category of specialized expertise within science, and typically embodies its own terminology and nomenclature. Such a field will usually be represented by one or more scientific journals, where peer-reviewed research is published. There are several geophysics-related scientific journals.
A natural science – one that seeks to elucidate the rules that govern the natural world using empirical and scientific methods.
A physical science – one that studies non-living systems.
An earth science – one that studies the planet Earth and its surroundings.
A biological science – one that studies the effect of organisms on their physical environment.
Basic oceanography concepts, processes, theories and terminology
Accretion (coastal management) – The process of coastal sediment returning to the visible portion of a beach
Acoustic seabed classification – The partitioning of a seabed acoustic image into discrete physical entities or classes
Acoustical oceanography – The use of underwater sound to study the sea, its boundaries and its contents
Advection – The transport of a substance by bulk motion
Ageostrophy – The real condition that works against geostrophic wind or geostrophic currents in the ocean, and works against an exact balance between the Coriolis force and the pressure gradient force
Astroo
Document 3:::
Ocean surface topography or sea surface topography, also called ocean dynamic topography, are highs and lows on the ocean surface, similar to the hills and valleys of Earth's land surface depicted on a topographic map.
These variations are expressed in terms of average sea surface height (SSH) relative to Earth's geoid. The main purpose of measuring ocean surface topography is to understand the large-scale ocean circulation.
Time variations
Unaveraged or instantaneous sea surface height (SSH) is most obviously affected by the tidal forces of the Moon and the Sun acting on Earth. Over longer timescales, SSH is influenced by ocean circulation. Typically, SSH anomalies resulting from these forces differ from the mean by less than ± at the global scale. Other influences include temperature, salinity, tides, waves, and the loading of atmospheric pressure. The slowest and largest variations are due to changes in Earth's gravitational field (geoid) due to the rearrangement of continents, formation of sea mounts and other redistribution of rock.
Since the Earth's gravitational field is relatively stable on decadal to centennial timescales, ocean circulation plays a more significant role in the observed variation of SSH. Across the seasonal cycle changes in patterns of warming, cooling and surface wind forcing affect circulation and influence SSH. Variations in SSH can be measured by satellite altimetry (e.g. TOPEX/Poseidon) and used to determine sea level rise and properties such as ocean heat storage.
Applications
Ocean surface topography is used to map ocean currents, which move around the ocean's "hills" and "valleys" in predictable ways. A clockwise sense of rotation is found around "hills" in the northern hemisphere and "valleys" in the southern hemisphere. This is because of the Coriolis effect. Conversely, a counterclockwise sense of rotation is found around "valleys" in the northern hemisphere and "hills" in the southern hemisphere.
Ocean surface topography is
Document 4:::
The neutral density ( ) or empirical neutral density is a density variable used in oceanography, introduced in 1997 by David R. Jackett and Trevor McDougall.
It is a function of the three state variables (salinity, temperature, and pressure) and the geographical location (longitude and latitude). It has the typical units of density (M/V).
Isosurfaces of form “neutral density surfaces”, which are closely aligned with the "neutral tangent plane". It is widely believed, although this has yet to be rigorously proven, that the flow in the deep ocean is almost entirely aligned with the neutral tangent plane, and strong lateral mixing occurs along this plane ("epineutral mixing") vs weak mixing across this plane ("dianeutral mixing").
These surfaces are widely used in water mass analyses. Neutral density is a density variable that depends on the particular state of the ocean, and hence is also a function of time, though this is often ignored. In practice, its construction from a given hydrographic dataset is achieved by means of a computational code (available for Matlab and Fortran), that contains the computational algorithm developed by Jackett and McDougall. Use of this code is currently restricted to the present day ocean.
Mathematical expression
The neutral tangent plane is the plane along which a given water parcel can move infinitesimally while remaining neutrally buoyant with its immediate environment. This is well-defined at every point in the ocean.
A neutral surface is a surface that is everywhere parallel to the neutral tangent plane.
McDougall demonstrated that the neutral tangent plane, and hence also neutral surfaces, are normal to the dianeutral vector
α∇θ − β∇S,
where S is the salinity, θ is the potential temperature, α the thermal expansion coefficient and β the saline contraction coefficient.
Thus, neutral surfaces are defined as surfaces everywhere perpendicular to this vector.
The contribution to density caused by gradients of and within the surface exactly compensa
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is caused by differences in density at the top and bottom of the ocean?
A. deep currents
B. shallow currents
C. flat currents
D. still water
Answer:
|
|
sciq-10979
|
multiple_choice
|
What is the term for an organism, or single living thing?
|
[
"single-celled organism",
"individual",
"loner",
"unique"
] |
B
|
Relevant Documents:
Document 0:::
An organism () is any biological living system that functions as an individual life form. All organisms are composed of cells. The idea of organism is based on the concept of minimal functional unit of life. Three traits have been proposed to play the main role in qualification as an organism:
noncompartmentability – structure that cannot be divided without its functionality loss,
individuality – the entity has simultaneous holding of genetic uniqueness, genetic homogeneity and autonomy,
distinctness – genetic information has to maintain open-system (a cell).
Organisms include multicellular animals, plants, and fungi; or unicellular microorganisms such as protists, bacteria, and archaea. All types of organisms are capable of reproduction, growth and development, maintenance, and some degree of response to stimuli. Most multicellular organisms differentiate into specialized tissues and organs during their development.
In 2016, a set of 355 genes from the last universal common ancestor (LUCA) of all organisms from Earth was identified.
Etymology
The term "organism" (from Greek ὀργανισμός, organismos, from ὄργανον, organon, i.e. "instrument, implement, tool, organ of sense or apprehension") first appeared in the English language in 1703 and took on its current definition by 1834 (Oxford English Dictionary). It is directly related to the term "organization". There is a long tradition of defining organisms as self-organizing beings, going back at least to Immanuel Kant's 1790 Critique of Judgment.
Definitions
An organism may be defined as an assembly of molecules functioning as a more or less stable whole that exhibits the properties of life. Dictionary definitions can be broad, using phrases such as "any living structure, such as a plant, animal, fungus or bacterium, capable of growth and reproduction". Many definitions exclude viruses and possible synthetic non-organic life forms, as viruses are dependent on the biochemical machinery of a host cell for repr
Document 1:::
In biology, cell theory is a scientific theory first formulated in the mid-nineteenth century, that organisms are made up of cells, that they are the basic structural/organizational unit of all organisms, and that all cells come from pre-existing cells. Cells are the basic unit of structure in all organisms and also the basic unit of reproduction.
The theory was once universally accepted, but now some biologists consider non-cellular entities such as viruses living organisms, and thus disagree with the first tenet. As of 2021: "expert opinion remains divided roughly a third each between yes, no and don’t know". As there is no universally accepted definition of life, discussion still continues.
History
With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the scope, he was able to see pores. This was shocking at the time as it was believed no one else had seen these. To further support his theory, Matthias Schleiden and Theodor Schwann both also studied cells of both animal and plants. What they discovered were significant differences between the two types of cells. This put forth the idea that cells were not only fundamental to plants, but animals as well.
Microscopes
The discovery of the cell was made possible through the invention of the microscope. In the first century BC, Romans were able to make glass. They discovered that objects appeared to be larger under the glass. The expanded use of lenses in eyeglasses in the 13th century probably led to wider spread use of simple microscopes (magnifying glasses) with limited magnification. Compound microscopes, which combine an objective lens with an eyepiece to view a real image achieving much higher magnification, first appeared in Europe around 1620. In 1665, Robert Hooke used a microscope
Document 2:::
The branches of science known informally as omics are various disciplines in biology whose names end in the suffix -omics, such as genomics, proteomics, metabolomics, metagenomics, phenomics and transcriptomics. Omics aims at the collective characterization and quantification of pools of biological molecules that translate into the structure, function, and dynamics of an organism or organisms.
The related suffix -ome is used to address the objects of study of such fields, such as the genome, proteome or metabolome respectively. The suffix -ome as used in molecular biology refers to a totality of some sort; it is an example of a "neo-suffix" formed by abstraction from various Greek terms in , a sequence that does not form an identifiable suffix in Greek.
Functional genomics aims at identifying the functions of as many genes as possible of a given organism. It combines
different -omics techniques such as transcriptomics and proteomics with saturated mutant collections.
Origin
The Oxford English Dictionary (OED) distinguishes three different fields of application for the -ome suffix:
in medicine, forming nouns with the sense "swelling, tumour"
in botany or zoology, forming nouns in the sense "a part of an animal or plant with a specified structure"
in cellular and molecular biology, forming nouns with the sense "all constituents considered collectively"
The -ome suffix originated as a variant of -oma, and became productive in the last quarter of the 19th century. It originally appeared in terms like sclerome or rhizome. All of these terms derive from Greek words in , a sequence that is not a single suffix, but analyzable as , the belonging to the word stem (usually a verb) and the being a genuine Greek suffix forming abstract nouns.
The OED suggests that its third definition originated as a back-formation from mitome. Early attestations include biome (1916) and genome (first coined as German Genom in 1920).
The association with chromosome in molecular bio
Document 3:::
A unicellular organism, also known as a single-celled organism, is an organism that consists of a single cell, unlike a multicellular organism that consists of multiple cells. Organisms fall into two general categories: prokaryotic organisms and eukaryotic organisms. Most prokaryotes are unicellular and are classified into bacteria and archaea. Many eukaryotes are multicellular, but some are unicellular such as protozoa, unicellular algae, and unicellular fungi. Unicellular organisms are thought to be the oldest form of life, with early protocells possibly emerging 3.8–4.0 billion years ago.
Although some prokaryotes live in colonies, they are not specialised cells with differing functions. These organisms live together, and each cell must carry out all life processes to survive. In contrast, even the simplest multicellular organisms have cells that depend on each other to survive.
Most multicellular organisms have a unicellular life-cycle stage. Gametes, for example, are reproductive unicells for multicellular organisms. Additionally, multicellularity appears to have evolved independently many times in the history of life.
Some organisms are partially unicellular, like Dictyostelium discoideum. Additionally, unicellular organisms can be multinucleate, like Caulerpa, Plasmodium, and Myxogastria.
Evolutionary hypothesis
Primitive protocells were the precursors to today's unicellular organisms. Although the origin of life is largely still a mystery, in the currently prevailing theory, known as the RNA world hypothesis, early RNA molecules would have been the basis for catalyzing organic chemical reactions and self-replication.
Compartmentalization was necessary for chemical reactions to be more likely as well as to differentiate reactions with the external environment. For example, an early RNA replicator ribozyme may have replicated other replicator ribozymes of different RNA sequences if not kept separate. Such hypothetic cells with an RNA genome instead of
Document 4:::
A species () is often defined as the largest group of organisms in which any two individuals of the appropriate sexes or mating types can produce fertile offspring, typically by sexual reproduction. It is the basic unit of classification and a taxonomic rank of an organism, as well as a unit of biodiversity. Other ways of defining species include their karyotype, DNA sequence, morphology, behaviour, or ecological niche. In addition, paleontologists use the concept of the chronospecies since fossil reproduction cannot be examined.
The most recent rigorous estimate for the total number of species of eukaryotes is between 8 and 8.7 million. About 14% of these had been described by 2011.
All species (except viruses) are given a two-part name, a "binomial". The first part of a binomial is the genus to which the species belongs. The second part is called the specific name or the specific epithet (in botanical nomenclature, also sometimes in zoological nomenclature). For example, Boa constrictor is one of the species of the genus Boa, with constrictor being the species' epithet.
While the definitions given above may seem adequate at first glance, when looked at more closely they represent problematic species concepts. For example, the boundaries between closely related species become unclear with hybridisation, in a species complex of hundreds of similar microspecies, and in a ring species. Also, among organisms that reproduce only asexually, the concept of a reproductive species breaks down, and each clone is potentially a microspecies. Although none of these are entirely satisfactory definitions, and while the concept of species may not be a perfect model of life, it is still a useful tool to scientists and conservationists for studying life on Earth, regardless of the theoretical difficulties. If species were fixed and clearly distinct from one another, there would be no problem, but evolutionary processes cause species to change. This obliges taxonomists to decide,
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term for an organism, or single living thing?
A. single-celled organism
B. individual
C. loner
D. unique
Answer:
|
|
sciq-2336
|
multiple_choice
|
The ideal gas law is used like any other gas law, with attention paid to the units and making sure that temperature is expressed in kelvin. However, the ideal gas law does not require a change in the conditions of a gas sample. The ideal gas law implies that if you know any three of the physical properties of a gas, you can calculate this?
|
[
"unrelated",
"fourth",
"second",
"third"
] |
B
|
Relevant Documents:
Document 0:::
The ideal gas law, also called the general gas equation, is the equation of state of a hypothetical ideal gas. It is a good approximation of the behavior of many gases under many conditions, although it has several limitations. It was first stated by Benoît Paul Émile Clapeyron in 1834 as a combination of the empirical Boyle's law, Charles's law, Avogadro's law, and Gay-Lussac's law. The ideal gas law is often written in an empirical form:
pV = nRT
where p, V and T are the pressure, volume and temperature respectively; n is the amount of substance; and R is the ideal gas constant.
It can also be derived from the microscopic kinetic theory, as was achieved (apparently independently) by August Krönig in 1856 and Rudolf Clausius in 1857.
Equation
The state of an amount of gas is determined by its pressure, volume, and temperature. The modern form of the equation relates these simply in two main forms. The temperature used in the equation of state is an absolute temperature: the appropriate SI unit is the kelvin.
Common forms
The most frequently introduced forms are:
pV = nRT = Nk_BT
where:
p is the absolute pressure of the gas,
V is the volume of the gas,
n is the amount of substance of gas (also known as number of moles),
R is the ideal, or universal, gas constant, equal to the product of the Boltzmann constant and the Avogadro constant,
k_B is the Boltzmann constant,
N_A is the Avogadro constant,
T is the absolute temperature of the gas,
N is the number of particles (usually atoms or molecules) of the gas.
In SI units, p is measured in pascals, V is measured in cubic metres, n is measured in moles, and T in kelvins (the Kelvin scale is a shifted Celsius scale, where 0.00 K = −273.15 °C, the lowest possible temperature). R has the value 8.314 J/(mol·K) = 1.989 ≈ 2 cal/(mol·K), or 0.0821 L⋅atm/(mol⋅K).
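Since pV = nRT links four quantities, knowing any three fixes the fourth. A minimal sketch in Python (the function names are illustrative, not from any library):

```python
R = 8.314  # molar gas constant, J/(mol*K)

def pressure(n, T, V):
    """Absolute pressure in Pa from pV = nRT (n in mol, T in K, V in m^3)."""
    return n * R * T / V

def volume(n, T, p):
    """Volume in m^3 from pV = nRT (p in Pa)."""
    return n * R * T / p

# One mole at 0 degC (273.15 K) in 22.4 L (0.0224 m^3) gives roughly 1 atm:
p = pressure(n=1.0, T=273.15, V=0.0224)  # about 1.01e5 Pa
```

The same rearrangement works for n or T, which is the sense in which the law "does not require a change in the conditions of a gas sample".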
Molar form
How much gas is present could be specified by giving the mass instead of the chemical amount of gas. Therefore, an alternative form of the ideal gas law may be useful. The chemical amount
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
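The answer is that the temperature decreases: for a reversible adiabatic process on an ideal gas, T·V^(γ−1) is constant, so increasing V lowers T. A hedged numerical check (the monatomic value γ = 5/3 is an assumption chosen for the example):

```python
def adiabatic_final_temperature(T1, V1, V2, gamma):
    """T2 from T1 * V1**(gamma - 1) == T2 * V2**(gamma - 1)
    (reversible adiabatic process, ideal gas)."""
    return T1 * (V1 / V2) ** (gamma - 1.0)

# Doubling the volume of a monatomic ideal gas starting at 300 K:
T2 = adiabatic_final_temperature(T1=300.0, V1=1.0, V2=2.0, gamma=5.0 / 3.0)
# T2 is about 189 K, i.e. the temperature decreases.
```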
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
In physics and engineering, a perfect gas is a theoretical gas model that differs from real gases in specific ways that make certain calculations easier to handle. In all perfect gas models, intermolecular forces are neglected. This means that one can neglect many complications that may arise from the Van der Waals forces. All perfect gas models are ideal gas models in the sense that they all follow the ideal gas equation of state. However, the idea of a perfect gas model is often invoked as a combination of the ideal gas equation of state with specific additional assumptions regarding the variation (or nonvariation) of the heat capacity with temperature.
Perfect gas nomenclature
The terms perfect gas and ideal gas are sometimes used interchangeably, depending on the particular field of physics and engineering. Sometimes, other distinctions are made, such as between thermally perfect gas and calorically perfect gas, or between imperfect, semi-perfect, and perfect gases, and as well as the characteristics of ideal gases. Two of the common sets of nomenclatures are summarized in the following table.
Thermally and calorically perfect gas
Along with the definition of a perfect gas, there are also two more simplifications that can be made although various textbooks either omit or combine the following simplifications into a general "perfect gas" definition.
For a fixed number of moles of gas n, a thermally perfect gas
is in thermodynamic equilibrium
is not chemically reacting
has internal energy U, enthalpy H, and constant volume / constant pressure heat capacities c_V, c_P that are solely functions of temperature and not of pressure or volume, i.e., U = U(T), H = H(T), c_V = c_V(T), c_P = c_P(T). These latter expressions hold for all tiny property changes and are not restricted to constant-V or constant-p variations.
A calorically perfect gas
is in thermodynamic equilibrium
is not chemically reacting
has internal energy U and enthalpy H that are functions of temperature only, i.e., U = U(T), H = H(T),
has heat capacities
Document 3:::
The laws describing the behaviour of gases under fixed pressure, volume and absolute temperature conditions are called Gas Laws. The basic gas laws were discovered by the end of the 18th century when scientists found out that relationships between pressure, volume and temperature of a sample of gas could be obtained which would hold to approximation for all gases. These macroscopic gas laws were found to be consistent with atomic and kinetic theory.
History
Following the invention of the Torricelli mercury barometer in the mid 17th century, the pressure-volume gas law was soon revealed by Robert Boyle while keeping temperature constant. Mariotte, however, did notice a small temperature dependence. It took another century and a half to develop thermometry and recognise the absolute zero temperature scale before the discovery of temperature-dependent gas laws.
Boyle's law
In 1662, Robert Boyle systematically studied the relationship between the volume and pressure of a fixed amount of gas at a constant temperature. He observed that the volume of a given mass of a gas is inversely proportional to its pressure at a constant temperature.
Boyle's law, published in 1662, states that, at a constant temperature, the product of the pressure and volume of a given mass of an ideal gas in a closed system is always constant. It can be verified experimentally using a pressure gauge and a variable volume container. It can also be derived from the kinetic theory of gases: if a container, with a fixed number of molecules inside, is reduced in volume, more molecules will strike a given area of the sides of the container per unit time, causing a greater pressure.
Statement
Boyle's law states that:
The concept can be represented with these formulae:
V ∝ 1/P, meaning "Volume is inversely proportional to Pressure", or
P ∝ 1/V, meaning "Pressure is inversely proportional to Volume", or
PV = k,
where P is the pressure, V is the volume of a gas, and k is the constant in this equation (and is not the same as
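The relation PV = k above can be sketched numerically (a minimal illustration, not from the source; units and values are arbitrary):

```python
# Sketch: Boyle's law P1*V1 = P2*V2 for a fixed amount of gas at constant
# temperature. k = P*V is fixed for the sample, so pressure after an
# isothermal volume change follows directly.

def boyle_pressure(p1, v1, v2):
    """Pressure after an isothermal volume change of a fixed gas sample."""
    k = p1 * v1      # the constant k = P*V for this sample at this temperature
    return k / v2

# Halving the volume doubles the pressure:
p2 = boyle_pressure(100.0, 2.0, 1.0)   # e.g. kPa and litres
print(p2)  # 200.0
```

This mirrors the kinetic-theory argument in the text: shrinking the container raises the rate at which molecules strike its walls, and hence the pressure.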
Document 4:::
The composition of a gas can be characterised by listing the pure substances it contains, and stating for each substance its proportion of the gas mixture's molecule count.
Gas composition of air
To give a familiar example, air has a composition of:
Nitrogen (N2): 78.084%
Oxygen (O2): 20.9476%
Argon (Ar): 0.934%
Carbon dioxide (CO2): 0.0314%
Standard Dry Air is the agreed-upon gas composition for air from which all water vapour has been removed. There are various standards bodies which publish documents that define a dry air gas composition. Each standard provides a list of constituent concentrations, a gas density at standard conditions and a molar mass.
It is extremely unlikely that the actual composition of any specific sample of air will completely agree with any definition for standard dry air. While the various definitions for standard dry air all attempt to provide realistic information about the constituents of air, the definitions are important in and of themselves because they establish a standard which can be cited in legal contracts and publications documenting measurement calculation methodologies or equations of state.
The standards below are two examples of commonly used and cited publications that provide a composition for standard dry air:
ISO TR 29922-2017 provides a definition for standard dry air which specifies an air molar mass of 28.96546 ± 0.00017 kg·kmol⁻¹.
GPA 2145:2009 is published by the Gas Processors Association. It provides a molar mass for air of 28.9625 g/mol, and provides a composition for standard dry air as a footnote.
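As a rough cross-check of these figures, the molar mass of dry air can be recomputed from the mole fractions listed earlier (a sketch; the constituent molar masses are standard values, not taken from this document):

```python
# Sketch: recompute the molar mass of standard dry air from the mole
# fractions of its main constituents plus standard molar masses (g/mol).
composition = {           # (mole fraction, constituent molar mass)
    "N2":  (0.78084,  28.0134),
    "O2":  (0.209476, 31.9988),
    "Ar":  (0.00934,  39.948),
    "CO2": (0.000314, 44.0095),
}

# Mixture molar mass is the mole-fraction-weighted sum of constituent masses.
molar_mass_air = sum(x * m for x, m in composition.values())
print(round(molar_mass_air, 4))  # ~28.96 g/mol, close to GPA 2145's 28.9625
```

The small residual difference from the published standards reflects trace constituents (neon, helium, methane, and others) omitted from the four-component list.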
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The ideal gas law is used like any other gas law, with attention paid to the units and making sure that temperature is expressed in kelvins. However, the ideal gas law does not require a change in the conditions of a gas sample. The ideal gas law implies that if you know any three of the physical properties of a gas, you can calculate this?
A. unrelated
B. fourth
C. second
D. third
Answer:
|
|
sciq-3950
|
multiple_choice
|
The use of mercury-based dental amalgam has come under question in recent years because of concerns regarding what?
|
[
"the toxicity",
"the expense",
"the variability",
"the oxygen"
] |
A
|
Relevant Documents:
Document 0:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 1:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 2:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 3:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 4:::
Continuing medical education (CME) is continuing education (CE) that helps those in the medical field maintain competence and learn about new and developing areas of their field. These activities may take place as live events, written publications, online programs, audio, video, or other electronic media. Content for these programs is developed, reviewed, and delivered by faculty who are experts in their individual clinical areas. Similar to the process used in academic journals, any potentially conflicting financial relationships for faculty members must be both disclosed and resolved in a meaningful way. However, critics complain that drug and device manufacturers often use their financial sponsorship to bias CMEs towards marketing their own products.
Historical context
Continuing medical education is not a new concept. From essentially the beginning of institutionalized medical instruction (medical instruction affiliated with medical colleges and teaching hospitals), health practitioners continued their learning by meeting with their peers. Grand rounds, case discussions, and meetings to discuss published medical papers constituted the continuing learning experience. In the 1950s through to the 1980s, CME was increasingly funded by the pharmaceutical industry. Concerns regarding informational bias (both intentional and unintentional) led to increasing scrutiny of the CME funding sources. This led to the establishment of certifying agencies such as the Society for Academic Continuing Medical Education which is an umbrella organization representing medical associations and bodies of academic medicine from the United States, Canada, Great Britain and Europe. The pharmaceutical industry has also developed guidelines regarding drug detailing and industry sponsorship of CME, such as the Pharmaceutical Advertising Advisory Board (PAAB) and Canada's Research-Based Pharmaceutical Companies (Rx&D).
Requirements
In the United States, many states require CME for medical p
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The use of mercury-based dental amalgam has come under question in recent years because of concerns regarding what?
A. the toxicity
B. the expense
C. the variability
D. the oxygen
Answer:
|
|
sciq-5250
|
multiple_choice
|
Approximately how many billion years ago did our solar system begin?
|
[
"three billion",
"four billion",
"five billion",
"nine billion"
] |
C
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 1:::
The history of scientific thought about the formation and evolution of the Solar System began with the Copernican Revolution. The first recorded use of the term "Solar System" dates from 1704. Since the seventeenth century, philosophers and scientists have been forming hypotheses concerning the origins of our Solar System and the Moon and attempting to predict how the Solar System would change in the future. René Descartes was the first to hypothesize on the beginning of the Solar System; however, more scientists joined the discussion in the eighteenth century, forming the groundwork for later hypotheses on the topic. Later, particularly in the twentieth century, a variety of hypotheses began to build up, including the now-commonly accepted nebular hypothesis.
Meanwhile, hypotheses explaining the evolution of the Sun originated in the nineteenth century, especially as scientists began to understand how stars in general functioned. In contrast, hypotheses attempting to explain the origin of the Moon have been circulating for centuries, although all of the widely accepted hypotheses were proven false by the Apollo missions in the mid-twentieth century. Following Apollo, in 1984, the giant impact hypothesis was composed, replacing the already-disproven binary accretion model as the most common explanation for the formation of the Moon.
Contemporary view
The most widely accepted model of planetary formation is known as the nebular hypothesis. This model posits that, 4.6 billion years ago, the Solar System was formed by the gravitational collapse of a giant molecular cloud spanning several light-years. Many stars, including the Sun, were formed within this collapsing cloud. The gas that formed the Solar System was slightly more massive than the Sun itself. Most of the mass concentrated in the center, forming the Sun, and the rest of the mass flattened into a protoplanetary disk, out of which all of the current planets, moons, asteroids, and other celestial bodies in t
Document 2:::
This article is a list of notable unsolved problems in astronomy. Some of these problems are theoretical, meaning that existing theories may be incapable of explaining certain observed phenomena or experimental results. Others are experimental, meaning that experiments necessary to test proposed theory or investigate a phenomenon in greater detail have not yet been performed. Some pertain to unique events or occurrences that have not repeated themselves and whose causes remain unclear.
Planetary astronomy
Our solar system
Orbiting bodies and rotation:
Are there any non-dwarf planets beyond Neptune?
Why do extreme trans-Neptunian objects have elongated orbits?
Rotation rate of Saturn:
Why does the magnetosphere of Saturn rotate at a rate close to that at which the planet's clouds rotate?
What is the rotation rate of Saturn's deep interior?
Satellite geomorphology:
What is the origin of the chain of high mountains that closely follows the equator of Saturn's moon, Iapetus?
Are the mountains the remnant of hot and fast-rotating young Iapetus?
Are the mountains the result of material (either from the rings of Saturn or its own ring) that over time collected upon the surface?
Extra-solar
How common are Solar System-like planetary systems? Some observed planetary systems contain Super-Earths and Hot Jupiters that orbit very close to their stars. Systems with Jupiter-like planets in Jupiter-like orbits appear to be rare. There are several possibilities why Jupiter-like orbits are rare, including that data is lacking or the grand tack hypothesis.
Stellar astronomy and astrophysics
Solar cycle:
How does the Sun generate its periodically reversing large-scale magnetic field?
How do other Sol-like stars generate their magnetic fields, and what are the similarities and differences between stellar activity cycles and that of the Sun?
What caused the Maunder Minimum and other grand minima, and how does the solar cycle recover from a minimum state?
Coronal heat
Document 3:::
The formation of the Solar System began about 4.6 billion years ago with the gravitational collapse of a small part of a giant molecular cloud. Most of the collapsing mass collected in the center, forming the Sun, while the rest flattened into a protoplanetary disk out of which the planets, moons, asteroids, and other small Solar System bodies formed.
This model, known as the nebular hypothesis, was first developed in the 18th century by Emanuel Swedenborg, Immanuel Kant, and Pierre-Simon Laplace. Its subsequent development has interwoven a variety of scientific disciplines including astronomy, chemistry, geology, physics, and planetary science. Since the dawn of the Space Age in the 1950s and the discovery of exoplanets in the 1990s, the model has been both challenged and refined to account for new observations.
The Solar System has evolved considerably since its initial formation. Many moons have formed from circling discs of gas and dust around their parent planets, while other moons are thought to have formed independently and later to have been captured by their planets. Still others, such as Earth's Moon, may be the result of giant collisions. Collisions between bodies have occurred continually up to the present day and have been central to the evolution of the Solar System. Beyond Neptune, many sub-planet sized objects formed. Several thousand trans-Neptunian objects have been observed. Unlike the planets, these trans-Neptunian objects mostly move on eccentric orbits, inclined to the plane of the planets. The positions of the planets might have shifted due to gravitational interactions. Planetary migration may have been responsible for much of the Solar System's early evolution.
In roughly 5 billion years, the Sun will cool and expand outward to many times its current diameter (becoming a red giant), before casting off its outer layers as a planetary nebula and leaving behind a stellar remnant known as a white dwarf. In the distant future, the gravity of p
Document 4:::
A fundamental ephemeris of the Solar System is a model of the objects of the system in space, with all of their positions and motions accurately represented. It is intended to be a high-precision primary reference for prediction and observation of those positions and motions, and which provides a basis for further refinement of the model. It is generally not intended to cover the entire life of the Solar System; usually a short-duration time span, perhaps a few centuries, is represented to high accuracy. Some long ephemerides cover several millennia to medium accuracy.
They are published by the Jet Propulsion Laboratory as Development Ephemeris. The latest releases include DE430 which covers planetary and lunar ephemeris from Dec 21, 1549 to Jan 25, 2650 with high precision and is intended for general use for modern time periods . DE431 was created to cover a longer time period Aug 15, -13200 to March 15, 17191 with slightly less precision for use with historic observations and far reaching forecasted positions. DE432 was released as a minor update to DE430 with improvements to the Pluto barycenter in support of the New Horizons mission.
Description
The set of physical laws and numerical constants used in the calculation of the ephemeris must be self-consistent and precisely specified. The ephemeris must be calculated strictly in accordance with this set, which represents the most current knowledge of all relevant physical forces and effects. Current fundamental ephemerides are typically released with exact descriptions of all mathematical models, methods of computation, observational data, and adjustment to the observations at the time of their announcement. This may not have been the case in the past, as fundamental ephemerides were then computed from a collection of methods derived over a span of decades by many researchers.
The independent variable of the ephemeris is always time. In the case of the most current ephemerides, it is a relativistic coordinate t
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Approximately how many billion years ago did our solar system begin?
A. three billion
B. four billion
C. five billion
D. nine billion
Answer:
|
|
sciq-10044
|
multiple_choice
|
Scientists' goal is to develop nuclear fusion power plants, where the energy from fusion of hydrogen nuclei can be converted to what?
|
[
"gasoline",
"wind",
"oil",
"electricity"
] |
D
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an Associates degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 1:::
A hydrogen fuel cell power plant uses a hydrogen fuel cell to generate electricity for the power grid. They are larger in scale than backup generators such as the Bloom Energy Server and can be up to 60% efficient in converting hydrogen to electricity. Little to no nitrogen oxides or sulfur oxides are produced in the fuel cell process, unlike in a combined cycle hydrogen power plant. If the hydrogen were produced by electrolysis, yielding so-called green hydrogen, this could be a solution to the energy storage problem of renewable energy.
Shinincheon Bitdream Hydrogen Fuel Cell Power Plant
The Shinincheon Bitdream Hydrogen Fuel Cell Power Plant in Incheon, South Korea opened in late 2021 and can produce 78.96 megawatts of power. It is one of the first large-scale fuel cell power plants serving the grid, rather than acting as a backup generator. The plant will also purify the air by sucking in 2.4 tons of fine dust per year and filtering it out of the air. It will also produce hot water as a by-product, which will be used to heat houses locally.
Cogeneration or combined cycle
Fuel cells produce a lot of hot water and a cogeneration or combined cycle could be used for further benefit or to produce more electricity with a steam turbine, increasing the efficiency to >80% using a Phosphoric acid fuel cell.
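The efficiency gain from cogeneration can be sketched with simple energy accounting (illustrative numbers of my own, not figures from the source):

```python
# Sketch: overall efficiency of a fuel cell plant with heat recovery.
# Both arguments are fractions of the fuel's chemical energy input.

def combined_efficiency(electrical_eff, heat_recovered_frac):
    """Fraction of fuel energy delivered as electricity plus useful heat."""
    return electrical_eff + heat_recovered_frac

# e.g. 60% electrical efficiency plus 25% of fuel energy recovered as hot
# water gives roughly 85% overall, consistent with the >80% figure above.
print(combined_efficiency(0.60, 0.25))
```

The point of the sketch is that heat which would otherwise be rejected counts toward the plant's useful output, which is how cogeneration pushes the overall figure past what electricity generation alone can reach.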
Water uses
Further studies are needed to see if the water is potable. Places that are dry and have water shortages could use the water for agriculture or other greywater uses.
High temperature electrolysis at nuclear power plants
High-temperature electrolysis at nuclear power plants could produce hydrogen at scale and more efficiently. The DOE Office of Nuclear Energy has demonstration projects to test 3 nuclear facilities in the United States at:
Nine Mile Point Nuclear Generating Station in Oswego, NY
Davis–Besse Nuclear Power Station in Oak Harbor, Ohio
Prairie Island Nuclear Power Plant in Red Wing, Minnesota
See also
Strateg
Document 2:::
The Advanced Fuel Cycle Initiative (AFCI) is an extensive research and development effort of the United States Department of Energy (DOE). The mission and focus of AFCI is to enable the safe, secure, economic and sustainable expansion of nuclear energy by conducting research, development, and demonstration focused on nuclear fuel recycling and waste management to meet U.S. needs.
The program was absorbed into the GNEP project, which was renamed IFNEC.
Focus
Continue critical fuel cycle research, development and demonstration (RD&D) activities
Pursue development of policy and regulatory framework to support fuel cycle closure
Determine and develop RD&D infrastructure needed to mature technologies
Establish advanced modeling and simulation program element
Implement a science-based RD&D program
Campaigns
The AFCI is an extensive RD&D effort to close the fuel cycle. The different areas within the AFCI are separated into campaigns. The RD&D of each campaign is completed by the United States Department of Energy's national laboratories.
Transmutation fuels
Fast reactor development
Separations
Waste forms
Grid Appropriate Reactor Campaign
Safeguards
Systems analysis
Modeling and simulation
Safety and regulatory
Transmutation fuels
The mission of the Transmutation Fuels Campaign is the generation of data, methods and models for fast reactor transmutation fuels and targets qualification by performing RD&D activities on fuel fabrication and performance. The campaign is led by Idaho National Laboratory.
Reactor development
The mission of the Reactor Campaign is to develop advanced recycling reactor technologies required for commercial deployment in a closed nuclear fuel cycle. The Reactor Campaign is led at Argonne National Laboratory.
Separations
The mission of the Separations Campaign is to develop and demonstrate industrially deployable and economically feasible technologies for the recycling of used nuclear fuel to provide improved safety,
Document 3:::
The STEM (Science, Technology, Engineering, and Mathematics) pipeline is a critical infrastructure for fostering the development of future scientists, engineers, and problem solvers. It's the educational and career pathway that guides individuals from early childhood through to advanced research and innovation in STEM-related fields.
Description
The "pipeline" metaphor is based on the idea that having sufficient graduates requires both having sufficient input of students at the beginning of their studies, and retaining these students through completion of their academic program. The STEM pipeline is a key component of workplace diversity and of workforce development that ensures sufficient qualified candidates are available to fill scientific and technical positions.
The STEM pipeline was promoted in the United States from the 1970s onwards, as “the push for STEM (science, technology, engineering, and mathematics) education appears to have grown from a concern for the low number of future professionals to fill STEM jobs and careers and economic and educational competitiveness.”
Today, this metaphor is commonly used to describe retention problems in STEM fields, called “leaks” in the pipeline. For example, the White House reported in 2012 that 80% of minority groups and women who enroll in a STEM field switch to a non-STEM field or drop out during their undergraduate education. These leaks often vary by field, gender, ethnic and racial identity, socioeconomic background, and other factors, drawing attention to structural inequities involved in STEM education and careers.
Current efforts
The STEM pipeline concept is a useful tool for programs aiming at increasing the total number of graduates, and is especially important in efforts to increase the number of underrepresented minorities and women in STEM fields. Using STEM methodology, educational policymakers can examine the quantity and retention of students at all stages of the K–12 educational process and beyo
Document 4:::
Nuclear Power School (NPS) is a technical school operated by the U.S. Navy in Goose Creek, South Carolina as a central part of a program that trains enlisted sailors, officers, KAPL civilians and Bettis civilians for shipboard nuclear power plant operation and maintenance of surface ships and submarines in the U.S. nuclear navy.
As of 2020 the United States Navy operates 98 nuclear power plants, including 71 submarines (each with one reactor), 11 aircraft carriers (each with two reactors), and three Moored Training Ships (MTS) and two land-based training plants.
NPS is the centerpiece of the training pipeline for U.S. Navy nuclear operators. It follows initial training at Nuclear Field "A" School (for enlisted operators) or a college degree (for officer operators and a small number of civilian contractors), and culminates with certification as a nuclear operator at one of the Navy's two Nuclear Power Training Units (NPTU).
Overview
Prospective enlisted enrollees in the Nuclear Power Program must have qualifying line scores on the ASVAB exam, may need to pass the NFQT (Nuclear Field Qualification Test), and must undergo a NACLC investigation for attaining a "Secret" security clearance. Additionally, each applicant must pass an interview with the Advanced Programs Coordinator in the associated recruiting district.
All officer students have had college-level courses in calculus and calculus-based physics. Acceptance to the officer program requires successful completion of interviews at Naval Reactors in Washington, D.C., and a final approval via a direct interview with the Director, Naval Nuclear Propulsion, a unique eight-year, four-star admiral position which was originally held by the program's founder, Admiral Hyman G. Rickover.
Women were allowed into the Naval Nuclear Field from 1978 until 1980, when the Navy began only allowing men again. With the repeal of the Combat Exclusion Law in the 1994 Defense Authorization Act, and the decision to open combatant sh
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Scientists' goal is to develop nuclear fusion power plants, where the energy from the fusion of hydrogen nuclei can be converted to what?
A. gasoline
B. wind
C. oil
D. electricity
Answer:
|
|
ai2_arc-1011
|
multiple_choice
|
Which of these tools would be best to use when observing insects in a field?
|
[
"compass",
"hand lens",
"microscope",
"thermometer"
] |
B
|
Relevant Documents:
Document 0:::
Hybrid Insect Micro-Electro-Mechanical Systems (HI-MEMS) is a project of DARPA, a unit of the United States Department of Defense. Created in 2006, the unit's goal is the creation of tightly coupled machine-insect interfaces by placing micro-mechanical systems inside the insects during the early stages of metamorphosis. After implantation, the "insect cyborgs" could be controlled by sending electrical impulses to their muscles. The primary application is surveillance. The project was created with the ultimate goal of delivering an insect within 5 meters of a target located 100 meters away from its starting point. In 2008, a team from the University of Michigan demonstrated a cyborg unicorn beetle at an academic conference in Tucson, Arizona. The beetle was able to take off and land, turn left or right, and demonstrate other flight behaviors. Researchers at Cornell University demonstrated the successful implantation of electronic probes into tobacco hornworms in the pupal stage.
Document 1:::
Entomology, the scientific study of insects and closely related terrestrial arthropods, has been impelled by the necessity of societies to protect themselves from insect-borne diseases, crop losses to pest insects, and insect-related discomfort, as well as by people's natural curiosity. This timeline article traces the history of entomology.
Timelines of entomology
Timeline of entomology – prior to 1800
Timeline of entomology – 1800–1850
Timeline of entomology – 1850–1900
Timeline of entomology – post 1900
History of classification
Many different classifications were proposed by early entomologists. It is important to realise that whilst many early names survive, they may be at different levels in the phylogenetic hierarchy. For instance, many families were first published as genera, as for example the genus Mymar, proposed by Alexander Henry Haliday in 1829, is now represented by the family Mymaridae.
History of forensic entomology
See also
European and American voyages of scientific exploration
List of natural history dealers
Document 2:::
A travelling microscope is an instrument for measuring length with a resolution typically in the order of 0.01mm. The precision is such that better-quality instruments have measuring scales made from Invar to avoid misreadings due to thermal effects. The instrument comprises a microscope mounted on two rails fixed to, or part of a very rigid bed. The position of the microscope can be varied coarsely by sliding along the rails, or finely by turning a screw. The eyepiece is fitted with fine cross-hairs to fix a precise position, which is then read off the vernier scale. Some instruments, such as that produced in the 1960s by the Precision Tool and Instrument Company of Thornton Heath, Surrey, England, also measure vertically. The purpose of the microscope is to aim at reference marks with much higher accuracy than is possible using the naked eye. It is used in laboratories to measure the refractive index of flat specimens using the geometrical concepts of ray optics (Duc de Chaulnes’ method). It is also used to measure very short distances precisely, for example the diameter of a capillary tube. This mechanical instrument has now largely been superseded by electronic- and optically based measuring devices that are both very much more accurate and considerably cheaper to produce.
A travelling microscope consists of a cast-iron base with a machined vee-top surface, fitted with three levelling screws. A metallic carriage, clamped to a spring-loaded bar, slides with its attached vernier and reading lens along an inlaid strip of metal scale. The scale is divided in half millimetres. Fine adjustments are made by means of a micrometer screw for taking accurate readings. Both verniers read to 0.01mm or 0.02mm. The microscope tube consists of a 10x eyepiece and 15mm, 50mm, or 75mm objectives. The microscope, with its rack-and-pinion attachment, is mounted on a vertical slide, which also runs with an attached vernier along the vertical scale. The microscope is free to rotate n vert
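As a rough illustration of how a travelling-microscope reading is combined, here is a short sketch. The 0.01mm least count matches the resolution quoted above; the specific scale readings are made-up example values, not from the excerpt.

```python
# Illustrative sketch: a travelling-microscope measurement is the
# main-scale reading plus the coinciding vernier division times the
# least count. Example values below are hypothetical.

def microscope_reading(main_scale_mm, vernier_division, least_count_mm=0.01):
    """Total reading = main scale + (coinciding vernier division * least count)."""
    return main_scale_mm + vernier_division * least_count_mm

# e.g. main scale shows 12.5 mm and the 7th vernier division coincides
print(round(microscope_reading(12.5, 7), 2))  # 12.57
```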
Document 3:::
The electrical penetration graph or EPG is a system used by biologists to study the interaction of insects such as aphids, thrips, and leafhoppers with plants. Therefore, it can also be used to study the basis of plant virus transmission, host plant selection by insects and the way in which insects can find and feed from the phloem of the plant. It is a simple system consisting of a partial circuit which is only completed when a species such as aphids, which are the most abundantly studied, inserts its stylet into the plant in order to probe the plant as a suitable host for feeding. The completed circuit is displayed visually as a graph with different waveforms indicating either different insect activities such as saliva excretion or the ingestion of cellular contents or indicating which tissue type has been penetrated (i.e. phloem, xylem or mesophyll). So far, around ten different graphical waveforms are known, correlating with different insect/plant interaction events.
The Circuit
The circuit connects to the insect via a 20 μm gold or platinum wire and to the plant via a copper electrode placed in the soil. The circuit also passes through, normally, a one gigaohm resistor and a 50x amplifier before the results are stored digitally and interpreted by a computer to calculate the final graph.
See also
Plant Viruses
Epidemiology
Aphididae
Insect
Document 4:::
A leaf litter sieve is a piece of equipment used by entomologists, in particular by coleopterists (beetle collectors) (Cooter 1991, page 7) as an aid to finding invertebrates in leaf litter.
A typical leaf litter sieve consists of a gauze with holes of approximately 5 to 10 mm width. The entomologist places handfuls of leaf litter into the sieve, which is placed above a white sheet or tray. The sieve is shaken, and insects are separated from the leaf litter and fall out for inspection. Charles Valentine Riley details use of a simple sieve with a cloth bag.
A more complex combination sieve is described by Hongfu.
See also
Tullgren funnel
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Which of these tools would be best to use when observing insects in a field?
A. compass
B. hand lens
C. microscope
D. thermometer
Answer:
|
|
sciq-1558
|
multiple_choice
|
What occurs when some substances change chemically to other substances?
|
[
"toxic reaction",
"spontaneous mutation",
"hormonal reaction",
"chemical reaction"
] |
D
|
Relevant Documents:
Document 0:::
Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reve
Document 1:::
A xenobiotic is a chemical substance found within an organism that is not naturally produced or expected to be present within the organism. It can also cover substances that are present in much higher concentrations than are usual. Natural compounds can also become xenobiotics if they are taken up by another organism, such as the uptake of natural human hormones by fish found downstream of sewage treatment plant outfalls, or the chemical defenses produced by some organisms as protection against predators.
The term xenobiotics, however, is very often used in the context of pollutants such as dioxins and polychlorinated biphenyls and their effect on the biota, because xenobiotics are understood as substances foreign to an entire biological system, i.e. artificial substances, which did not exist in nature before their synthesis by humans. The term xenobiotic is derived from the Greek words ξένος (xenos) = foreigner, stranger and βίος (bios) = life, plus the Greek suffix for adjectives -τικός, -ή, -όν (-tikos, -ē, -on).
Xenobiotics may be grouped as carcinogens, drugs, environmental pollutants, food additives, hydrocarbons, and pesticides.
Xenobiotic metabolism
The body removes xenobiotics by xenobiotic metabolism. This consists of the deactivation and the excretion of xenobiotics and happens mostly in the liver. Excretion routes are urine, feces, breath, and sweat. Hepatic enzymes are responsible for the metabolism of xenobiotics by first activating them (oxidation, reduction, hydrolysis, and/or hydration of the xenobiotic), and then conjugating the active secondary metabolite with glucuronic acid, sulfuric acid, or glutathione, followed by excretion in bile or urine. An example of a group of enzymes involved in xenobiotic metabolism is hepatic microsomal cytochrome P450. These enzymes that metabolize xenobiotics are very important for the pharmaceutical industry because they are responsible for the breakdown of medications. A species with this unique cytochrome P
Document 2:::
Contamination is the presence of a constituent, impurity, or some other undesirable element that spoils, corrupts, infects, makes unfit, or makes inferior a material, physical body, natural environment, workplace, etc.
Types of contamination
Within the sciences, the word "contamination" can take on a variety of subtle differences in meaning, whether the contaminant is a solid or a liquid, as well as the variance of environment the contaminant is found to be in. A contaminant may even be more abstract, as in the case of an unwanted energy source that may interfere with a process. The following represent examples of different types of contamination based on these and other variances.
Chemical contamination
In chemistry, the term "contamination" usually describes a single constituent, but in specialized fields the term can also mean chemical mixtures, even up to the level of cellular materials. All chemicals contain some level of impurity. Contamination may be recognized or not and may become an issue if the impure chemical causes additional chemical reactions when mixed with other chemicals or mixtures. Chemical reactions resulting from the presence of an impurity may at times be beneficial, in which case the label "contaminant" may be replaced with "reactant" or "catalyst." (This may be true even in physical chemistry, where, for example, the introduction of an impurity in an intrinsic semiconductor positively increases conductivity.) If the additional reactions are detrimental, other terms are often applied such as "toxin", "poison", or pollutant, depending on the type of molecule involved. Chemical decontamination of substance can be achieved through decomposition, neutralization, and physical processes, though a clear understanding of the underlying chemistry is required. Contamination of pharmaceutics and therapeutics is notoriously dangerous and creates both perceptual and technical challenges.
Environmental contamination
In environmental chemistry, the term
Document 3:::
Toxicokinetics (often abbreviated as 'TK') is the description of both what rate a chemical will enter the body and what occurs to excrete and metabolize the compound once it is in the body.
Relation to Pharmacokinetics
It is an application of pharmacokinetics to determine the relationship between the systemic exposure of a compound and its toxicity. It is used primarily for establishing relationships between exposures in toxicology experiments in animals and the corresponding exposures in humans. However, it can also be used in environmental risk assessments in order to determine the potential effects of releasing chemicals into the environment. In order to quantify toxic effects, toxicokinetics can be combined with toxicodynamics. Such toxicokinetic-toxicodynamic (TKTD) models are used in ecotoxicology (see ecotoxmodels a website on mathematical models in ecotoxicology).
Similarly, physiological toxicokinetic models are physiological pharmacokinetic models developed to describe and predict the behavior of a toxicant in an animal body; for example, what parts (compartments) of the body a chemical may tend to enter (e.g. fat, liver, spleen, etc.), and whether or not the chemical is expected to be metabolized or excreted and at what rate.
Processes
Four potential processes exist for a chemical interacting with an animal: absorption, distribution, metabolism and excretion (ADME). Absorption describes the entrance of the chemical into the body, and can occur through the air, water, food, or soil. Once a chemical is inside a body, it can be distributed to other areas of the body through diffusion or other biological processes. At this point, the chemical may undergo metabolism and be biotransformed into other chemicals (metabolites). These metabolites can be less or more toxic than the parent compound. After this potential biotransformation occurs, the metabolites may leave the body, be transformed into other compounds, or continue to be stored in the body compartmen
Document 4:::
Activation, in chemistry and biology, is the process whereby something is prepared or excited for a subsequent reaction.
Chemistry
In chemistry, "activation" refers to the reversible transition of a molecule into a nearly identical chemical or physical state, with the defining characteristic being that this resultant state exhibits an increased propensity to undergo a specified chemical reaction. Thus, activation is conceptually the opposite of protection, in which the resulting state exhibits a decreased propensity to undergo a certain reaction.
The energy of activation specifies the amount of free energy the reactants must possess (in addition to their rest energy) in order to initiate their conversion into corresponding products—that is, in order to reach the transition state for the reaction. The energy needed for activation can be quite small, and often it is provided by the natural random thermal fluctuations of the molecules themselves (i.e. without any external sources of energy).
The branch of chemistry that deals with this topic is called chemical kinetics.
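The role of activation energy in reaction rate is conventionally quantified by the Arrhenius equation, k = A·exp(−Ea/(R·T)). The excerpt does not state this equation, so treat the sketch below as standard-kinetics background; the pre-exponential factor, activation energy, and temperatures are arbitrary illustrative values.

```python
import math

# Sketch of the standard Arrhenius relation k = A * exp(-Ea / (R * T)),
# showing how activation energy (Ea) controls the rate constant.
# All numerical inputs are illustrative, not taken from the excerpt.

R = 8.314  # gas constant, J/(mol*K)

def rate_constant(A, Ea, T):
    """Arrhenius rate constant: pre-exponential A, activation energy Ea (J/mol), temperature T (K)."""
    return A * math.exp(-Ea / (R * T))

# Raising the temperature shrinks the exponential penalty, so k grows:
k300 = rate_constant(1e13, 50_000, 300.0)
k350 = rate_constant(1e13, 50_000, 350.0)
print(k350 > k300)  # True
```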
Biology
Biochemistry
In biochemistry, activation, specifically called bioactivation, is where enzymes or other biologically active molecules acquire the ability to perform their biological function, such as inactive proenzymes being converted into active enzymes that are able to catalyze their substrates' reactions into products. Bioactivation may also refer to the process where inactive prodrugs are converted into their active metabolites, or the toxication of protoxins into actual toxins.
An enzyme may be reversibly or irreversibly bioactivated. A major mechanism of irreversible bioactivation is where a piece of a protein is cut off by cleavage, producing an enzyme that will then stay active. A major mechanism of reversible bioactivation is substrate presentation where an enzyme translocates near its substrate. Another reversible reaction is where a cofactor binds to an enzyme, which then rem
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What occurs when some substances change chemically to other substances?
A. toxic reaction
B. spontaneous mutation
C. hormonal reaction
D. chemical reaction
Answer:
|
|
sciq-5307
|
multiple_choice
|
Redox reactions can always be recognized by a change in what number of two of the atoms in the reaction?
|
[
"oxidation",
"fermentation",
"precipitation",
"oxygen"
] |
A
|
Relevant Documents:
Document 0:::
Classification
Oxidoreductases are classified as EC 1 in the EC number classification of enzymes. Oxidoreductases can be further classified into 21 subclasses:
EC 1.1 includes oxidoreductases that act on the CH-OH group of donors (alcohol oxidoreductases such as methanol dehydrogenase)
EC 1.2 includes oxidoreductases that act on the aldehyde or oxo group of donors
EC 1.3 includes oxidoreductases that act on the CH-CH group of donors (CH-CH oxidore
Document 1:::
An elementary reaction is a chemical reaction in which one or more chemical species react directly to form products in a single reaction step and with a single transition state. In practice, a reaction is assumed to be elementary if no reaction intermediates have been detected or need to be postulated to describe the reaction on a molecular scale. An apparently elementary reaction may be in fact a stepwise reaction, i.e. a complicated sequence of chemical reactions, with reaction intermediates of variable lifetimes.
In a unimolecular elementary reaction, a molecule dissociates or isomerises to form the products(s)
At constant temperature, the rate of such a reaction is proportional to the concentration of the species
In a bimolecular elementary reaction, two atoms, molecules, ions or radicals, and , react together to form the product(s)
The rate of such a reaction, at constant temperature, is proportional to the product of the concentrations of the species and
The rate expression for an elementary bimolecular reaction is sometimes referred to as the Law of Mass Action as it was first proposed by Guldberg and Waage in 1864. An example of this type of reaction is a cycloaddition reaction.
This rate expression can be derived from first principles by using collision theory for ideal gases. For the case of dilute fluids equivalent results have been obtained from simple probabilistic arguments.
According to collision theory the probability of three chemical species reacting simultaneously with each other in a termolecular elementary reaction is negligible. Hence such termolecular reactions are commonly referred as non-elementary reactions and can be broken down into a more fundamental set of bimolecular reactions, in agreement with the law of mass action. It is not always possible to derive overall reaction schemes, but solutions based on rate equations are often possible in terms of steady-state or Michaelis-Menten approximations.
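The proportionalities described above (mass-action rate laws for unimolecular and bimolecular elementary reactions) can be sketched directly; the rate constants and concentrations below are illustrative values only.

```python
# Minimal sketch of mass-action rate laws for elementary reactions,
# assuming ideal kinetics with illustrative rate constants.

def unimolecular_rate(k, conc_A):
    """Rate of A -> products is proportional to [A]."""
    return k * conc_A

def bimolecular_rate(k, conc_A, conc_B):
    """Rate of A + B -> products is proportional to [A][B]."""
    return k * conc_A * conc_B

# Doubling either concentration doubles a bimolecular rate:
r1 = bimolecular_rate(0.5, 1.0, 2.0)   # 1.0
r2 = bimolecular_rate(0.5, 2.0, 2.0)   # 2.0
print(r2 / r1)  # 2.0
```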
Notes
Chemical kinetics
Phy
Document 2:::
Analysis (plural: analyses) is the process of breaking a complex topic or substance into smaller parts in order to gain a better understanding of it. The technique has been applied in the study of mathematics and logic since before Aristotle (384–322 B.C.), though analysis as a formal concept is a relatively recent development.
The word comes from the Ancient Greek ἀνάλυσις (analysis, "a breaking-up" or "an untying;" from ana- "up, throughout" and lysis "a loosening"). From it also comes the word's plural, analyses.
As a formal concept, the method has variously been ascribed to Alhazen, René Descartes (Discourse on the Method), and Galileo Galilei. It has also been ascribed to Isaac Newton, in the form of a practical method of physical discovery (which he did not name).
The converse of analysis is synthesis: putting the pieces back together again in a new or different whole.
Applications
Science
The field of chemistry uses analysis in three ways: to identify the components of a particular chemical compound (qualitative analysis), to identify the proportions of components in a mixture (quantitative analysis), and to break down chemical processes and examine chemical reactions between elements of matter. For an example of its use, analysis of the concentration of elements is important in managing a nuclear reactor, so nuclear scientists will analyze neutron activation to develop discrete measurements within vast samples. A matrix can have a considerable effect on the way a chemical analysis is conducted and the quality of its results. Analysis can be done manually or with a device.
Types of Analysis:
A) Qualitative analysis: concerned with which components are present in a given sample or compound.
Example: precipitation reaction
B) Quantitative analysis: determines the quantity of each individual component present in a given sample or compound.
Example: finding a concentration with a UV spectrophotometer.
Isotopes
Chemists can use isotope analysis to assist analysts with i
Document 3:::
The limiting reagent (or limiting reactant or limiting agent) in a chemical reaction is a reactant that is totally consumed when the chemical reaction is completed. The amount of product formed is limited by this reagent, since the reaction cannot continue without it. If one or more other reagents are present in excess of the quantities required to react with the limiting reagent, they are described as excess reagents or excess reactants (sometimes abbreviated as "xs"), or to be in abundance.
The limiting reagent must be identified in order to calculate the percentage yield of a reaction since the theoretical yield is defined as the amount of product obtained when the limiting reagent reacts completely. Given the balanced chemical equation, which describes the reaction, there are several equivalent ways to identify the limiting reagent and evaluate the excess quantities of other reagents.
Method 1: Comparison of reactant amounts
This method is most useful when there are only two reactants. One reactant (A) is chosen, and the balanced chemical equation is used to determine the amount of the other reactant (B) necessary to react with A. If the amount of B actually present exceeds the amount required, then B is in excess and A is the limiting reagent. If the amount of B present is less than required, then B is the limiting reagent.
Example for two reactants
Consider the combustion of benzene, represented by the following chemical equation:
2 C6H6(l) + 15 O2(g) -> 12 CO2(g) + 6 H2O(l)
This means that 15 moles of molecular oxygen (O2) are required to react with 2 moles of benzene (C6H6).
The amount of oxygen required for other quantities of benzene can be calculated using cross-multiplication (the rule of three). For example,
if 1.5 mol C6H6 is present, 11.25 mol O2 is required:
If in fact 18 mol O2 are present, there will be an excess of (18 - 11.25) = 6.75 mol of unreacted oxygen when all the benzene is consumed. Benzene is then the limiting reagent.
This concl
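The comparison method described above can be sketched in a few lines. The mole amounts and stoichiometric coefficients come from the benzene combustion example; the helper function itself is just an illustration of scaling each reactant by its coefficient.

```python
# Sketch of the limiting-reagent comparison method, applied to the
# worked example above: 2 C6H6 + 15 O2 -> 12 CO2 + 6 H2O.

def limiting_reagent(moles, coeffs):
    """Return the reactant whose available moles, divided by its
    stoichiometric coefficient, is smallest (i.e. runs out first)."""
    return min(moles, key=lambda sp: moles[sp] / coeffs[sp])

moles = {"C6H6": 1.5, "O2": 18.0}   # amounts from the worked example
coeffs = {"C6H6": 2, "O2": 15}      # balanced-equation coefficients

# 1.5/2 = 0.75 for benzene vs 18/15 = 1.2 for oxygen, so benzene limits:
print(limiting_reagent(moles, coeffs))  # C6H6
```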
Document 4:::
In molecular biology, biosynthesis is a multi-step, enzyme-catalyzed process where substrates are converted into more complex products in living organisms. In biosynthesis, simple compounds are modified, converted into other compounds, or joined to form macromolecules. This process often consists of metabolic pathways. Some of these biosynthetic pathways are located within a single cellular organelle, while others involve enzymes that are located within multiple cellular organelles. Examples of these biosynthetic pathways include the production of lipid membrane components and nucleotides. Biosynthesis is usually synonymous with anabolism.
The prerequisite elements for biosynthesis include: precursor compounds, chemical energy (e.g. ATP), and catalytic enzymes which may need coenzymes (e.g. NADH, NADPH). These elements create monomers, the building blocks for macromolecules. Some important biological macromolecules include: proteins, which are composed of amino acid monomers joined via peptide bonds, and DNA molecules, which are composed of nucleotides joined via phosphodiester bonds.
Properties of chemical reactions
Biosynthesis occurs due to a series of chemical reactions. For these reactions to take place, the following elements are necessary:
Precursor compounds: these compounds are the starting molecules or substrates in a reaction. These may also be viewed as the reactants in a given chemical process.
Chemical energy: chemical energy can be found in the form of high energy molecules. These molecules are required for energetically unfavorable reactions. Furthermore, the hydrolysis of these compounds drives a reaction forward. High energy molecules, such as ATP, have three phosphates. Often, the terminal phosphate is split off during hydrolysis and transferred to another molecule.
Catalysts: these may be for example metal ions or coenzymes and they catalyze a reaction by increasing the rate of the reaction and lowering the activation energy.
In the sim
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Redox reactions can always be recognized by a change in what number of two of the atoms in the reaction?
A. oxidation
B. fermentation
C. precipitation
D. oxygen
Answer:
|
|
sciq-10356
|
multiple_choice
|
A measure of how close a series of measurements are to one another is what?
|
[
"calculation",
"precision",
"density",
"reflection"
] |
B
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Test equating traditionally refers to the statistical process of determining comparable scores on different forms of an exam. It can be accomplished using either classical test theory or item response theory.
In item response theory, equating is the process of placing scores from two or more parallel test forms onto a common score scale. The result is that scores from two different test forms can be compared directly, or treated as though they came from the same test form. When the tests are not parallel, the general process is called linking. It is the process of equating the units and origins of two scales on which the abilities of students have been estimated from results on different tests. The process is analogous to equating degrees Fahrenheit with degrees Celsius by converting measurements from one scale to the other. The determination of comparable scores is a by-product of equating that results from equating the scales obtained from test results.
Purpose
Suppose that Dick and Jane both take a test to become licensed in a certain profession. Because the high stakes (you get to practice the profession if you pass the test) may create a temptation to cheat, the organization that oversees the test creates two forms. If we know that Dick scored 60% on form A and Jane scored 70% on form B, do we know for sure which one has a better grasp of the material? What if form A is composed of very difficult items, while form B is relatively easy? Equating analyses are performed to address this very issue, so that scores are as fair as possible.
Equating in item response theory
In item response theory, person "locations" (measures of some quality being assessed by a test) are estimated on an interval scale; i.e., locations are estimated in relation to a unit and origin. It is common in educational assessment to employ tests in order to assess different groups of students with the intention of establishing a common scale by equating the origins, and when appropri
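The Fahrenheit/Celsius analogy above amounts to a linear transformation between two scales. A minimal sketch follows; the form-B-to-form-A equating coefficients are purely illustrative, not from any real equating study.

```python
# Sketch of a linear scale transformation: the same idea the excerpt's
# temperature analogy uses, mapping a score on one scale to another
# via a slope and intercept.

def linear_transform(score, slope, intercept):
    """Map a value from scale B to scale A: A = slope * B + intercept."""
    return slope * score + intercept

# The temperature analogy itself: Celsius -> Fahrenheit
print(linear_transform(100, 9/5, 32))  # 212.0

# A hypothetical form-B -> form-A score conversion (made-up coefficients)
print(round(linear_transform(70, 0.95, 3.0), 2))  # 69.5
```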
Document 2:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set Q of concepts, skills, or topics. Each feasible state of knowledge is then a subset of Q; the set of
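The defining closure property of a knowledge space can be checked directly: a family of states over a finite domain forms a knowledge space when it contains the empty state and the full domain and is closed under union. A small sketch with a hypothetical three-skill domain:

```python
from itertools import combinations

def is_knowledge_space(domain, states):
    """Check whether a family of knowledge states forms a knowledge
    space: it must contain the empty state and the full domain, and
    be closed under union (the standard defining property)."""
    states = {frozenset(s) for s in states}
    if frozenset() not in states or frozenset(domain) not in states:
        return False
    return all(a | b in states for a, b in combinations(states, 2))

# Hypothetical domain of three skills, where skill "c" requires "a":
domain = {"a", "b", "c"}
states = [set(), {"a"}, {"b"}, {"a", "b"}, {"a", "c"}, {"a", "b", "c"}]
print(is_knowledge_space(domain, states))  # True
```

Note that the family above excludes {"c"} and {"b", "c"}, encoding the prerequisite that "c" cannot be learned before "a".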
Document 3:::
The Texas Math and Science Coaches Association or TMSCA is an organization for coaches of academic University Interscholastic League teams in Texas middle schools and high schools, specifically those that compete in mathematics and science-related tests.
Events
There are four events in the TMSCA at both the middle and high school level: Number Sense, General Mathematics, Calculator Applications, and General Science.
Number Sense is an 80-question exam that students are given only 10 minutes to solve. Additionally, no scratch work or paper calculations are allowed. These questions range from simple calculations such as 99+98 to more complicated operations such as 1001×1938. Each calculation is able to be done with a certain trick or shortcut that makes the calculations easier.
The high school exam includes calculus and other difficult topics, with the same rules as the middle school version.
The grading for this event is particularly stringent: errors such as writing over a line or crossing out potential answers are counted as incorrect.
General Mathematics is a 50-question exam that students are given only 40 minutes to solve. These problems are usually more challenging than questions on the Number Sense test, and the General Mathematics word problems take more thinking to figure out. Every problem correct is worth 5 points, and for every problem incorrect, 2 points are deducted. Tiebreakers are determined by the person that misses the first problem and by percent accuracy.
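The scoring rule described above translates directly into code (a sketch; skipped questions are assumed to score 0, as implied by the text):

```python
def general_math_score(correct, incorrect):
    """Score a TMSCA General Mathematics exam: +5 per correct answer,
    -2 per incorrect answer. Skipped questions (the remainder of the
    50 items) are assumed here to score 0."""
    return 5 * correct - 2 * incorrect

print(general_math_score(40, 5))  # 40 right, 5 wrong, 5 skipped -> 190
```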
Calculator Applications is an 80-question exam that students are given only 30 minutes to solve. This test requires practice on the calculator, knowledge of a few crucial formulas, and much speed and intensity. Memorizing formulas, tips, and tricks will not be enough. In this event, plenty of practice is necessary in order to master the locations of the keys and develop the speed necessary. All correct questions are worth 5
Document 4:::
The Mathematics Subject Classification (MSC) is an alphanumerical classification scheme that has collaboratively been produced by staff of, and based on the coverage of, the two major mathematical reviewing databases, Mathematical Reviews and Zentralblatt MATH. The MSC is used by many mathematics journals, which ask authors of research papers and expository articles to list subject codes from the Mathematics Subject Classification in their papers. The current version is MSC2020.
Structure
The MSC is a hierarchical scheme, with three levels of structure. A classification can be two, three or five digits long, depending on how many levels of the classification scheme are used.
The first level is represented by a two-digit number, the second by a letter, and the third by another two-digit number. For example:
53 is the classification for differential geometry
53A is the classification for classical differential geometry
53A45 is the classification for vector and tensor analysis
First level
At the top level, 64 mathematical disciplines are labeled with a unique two-digit number. In addition to the typical areas of mathematical research, there are top-level categories for "History and Biography", "Mathematics Education", and for the overlap with different sciences. Physics (i.e. mathematical physics) is particularly well represented in the classification scheme with a number of different categories including:
Fluid mechanics
Quantum mechanics
Geophysics
Optics and electromagnetic theory
All valid MSC classification codes must have at least the first-level identifier.
Second level
The second-level codes are a single letter from the Latin alphabet. These represent specific areas covered by the first-level discipline. The second-level codes vary from discipline to discipline.
For example, for differential geometry, the top-level code is 53, and the second-level codes are:
A for classical differential geometry
B for local differential geometry
C for glo
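The three-level code structure described above (e.g. 53A45) can be parsed mechanically. A small sketch (the regex and field names are my own, not an official MSC grammar):

```python
import re

def parse_msc(code):
    """Split an MSC code into its up-to-three hierarchy levels:
    two digits, an optional letter, and an optional final two digits."""
    m = re.fullmatch(r"(\d{2})([A-Z])?(\d{2})?", code)
    if not m or (m.group(3) and not m.group(2)):
        raise ValueError(f"not a valid MSC code: {code}")
    return {"discipline": m.group(1), "area": m.group(2), "topic": m.group(3)}

print(parse_msc("53A45"))
# {'discipline': '53', 'area': 'A', 'topic': '45'}
```

Two-digit codes like "53" parse with `area` and `topic` left as `None`, matching the rule that only the first-level identifier is mandatory.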
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A measure of how close a series of measurements are to one another is what?
A. calculation
B. precision
C. density
D. reflection
Answer:
|
|
sciq-8873
|
multiple_choice
|
Algae convert energy from the sun into food by means of what process?
|
[
"luminosynthesis",
"compression",
"glycolysis",
"photosynthesis"
] |
D
|
Relevant Documents:
Document 0:::
In ecology, primary production is the synthesis of organic compounds from atmospheric or aqueous carbon dioxide. It principally occurs through the process of photosynthesis, which uses light as its source of energy, but it also occurs through chemosynthesis, which uses the oxidation or reduction of inorganic chemical compounds as its source of energy. Almost all life on Earth relies directly or indirectly on primary production. The organisms responsible for primary production are known as primary producers or autotrophs, and form the base of the food chain. In terrestrial ecoregions, these are mainly plants, while in aquatic ecoregions algae predominate in this role. Ecologists distinguish primary production as either net or gross, the former accounting for losses to processes such as cellular respiration, the latter not.
Overview
Primary production is the production of chemical energy in organic compounds by living organisms. The main source of this energy is sunlight but a minute fraction of primary production is driven by lithotrophic organisms using the chemical energy of inorganic molecules. Regardless of its source, this energy is used to synthesize complex organic molecules from simpler inorganic compounds such as carbon dioxide (CO2) and water (H2O). The following two equations are simplified representations of photosynthesis (top) and (one form of) chemosynthesis (bottom):
CO2 + H2O + light → CH2O + O2
CO2 + O2 + 4 H2S → CH2O + 4 S + 3 H2O
In both cases, the end point is a polymer of reduced carbohydrate, (CH2O)n, typically molecules such as glucose or other sugars. These relatively simple molecules may be then used to further synthesise more complicated molecules, including proteins, complex carbohydrates, lipids, and nucleic acids, or be respired to perform work. Consumption of primary producers by heterotrophic organisms, such as animals, then transfers these organic molecules (and the energy stored within them) up the food web, fueling all of the Earth'
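The two simplified reactions (photosynthesis: CO2 + H2O → CH2O + O2; chemosynthesis: CO2 + O2 + 4 H2S → CH2O + 4 S + 3 H2O) can be sanity-checked by counting atoms on each side, since light contributes no atoms. A minimal sketch with a naive formula parser of my own, handling only element symbols followed by optional digits:

```python
import re
from collections import Counter

def atoms(formula, count=1):
    """Count atoms in a simple formula like 'CH2O' or 'H2S'."""
    total = Counter()
    for elem, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if elem:
            total[elem] += (int(n) if n else 1) * count
    return total

def balanced(lhs, rhs):
    """Check that both sides of a reaction have identical atom counts.
    Each side is a list of (coefficient, formula) pairs."""
    side = lambda terms: sum((atoms(f, c) for c, f in terms), Counter())
    return side(lhs) == side(rhs)

# Photosynthesis: CO2 + H2O -> CH2O + O2 (light carries no atoms)
print(balanced([(1, "CO2"), (1, "H2O")], [(1, "CH2O"), (1, "O2")]))  # True
# Chemosynthesis: CO2 + O2 + 4 H2S -> CH2O + 4 S + 3 H2O
print(balanced([(1, "CO2"), (1, "O2"), (4, "H2S")],
               [(1, "CH2O"), (4, "S"), (3, "H2O")]))  # True
```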
Document 1:::
The European Algae Biomass Association (EABA), established on 2 June 2009, is the European association representing both research and industry in the field of algae technologies.
EABA was founded during its inaugural conference on 1–2 June 2009 at Villa La Pietra in Florence. The association is headquartered in Florence, Italy.
History
The first EABA's President, Prof. Dr. Mario Tredici, served a 2-year term since his election on 2 June 2009. The EABA Vice-presidents were Mr. Claudio Rochietta, (Oxem, Italy), Prof. Patrick Sorgeloos (University of Ghent, Belgium) and Mr. Marc Van Aken (SBAE Industries, Belgium). The EABA Executive Director was Mr. Raffaello Garofalo.
EABA had 58 founding members and the EABA reached 79 members in 2011.
The last election occurred on 3 December 2018 in Amsterdam. The EABA's President is Mr. Jean-Paul Cadoret (Algama / France). The EABA Vice-presidents are Prof. Dr. Sammy Boussiba (Ben-Gurion University of the Negev / Israel), Prof. Dr. Gabriel Acien (University of Almeria / Spain) and Dr. Alexandra Mosch (Germany). The EABA General Manager is Dr. Vítor Verdelho (A4F AlgaFuel, S.A. / Portugal) and Prof. Dr. Mario Tredici (University of Florence / Italy) is elected as Honorary President.
Cooperation with other organisations
ART Fuels Forum
European Society of Biochemical Engineering Sciences
Algae Biomass Organization
Document 2:::
Bioenergetics is a field in biochemistry and cell biology that concerns energy flow through living systems. This is an active area of biological research that includes the study of the transformation of energy in living organisms and the study of thousands of different cellular processes such as cellular respiration and the many other metabolic and enzymatic processes that lead to production and utilization of energy in forms such as adenosine triphosphate (ATP) molecules. That is, the goal of bioenergetics is to describe how living organisms acquire and transform energy in order to perform biological work. The study of metabolic pathways is thus essential to bioenergetics.
Overview
Bioenergetics is the part of biochemistry concerned with the energy involved in making and breaking of chemical bonds in the molecules found in biological organisms. It can also be defined as the study of energy relationships and energy transformations and transductions in living organisms. The ability to harness energy from a variety of metabolic pathways is a property of all living organisms. Growth, development, anabolism and catabolism are some of the central processes in the study of biological organisms, because the role of energy is fundamental to such biological processes. Life is dependent on energy transformations; living organisms survive because of exchange of energy between living tissues/ cells and the outside environment. Some organisms, such as autotrophs, can acquire energy from sunlight (through photosynthesis) without needing to consume nutrients and break them down. Other organisms, like heterotrophs, must intake nutrients from food to be able to sustain energy by breaking down chemical bonds in nutrients during metabolic processes such as glycolysis and the citric acid cycle. Importantly, as a direct consequence of the First Law of Thermodynamics, autotrophs and heterotrophs participate in a universal metabolic network—by eating autotrophs (plants), heterotrophs ha
Document 3:::
The Bionic Leaf is a biomimetic system that gathers solar energy via photovoltaic cells that can be stored or used in a number of different functions. Bionic leaves can be composed of both synthetic (metals, ceramics, polymers, etc.) and organic materials (bacteria), or solely made of synthetic materials. The Bionic Leaf has the potential to be implemented in communities, such as urbanized areas to provide clean air as well as providing needed clean energy.
History
In 2009 at MIT, Daniel Nocera's lab first developed the "artificial leaf", a device made from silicon and an anode electrocatalyst for the oxidation of water, capable of splitting water into hydrogen and oxygen gases. In 2012, Nocera came to Harvard and The Silver Lab of Harvard Medical School joined Nocera’s team. Together the teams expanded the existing technology to create the Bionic Leaf. It merged the concept of the artificial leaf with genetically engineered bacteria that feed on the hydrogen and convert CO2 in the air into alcohol fuels or chemicals.
The first version of the teams Bionic Leaf was created in 2015 but the catalyst used was harmful to the bacteria. In 2016, a new catalyst was designed to solve this issue, named the "Bionic Leaf 2.0". Other versions of artificial leaves have been developed by the California Institute of Technology and the Joint Center for Artificial Photosynthesis, the University of Waterloo, and the University of Cambridge.
Mechanics
Photosynthesis
In natural photosynthesis, photosynthetic organisms produce energy-rich organic molecules from water and carbon dioxide by using solar radiation. Therefore, the process of photosynthesis removes carbon dioxide, a greenhouse gas, from the air. Artificial photosynthesis, as performed by the Bionic Leaf, is approximately 10 times more efficient than natural photosynthesis. Using a catalyst, the Bionic Leaf can remove excess carbon dioxide in the air and convert that to useful alcohol fuels, like isopropanol and isobutan
Document 4:::
Wageningen UR (University & Research centre) has constructed AlgaePARC (Algae Production And Research Centre) at the Wageningen Campus. The goal of AlgaePARC is to fill the gap between fundamental research on algae and full-scale algae production facilities. This will be done by setting up flexible pilot scale facilities to perform applied research and obtain direct practical experience. It is a joined initiative of BioProcess Engineering and Food & Biobased Research of the Wageningen University.
AlgaePARC facility
AlgaePARC uses four different photobioreactors comprising 24 m2 ground surface: an open pond, two types of tubular reactors and a plastic film bioreactor, and a number of smaller systems for the testing of new technologies. This facility is unique, because it is the first facility in which the productivity of four different production systems can be compared during the year under identical conditions. At the same time, knowledge is gained for the development of new photobioreactors and the design of systems on a production scale.
For the construction of the facility 2.25 M€ has been made available by the Ministry of Agriculture, Nature and Food Quality (1.5 M€) and the Provincie Gelderland (0.75 M€).
Microalgae
Microalgae are currently seen by some as a promising source of biodiesel and chemical building blocks, which can be used in paint and plastics. Biomass from algae offers a sustainable alternative to products and fuels from the petrochemical industry. When fully developed this contributes to a biobased economy as algae help to reduce the emissions of carbon dioxide (CO2) and make the economy less dependent on fossil fuels.
AlgaePARC research
The costs of biomass produced from algae for biofuels are still ten times too high to be able to compete with today’s other fuels. Within the business community, the question being asked is how it could be produced more cheaply, making it economically viable. Companies within the energy, food, oil an
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Algae convert energy from the sun into food by means of what process?
A. luminosynthesis
B. compression
C. glycolysis
D. photosynthesis
Answer:
|
|
sciq-10894
|
multiple_choice
|
What is the name of the first leaf developed inside an embryo?
|
[
"exon",
"gastromyzon",
"polylepis",
"cotyledon"
] |
D
|
Relevant Documents:
Document 0:::
Plant embryonic development, also plant embryogenesis is a process that occurs after the fertilization of an ovule to produce a fully developed plant embryo. This is a pertinent stage in the plant life cycle that is followed by dormancy and germination. The zygote produced after fertilization must undergo various cellular divisions and differentiations to become a mature embryo. An end stage embryo has five major components including the shoot apical meristem, hypocotyl, root meristem, root cap, and cotyledons. Unlike the embryonic development in animals, and specifically in humans, plant embryonic development results in an immature form of the plant, lacking most structures like leaves, stems, and reproductive structures. However, both plants and animals including humans, pass through a phylotypic stage that evolved independently and that causes a developmental constraint limiting morphological diversification.
Morphogenic events
Embryogenesis occurs naturally as a result of single, or double fertilization, of the ovule, giving rise to two distinct structures: the plant embryo and the endosperm which go on to develop into a seed. The zygote goes through various cellular differentiations and divisions in order to produce a mature embryo. These morphogenic events form the basic cellular pattern for the development of the shoot-root body and the primary tissue layers; it also programs the regions of meristematic tissue formation. The following morphogenic events are only particular to eudicots, and not monocots.
Plant
Following fertilization, the zygote and endosperm are present within the ovule, as seen in stage I of the illustration on this page. Then the zygote undergoes an asymmetric transverse cell division that gives rise to two cells - a small apical cell resting above a large basal cell.
These two cells are very different, and give rise to different structures, establishing polarity in the embryo.
apical cell: The small apical cell is on the top and contains
Document 1:::
Important structures in plant development are buds, shoots, roots, leaves, and flowers; plants produce these tissues and structures throughout their life from meristems located at the tips of organs, or between mature tissues. Thus, a living plant always has embryonic tissues. By contrast, an animal embryo will very early produce all of the body parts that it will ever have in its life. When the animal is born (or hatches from its egg), it has all its body parts and from that point will only grow larger and more mature. However, both plants and animals pass through a phylotypic stage that evolved independently and that causes a developmental constraint limiting morphological diversification.
According to plant physiologist A. Carl Leopold, the properties of organization seen in a plant are emergent properties which are more than the sum of the individual parts. "The assembly of these tissues and functions into an integrated multicellular organism yields not only the characteristics of the separate parts and processes but also quite a new set of characteristics which would not have been predictable on the basis of examination of the separate parts."
Growth
A vascular plant begins from a single celled zygote, formed by fertilisation of an egg cell by a sperm cell. From that point, it begins to divide to form a plant embryo through the process of embryogenesis. As this happens, the resulting cells will organize so that one end becomes the first root while the other end forms the tip of the shoot. In seed plants, the embryo will develop one or more "seed leaves" (cotyledons). By the end of embryogenesis, the young plant will have all the parts necessary to begin in its life.
Once the embryo germinates from its seed or parent plant, it begins to produce additional organs (leaves, stems, and roots) through the process of organogenesis. New roots grow from root meristems located at the tip of the root, and new stems and leaves grow from shoot meristems located at the
Document 2:::
A seedling is a young sporophyte developing out of a plant embryo from a seed. Seedling development starts with germination of the seed. A typical young seedling consists of three main parts: the radicle (embryonic root), the hypocotyl (embryonic shoot), and the cotyledons (seed leaves). The two classes of flowering plants (angiosperms) are distinguished by their numbers of seed leaves: monocotyledons (monocots) have one blade-shaped cotyledon, whereas dicotyledons (dicots) possess two round cotyledons. Gymnosperms are more varied. For example, pine seedlings have up to eight cotyledons. The seedlings of some flowering plants have no cotyledons at all. These are said to be acotyledons.
The plumule is the part of a seed embryo that develops into the shoot bearing the first true leaves of a plant. In most seeds, for example the sunflower, the plumule is a small conical structure without any leaf structure. Growth of the plumule does not occur until the cotyledons have grown above ground. This is epigeal germination. However, in seeds such as the broad bean, a leaf structure is visible on the plumule in the seed. These seeds develop by the plumule growing up through the soil with the cotyledons remaining below the surface. This is known as hypogeal germination.
Photomorphogenesis and etiolation
Dicot seedlings grown in the light develop short hypocotyls and open cotyledons exposing the epicotyl. This is also referred to as photomorphogenesis. In contrast, seedlings grown in the dark develop long hypocotyls and their cotyledons remain closed around the epicotyl in an apical hook. This is referred to as skotomorphogenesis or etiolation. Etiolated seedlings are yellowish in color as chlorophyll synthesis and chloroplast development depend on light. They will open their cotyledons and turn green when treated with light.
In a natural situation, seedling development starts with skotomorphogenesis while the seedling is growing through the soil and attempting to reach the
Document 3:::
The quiescent centre is a group of cells, up to 1,000 in number, in the form of a hemisphere, with the flat face toward the root tip of vascular plants. It is a region in the apical meristem of a root where cell division proceeds very slowly or not at all, but the cells are capable of resuming meristematic activity when the tissue surrounding them is damaged.
Cells of root apical meristems do not all divide at the same rate. Determinations of relative rates of DNA synthesis show that primary roots of Zea, Vicia and Allium have quiescent centres to the meristems, in which the cells divide rarely or never in the course of normal root growth (Clowes, 1958). Such a quiescent centre includes the cells at the apices of the histogens of both stele and cortex. Its presence can be deduced from the anatomy of the apex in Zea (Clowes, 1958), but not in the other species which lack discrete histogens.
History
In 1953, during the course of analysing the organization and function of the root apices, Frederick Albert Lionel Clowes (born 10 September 1921), at the School of Botany (now Department of Plant Sciences), University of Oxford, proposed the term ‘cytogenerative centre’ to denote ‘the region of an apical meristem from which all future cells are derived’. This term had been suggested to him by Mr Harold K. Pusey, a lecturer in embryology at the Department of Zoology and Comparative Anatomy at the same university. The 1953 paper of Clowes reported results of his experiments on Fagus sylvatica and Vicia faba, in which small oblique and wedge-shaped excisions were made at the tip of the primary root, at the most distal level of the root body, near the boundary with the root cap. The results of these experiments were striking and showed that: the root which grew on following the excision was normal at the undamaged meristem side; the nonexcised meristem portion contributed to the regeneration of the excised portion; the regenerated part of the root had abnormal patterning and
Document 4:::
In botany, available space theory (also known as first available space theory) is a theory used to explain why most plants have an alternating leaf pattern on their stems. The theory states that the location of a new leaf on a stem is determined by the physical space between existing leaves. In other words, the location of a new leaf on a growing stem is directly related to the amount of space between the previous two leaves. Building on ideas first put forth by Hoffmeister in 1868, Snow and Snow hypothesized in 1947 that leaves sprouted in the first available space on the stem.
See also
Repulsion theory
Phyllotaxis
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the name of the first leaf developed inside an embryo?
A. exon
B. gastromyzon
C. polylepis
D. cotyledon
Answer:
|
|
sciq-742
|
multiple_choice
|
What is the ph level of neutral, pure water?
|
[
"five",
"six",
"seven",
"six and a half"
] |
C
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
increases
decreases
stays the same
Impossible to tell/need more information
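For the adiabatic-expansion question above, assuming a quasi-static (reversible) expansion, the ideal-gas relation T·V^(γ-1) = const shows the temperature decreases; a free (Joule) expansion is also adiabatic but leaves T unchanged, which is why precise wording matters. A minimal numeric check with illustrative values (γ = 1.4 for a diatomic gas):

```python
def adiabatic_final_temp(t1, v1, v2, gamma=1.4):
    """Reversible adiabatic process for an ideal gas:
    T1 * V1**(gamma - 1) = T2 * V2**(gamma - 1),
    so T2 = T1 * (V1 / V2)**(gamma - 1)."""
    return t1 * (v1 / v2) ** (gamma - 1)

# Doubling the volume of a diatomic gas from 300 K:
t2 = adiabatic_final_temp(300.0, 1.0, 2.0)
print(round(t2, 1))  # 227.4 K -- the temperature decreases
```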
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It is a paper-based exam and there are no computer-based versions of it. ETS places this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommend taking this exam, while others require this exam score as a part of the application to their graduate programs. ETS sends a bulletin with a sample practice test to each candidate after registration for the exam. There are 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
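Given the quoted mean (526) and standard deviation (95), a rough percentile rank can be computed under a normality assumption. This is only an approximation sketched from the figures in the text; actual ETS conversion tables differ:

```python
import math

def normal_percentile(score, mean=526.0, sd=95.0):
    """Approximate percentile rank of a scaled score, assuming the
    score distribution is roughly normal (mean/sd quoted in the text).
    Uses the standard-normal CDF expressed via the error function."""
    z = (score - mean) / sd
    return 100.0 * 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

# The reported maximum of 760 indeed lands near the 99th percentile:
print(round(normal_percentile(760)))  # 99
```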
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 2:::
The SAT Subject Test in Biology was a one-hour multiple-choice test on biology given by the College Board. A student chose whether to take the test depending upon the college entrance requirements of the schools to which the student planned to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; from 1995 until January 2005, they were known as SAT IIs. Of all the SAT Subject Tests, Biology E/M was the only one that allowed the test taker a choice between the ecological and molecular tests. A set of 60 questions was taken by all test takers, with a choice of 20 further questions from either the E or M test. The test was graded on a scale between 200 and 800. The average for Molecular was 630, while for Ecological it was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done in response to changes in college admissions caused by the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
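The scoring scheme described above translates directly into code (a sketch; the 80-item total is from the text):

```python
def sat_subject_raw_score(correct, incorrect, blank):
    """Raw score under the described scheme: +1 per correct answer,
    -1/4 per incorrect answer (five-choice items), 0 for blanks."""
    assert correct + incorrect + blank == 80  # Biology E/M had 80 items
    return correct - 0.25 * incorrect

print(sat_subject_raw_score(60, 12, 8))  # 60 - 3 = 57.0
```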
The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
Document 3:::
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices".
This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions.
Topic outline
The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area:
The course is based on and tests six skills, called scientific practices which include:
In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.
Exam
Students are allowed to use a four-function, scientific, or graphing calculator.
The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score.
Score distribution
Commonly used textbooks
Biology, AP Edition by Sylvia Mader (2012, hardcover)
Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis)
Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Jackson)
See also
Glossary of biology
A.P. Bio (TV show)
Document 4:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the pH level of neutral, pure water?
A. five
B. six
C. seven
D. six and a half
Answer:
|
|
sciq-2605
|
multiple_choice
|
What tissue blocks entry of pathogens in mammals?
|
[
"epithelial",
"pathological",
"esophageal",
"dendritic"
] |
A
|
Relevant Documents:
Document 0:::
This table lists the epithelia of different organs of the human body
Human anatomy
Document 1:::
Terminal bar is a histological term given to the unresolved group of junctional complexes that attach adjacent epithelial cells on their lateral surfaces: the zonula occludens, zonula adherens, macula adherens and macula communicans.
Using light microscopy, the terminal bar appears as a bar or spot at the apical surface of the cell, wherein the structures listed cannot be resolved. With electron microscopy, it can be visually disseminated into these structures.
The terminal bar is located on the lateral surface of epithelial cells, where the lateral surface meets the apical surface. It should not be confused with the terminal web, which is an actinous web underlying microvilli on specialized epithelial cells.
Document 2:::
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
H2.00.05.2.00001: Striated muscle tissue
H2.00.06.0.00001: Nerve tissue
H2.00.06.1.00001: Neuron
H2.00.06.2.00001: Synapse
H2.00.06.2.00001: Neuroglia
h3.01: Bones
h3.02: Joints
h3.03: Muscles
h3.04: Alimentary system
h3.05: Respiratory system
h3.06: Urinary system
h3.07: Genital system
h3.08:
Document 3:::
In a multicellular organism, an organ is a collection of tissues joined in a structural unit to serve a common function. In the hierarchy of life, an organ lies between tissue and an organ system. Tissues are formed from cells of the same type that act together in a function. Tissues of different types combine to form an organ, which has a specific function. The intestinal wall, for example, is formed by epithelial tissue and smooth muscle tissue. Two or more organs working together in the execution of a specific body function form an organ system, also called a biological system or body system.
An organ's tissues can be broadly categorized as parenchyma, the functional tissue, and stroma, the structural tissue with supportive, connective, or ancillary functions. For example, the gland's tissue that makes the hormones is the parenchyma, whereas the stroma includes the nerves that innervate the parenchyma, the blood vessels that oxygenate and nourish it and carry away its metabolic wastes, and the connective tissues that provide a suitable place for it to be situated and anchored. The main tissues that make up an organ tend to have common embryologic origins, such as arising from the same germ layer. Organs exist in most multicellular organisms. In single-celled organisms such as members of the eukaryotes, the functional analogue of an organ is known as an organelle. In plants, there are three main organs.
The number of organs in any organism depends on the definition used. By one widely adopted definition, 79 organs have been identified in the human body.
Animals
Except for placozoans, multicellular animals including humans have a variety of organ systems. These specific systems are widely studied in human anatomy. The functions of these organ systems often share significant overlap. For instance, the nervous and endocrine system both operate via a shared organ, the hypothalamus. For this reason, the two systems are combined and studied as the neuroendocrine system. The sam
Document 4:::
Outline
h1.00: Cytology
h2.00: General histology
H2.00.01.0.00001: Stem cells
H2.00.02.0.00001: Epithelial tissue
H2.00.02.0.01001: Epithelial cell
H2.00.02.0.02001: Surface epithelium
H2.00.02.0.03001: Glandular epithelium
H2.00.03.0.00001: Connective and supportive tissues
H2.00.03.0.01001: Connective tissue cells
H2.00.03.0.02001: Extracellular matrix
H2.00.03.0.03001: Fibres of connective tissues
H2.00.03.1.00001: Connective tissue proper
H2.00.03.1.01001: Ligaments
H2.00.03.2.00001: Mucoid connective tissue; Gelatinous connective tissue
H2.00.03.3.00001: Reticular tissue
H2.00.03.4.00001: Adipose tissue
H2.00.03.5.00001: Cartilage tissue
H2.00.03.6.00001: Chondroid tissue
H2.00.03.7.00001: Bone tissue; Osseous tissue
H2.00.04.0.00001: Haemotolymphoid complex
H2.00.04.1.00001: Blood cells
H2.00.04.1.01001: Erythrocyte; Red blood cell
H2.00.04.1.02001: Leucocyte; White blood cell
H2.00.04.1.03001: Platelet; Thrombocyte
H2.00.04.2.00001: Plasma
H2.00.04.3.00001: Blood cell production
H2.00.04.4.00001: Postnatal sites of haematopoiesis
H2.00.04.4.01001: Lymphoid tissue
H2.00.05.0.00001: Muscle tissue
H2.00.05.1.00001: Smooth muscle tissue
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What tissue blocks entry of pathogens in mammals?
A. epithelial
B. pathological
C. esophageal
D. dendritic
Answer:
|
|
sciq-5432
|
multiple_choice
|
A geiger counter is used for detecting what?
|
[
"mutation",
"radiation",
"pressure",
"convection"
] |
B
|
Relavent Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
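For reference, the intended physics (assuming a quasi-static expansion in which the gas does work on its surroundings) follows from the standard adiabatic relation for an ideal gas with $\gamma = C_p/C_V > 1$:

```latex
% Reversible adiabatic process, ideal gas:
\[
T_1 V_1^{\gamma-1} = T_2 V_2^{\gamma-1}
\quad\Rightarrow\quad
T_2 = T_1 \left(\frac{V_1}{V_2}\right)^{\gamma-1} < T_1
\ \text{when}\ V_2 > V_1 .
\]
```

So the temperature decreases; in an unresisted (Joule) free expansion, by contrast, the temperature of an ideal gas is unchanged.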
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
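The feasibility structure described above can be made concrete. A knowledge space is standardly formalized as a family of subsets of the skill domain that contains the empty state and the full domain and is closed under union (an antimatroid additionally requires that every nonempty state be reachable one skill at a time). A minimal membership check, with illustrative names:

```python
from itertools import combinations

def is_knowledge_space(domain: frozenset, states: set) -> bool:
    """Check the defining properties: the empty state and the full
    domain are feasible, and feasible states are closed under union."""
    if frozenset() not in states or domain not in states:
        return False
    return all(a | b in states for a, b in combinations(states, 2))

# Toy domain of three skills where 'b' has 'a' as a prerequisite:
Q = frozenset("abc")
K = {frozenset(), frozenset("a"), frozenset("c"),
     frozenset("ab"), frozenset("ac"), frozenset("abc")}
print(is_knowledge_space(Q, K))  # True
```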
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of
Document 2:::
Geiger counter is a colloquial name for any hand-held radiation measuring device in civil defense, but most civil defense devices were ion-chamber radiological survey meters capable of measuring only high levels of radiation that would be present after a major nuclear event.
Most Geiger and ion-chamber survey meters were issued by governmental civil defense organizations in several countries from the 1950s in the midst of the Cold War in an effort to help prepare citizens for a nuclear attack.
Many of these same instruments are still in use today by some states, Texas amongst them, under the jurisdiction of the Texas Bureau of Radiation Control. They are regularly maintained, calibrated and deployed to fire departments and other emergency services.
US models
CD Counters came in a variety of different models, each with specific capabilities. Each of these models has an analog meter from 1 to 5, with 1/10 tick marks. Thus, at X10, the meter reads from 1 to 50.
CD meters were produced by a number of different firms under contract. Victoreen, Lionel, Electro Neutronics, Nuclear Measurements, Chatham Electronics, International Pump and Machine Works, Universal Atomics, Anton Electronic Laboratories; Landers, Frary, & Clark; El Tronics, Jordan, and Nuclear Chicago are among the many manufacturers contracted.
Regardless of producer, most counters exhibit the same basic physical characteristics, albeit with slight variations between some production runs: a yellow case with black knobs and meter bezels. Most US meters had a "CD" sticker on the side of the case.
True Geiger counters
These are instruments which use the Geiger principle of detection.
Type CD V-700
The CD V-700 is a Geiger counter employing a probe equipped with a Geiger–Müller tube manufactured by several companies under contract to US federal civil defense agencies in the 1950s and 1960s. This unit is quite sensitive and can be used to measure low levels of gamma radiation and detect beta radiation.
Document 3:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam with no computer-based version. ETS administered the exam three times per year: once in April, once in October, and once in November. Some graduate programs in the United States recommended taking this exam, while others required the score as part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions on the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 4:::
Computerized adaptive testing (CAT) is a form of computer-based test that adapts to the examinee's ability level. For this reason, it has also been called tailored testing. In other words, it is a form of computer-administered test in which the next item or set of items selected to be administered depends on the correctness of the test taker's responses to the most recent items administered.
How it works
CAT successively selects questions for the purpose of maximizing the precision of the exam based on what is known about the examinee from previous questions. From the examinee's perspective, the difficulty of the exam seems to tailor itself to their level of ability. For example, if an examinee performs well on an item of intermediate difficulty, they will then be presented with a more difficult question. Or, if they performed poorly, they would be presented with a simpler question. Compared to static tests that nearly everyone has experienced, with a fixed set of items administered to all examinees, computer-adaptive tests require fewer test items to arrive at equally accurate scores.
The basic computer-adaptive testing method is an iterative algorithm with the following steps:
The pool of available items is searched for the optimal item, based on the current estimate of the examinee's ability
The chosen item is presented to the examinee, who then answers it correctly or incorrectly
The ability estimate is updated, based on all prior answers
Steps 1–3 are repeated until a termination criterion is met
Nothing is known about the examinee prior to the administration of the first item, so the algorithm is generally started by selecting an item of medium, or medium-easy, difficulty as the first item.
As a result of adaptive administration, different examinees receive quite different tests. Although examinees are typically administered different tests, their ability scores are comparable to one another (i.e., as if they had received the same test, as is common
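The iterative loop described in the steps above can be sketched as a toy 1PL-style (Rasch) selector. All names are illustrative, and the fixed-step ability update is a deliberate simplification of the maximum-likelihood or Bayesian estimation used in real CAT systems:

```python
def select_item(bank, theta, answered):
    """Under a 1PL (Rasch) model, item information peaks when the item
    difficulty is closest to the current ability estimate theta."""
    return min((b for b in bank if b not in answered),
               key=lambda b: abs(b - theta))

def update_theta(theta, correct, step=0.5):
    """Crude fixed-step update: move toward harder items after a correct
    answer, easier items after an incorrect one."""
    return theta + step if correct else theta - step

bank = [-2.0, -1.0, 0.0, 1.0, 2.0]    # item difficulties
true_theta, theta, answered = 1.0, 0.0, set()
for _ in range(3):
    b = select_item(bank, theta, answered)
    answered.add(b)
    correct = b <= true_theta          # deterministic stand-in for a response model
    theta = update_theta(theta, correct)
print(round(theta, 2))                 # 0.5 after correct, correct, incorrect
```

Note how the first item is of medium difficulty (0.0, closest to the starting estimate), matching the initialization described above.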
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
A geiger counter is used for detecting what?
A. mutation
B. radiation
C. pressure
D. convection
Answer:
|
|
scienceQA-875
|
multiple_choice
|
Select the fish below.
|
[
"cane toad",
"salmon",
"water buffalo",
"harbor seal"
] |
B
|
A salmon is a fish. It lives underwater. It has fins, not limbs.
Unlike most other fish, salmon can live in both fresh water and salt water.
A harbor seal is a mammal. It has fur and feeds its young milk.
Seals have flippers instead of arms! They use their flippers to swim underwater or to crawl on the beach.
A water buffalo is a mammal. It has hair and feeds its young milk.
Water buffaloes live in Asia. Some people raise water buffaloes for their milk.
A cane toad is an amphibian. It has moist skin and begins its life in water.
Toads do not have teeth! They swallow their food whole.
|
Relevant Documents:
Document 0:::
Fish intelligence is "the resultant of the process of acquiring, storing in memory, retrieving, combining, comparing, and using in new contexts information and conceptual skills" as it applies to fish.
According to Culum Brown from Macquarie University, "Fish are more intelligent than they appear. In many areas, such as memory, their cognitive powers match or exceed those of ‘higher’ vertebrates including non-human primates."
Fish hold records for the relative brain weights of vertebrates. Most vertebrate species have similar brain-to-body mass ratios. The deep sea bathypelagic bony-eared assfish has the smallest ratio of all known vertebrates. At the other extreme, the electrogenic elephantnose fish, an African freshwater fish, has one of the largest brain-to-body weight ratios of all known vertebrates (slightly higher than humans) and the highest brain-to-body oxygen consumption ratio of all known vertebrates (three times that for humans).
Brain
Fish typically have quite small brains relative to body size compared with other vertebrates, typically one-fifteenth the brain mass of a similarly sized bird or mammal. However, some fish have relatively large brains, most notably mormyrids and sharks, which have brains about as massive relative to body weight as birds and marsupials.
The cerebellum of cartilaginous and bony fishes is large and complex. In at least one important respect, it differs in internal structure from the mammalian cerebellum: The fish cerebellum does not contain discrete deep cerebellar nuclei. Instead, the primary targets of Purkinje cells are a distinct type of cell distributed across the cerebellar cortex, a type not seen in mammals. The circuits in the cerebellum are similar across all classes of vertebrates, including fish, reptiles, birds, and mammals. There is also an analogous brain structure in cephalopods with well-developed brains, such as octopuses. This has been taken as evidence that the cerebellum performs functions important to
Document 1:::
The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas.
The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014.
The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools.
See also
Marine Science
Ministry of Fisheries and Aquatic Resources Development
Document 2:::
Fisheries science is the academic discipline of managing and understanding fisheries. It is a multidisciplinary science, which draws on the disciplines of limnology, oceanography, freshwater biology, marine biology, meteorology, conservation, ecology, population dynamics, economics, statistics, decision analysis, management, and many others in an attempt to provide an integrated picture of fisheries. In some cases new disciplines have emerged, as in the case of bioeconomics and fisheries law. Because fisheries science is such an all-encompassing field, fisheries scientists often use methods from a broad array of academic disciplines. Over the most recent several decades, there have been declines in fish stocks (populations) in many regions along with increasing concern about the impact of intensive fishing on marine and freshwater biodiversity.
Fisheries science is typically taught in a university setting, and can be the focus of an undergraduate, master's or Ph.D. program. Some universities offer fully integrated programs in fisheries science. Graduates of university fisheries programs typically find employment as scientists, fisheries managers of both recreational and commercial fisheries, researchers, aquaculturists, educators, environmental consultants and planners, conservation officers, and many others.
Fisheries research
Because fisheries take place in a diverse set of aquatic environments (i.e., high seas, coastal areas, large and small rivers, and lakes of all sizes), research requires different sampling equipment, tools, and techniques. For example, studying trout populations inhabiting mountain lakes requires a very different set of sampling tools than, say, studying salmon in the high seas. Ocean fisheries research vessels (FRVs) often require platforms which are capable of towing different types of fishing nets, collecting plankton or water samples from a range of depths, and carrying acoustic fish-finding equipment. Fisheries research vessels a
Document 3:::
The Digital Fish Library (DFL) is a University of California San Diego project funded by the Biological Infrastructure Initiative (DBI) of the National Science Foundation (NSF). The DFL creates 2D and 3D visualizations of the internal and external anatomy of fish obtained with magnetic resonance imaging (MRI) methods and makes these publicly available on the web.
The information core for the Digital Fish Library is generated using high-resolution MRI scanners housed at the Center for functional magnetic resonance imaging (CfMRI) multi-user facility at UC San Diego. These instruments use magnetic fields to take 3D images of animal tissues, allowing researchers to non-invasively see inside them and quantitatively describe their 3D anatomy. Fish specimens are obtained from the Marine Vertebrate Collection at Scripps Institute of Oceanography (SIO) and imaged by staff from UC San Diego's Center for Scientific Computation in Imaging (CSCI).
As of February 2010, the Digital Fish Library contains almost 300 species covering all five classes of fish, 56 of 60 orders, and close to 200 of the 521 fish families as described by Nelson, 2006. DFL imaging has also contributed to a number of published peer-reviewed scientific studies.
Digital Fish Library work has been featured in the media, including two National Geographic documentaries: Magnetic Navigator and Ultimate Shark.
Document 4:::
The Bachelor of Fisheries Science (B.F.Sc) is a bachelor's degree for studies in fisheries science in India. "Fisheries science" is the academic discipline of managing and understanding fisheries. It is a multidisciplinary science, which draws on the disciplines of aquaculture including breeding, genetics, biotechnology, nutrition, farming, diagnosis of diseases in fishes, other aquatic resources, medical treatment of aquatic animals; fish processing including curing, canning, freezing, value addition, byproducts and waste utilization, quality assurance and certification, fisheries microbiology, fisheries biochemistry; fisheries resource management including biology, anatomy, taxonomy, physiology, population dynamics; fisheries environment including oceanography, limnology, ecology, biodiversity, aquatic pollution; fishing technology including gear and craft engineering, navigation and seamanship, marine engines; fisheries economics and management and fisheries extension. Fisheries science is generally a 4-year course typically taught in a university setting, and can be the focus of an undergraduate, postgraduate or Ph.D. program. Bachelor level fisheries courses (B.F.Sc) were started by the state agricultural universities to make available the much needed technically competent personnel for teaching, research and development and transfer of technology in the field of fisheries science.
History
Fisheries education in India, started with the establishment of the Central Institute of Fisheries Education, Mumbai in 1961 for in service training and later the establishment of the first Fisheries College at Mangalore under the State Agricultural University (SAU) system in 1969, has grown manifold and evolved in the last four decades as a professional discipline consisting of Bachelors, Masters and Doctoral programmes in various branches of Fisheries Science. At present, 25 Fisheries Colleges offer four-year degree programme in Bachelor of Fisheries Science (B.F.Sc), whi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Select the fish below.
A. cane toad
B. salmon
C. water buffalo
D. harbor seal
Answer:
|
sciq-10311
|
multiple_choice
|
What are algae that live in colonies of hundreds of cells called?
|
[
"euglenids",
"volvox",
"rhodophyta",
"chlorophyta"
] |
B
|
Relevant Documents:
Document 0:::
Algae (, ; : alga ) is an informal term for a large and diverse group of photosynthetic, eukaryotic organisms. It is a polyphyletic grouping that includes species from multiple distinct clades. Included organisms range from unicellular microalgae, such as Chlorella, Prototheca and the diatoms, to multicellular forms, such as the giant kelp, a large brown alga which may grow up to in length. Most are aquatic and lack many of the distinct cell and tissue types, such as stomata, xylem and phloem that are found in land plants. The largest and most complex marine algae are called seaweeds, while the most complex freshwater forms are the Charophyta, a division of green algae which includes, for example, Spirogyra and stoneworts. Algae that are carried by water are plankton, specifically phytoplankton.
Algae constitute a polyphyletic group since they do not include a common ancestor, and although their plastids seem to have a single origin, from cyanobacteria, they were acquired in different ways. Green algae are examples of algae that have primary chloroplasts derived from endosymbiotic cyanobacteria. Diatoms and brown algae are examples of algae with secondary chloroplasts derived from an endosymbiotic red alga. Algae exhibit a wide range of reproductive strategies, from simple asexual cell division to complex forms of sexual reproduction.
Algae lack the various structures that characterize land plants, such as the phyllids (leaf-like structures) of bryophytes, rhizoids of non-vascular plants, and the roots, leaves, and other organs found in tracheophytes (vascular plants). Most are phototrophic, although some are mixotrophic, deriving energy both from photosynthesis and uptake of organic carbon either by osmotrophy, myzotrophy, or phagotrophy. Some unicellular species of green algae, many golden algae, euglenids, dinoflagellates, and other algae have become heterotrophs (also called colorless or apochlorotic algae), sometimes parasitic, relying entirely on external e
Document 1:::
Monosiphonous algae are algae which consist of a single row of cells with, or without, cortication.
See also
Polysiphonous
Document 2:::
Polysiphonous describes an algal branch with axial cells each surrounded by cells of the same length as the axial cells.
See also
Monosiphonous algae
Document 3:::
Phycology () is the scientific study of algae. Also known as algology, phycology is a branch of life science.
Algae are important as primary producers in aquatic ecosystems. Most algae are eukaryotic, photosynthetic organisms that live in a wet environment. They are distinguished from the higher plants by a lack of true roots, stems or leaves. They do not produce flowers. Many species are single-celled and microscopic (including phytoplankton and other microalgae); many others are multicellular to one degree or another, some of these growing to large size (for example, seaweeds such as kelp and Sargassum).
Phycology includes the study of prokaryotic forms known as blue-green algae or cyanobacteria. A number of microscopic algae also occur as symbionts in lichens.
Phycologists typically focus on either freshwater or ocean algae, and further within those areas, either diatoms or soft algae.
History of phycology
While both the ancient Greeks and Romans knew of algae, and the ancient Chinese even cultivated certain varieties as food, the scientific study of algae began in the late 18th century with the description and naming of Fucus maximus (now Ecklonia maxima) in 1757 by Pehr Osbeck. This was followed by the descriptive work of scholars such as Dawson Turner and Carl Adolph Agardh, but it was not until later in the 19th century that efforts were made by J.V. Lamouroux and William Henry Harvey to create significant groupings within the algae. Harvey has been called "the father of modern phycology" in part for his division of the algae into four major divisions based upon their pigmentation.
It was in the late 19th and early 20th century, that phycology became a recognized field of its own. Men such as Friedrich Traugott Kützing continued the descriptive work. In Japan, beginning in 1889, Kintarô Okamura not only provided detailed descriptions of Japanese coastal algae, he also provided comprehensive analysis of their distribution. Although R. K. Greville publi
Document 4:::
Eustigmatophytes are a small group (17 genera; ~107 species) of eukaryotic forms of algae that includes marine, freshwater and soil-living species.
All eustigmatophytes are unicellular, with coccoid cells and polysaccharide cell walls. Eustigmatophytes contain one or more yellow-green chloroplasts, which contain chlorophyll a and the accessory pigments violaxanthin and β-carotene. Eustigmatophyte zoids (gametes) possess a single or pair of flagella, originating from the apex of the cell. Unlike other heterokontophytes, eustigmatophyte zoids do not have typical photoreceptive organelles (or eyespots); instead an orange-red eyespot outside a chloroplast is located at the anterior end of the zoid.
Ecologically, eustigmatophytes occur as photosynthetic autotrophs across a range of systems. Most eustigmatophyte genera live in freshwater or in soil, although Nannochloropsis contains marine species of picophytoplankton (2–4 μm).
The class was erected to include some algae previously classified in the Xanthophyceae.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are algae that live in colonies of hundreds of cells called?
A. euglenids
B. volvox
C. rhodophyta
D. chlorophyta
Answer:
|
|
sciq-3298
|
multiple_choice
|
Scientific notation expresses a number as a what, times a power of 10?
|
[
"coefficient",
"function",
"fraction",
"expression"
] |
A
|
Relevant Documents:
Document 0:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 1:::
Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work.
Description
Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments.
The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics.
Specialized branches include engineering optimization and engineering statistics.
Engineering mathematics in tertiary educ
Document 2:::
Large numbers are numbers significantly larger than those typically used in everyday life (for instance in simple counting or in monetary transactions), appearing frequently in fields such as mathematics, cosmology, cryptography, and statistical mechanics. They are typically large positive integers, or more generally, large positive real numbers, but may also be other numbers in other contexts.
Googology is the study of nomenclature and properties of large numbers.
In the everyday world
Scientific notation was created to handle the wide range of values that occur in scientific study. 1.0 × 10⁹, for example, means one billion, or a 1 followed by nine zeros: 1 000 000 000. The reciprocal, 1.0 × 10⁻⁹, means one billionth, or 0.000 000 001. Writing 10⁹ instead of nine zeros saves readers the effort and hazard of counting a long series of zeros to see how large the number is. In addition to scientific (powers of 10) notation, the following examples include (short scale) systematic nomenclature of large numbers.
Examples of large numbers describing everyday real-world objects include:
The number of cells in the human body (estimated at 3.72 × 10¹³), or 37.2 trillion
The number of bits on a computer hard disk (, typically about 10¹³, 1–2 TB), or 10 trillion
The number of neuronal connections in the human brain (estimated at 10¹⁴), or 100 trillion
The Avogadro constant is the number of “elementary entities” (usually atoms or molecules) in one mole; the number of atoms in 12 grams of carbon-12 approximately , or 602.2 sextillion.
The total number of DNA base pairs within the entire biomass on Earth, as a possible approximation of global biodiversity, is estimated at (5.3 ± 3.6) × 10³⁷, or 53±36 undecillion
The mass of Earth consists of about 4 × 10⁵¹, or 4 sexdecillion, nucleons
The estimated number of atoms in the observable universe (10⁸⁰), or 100 quinvigintillion
The lower bound on the game-tree complexity of chess, also known as the “Shannon number” (estim
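The "coefficient times a power of 10" idea behind the examples above can be sketched in a few lines of Python; the helper name `to_scientific` is my own, not part of the excerpt.

```python
import math

def to_scientific(x: float) -> tuple[float, int]:
    """Split a positive number into (coefficient, exponent) with 1 <= coefficient < 10."""
    exponent = math.floor(math.log10(x))
    coefficient = x / 10 ** exponent
    return coefficient, exponent

# One billion: 1.0 x 10^9
coeff, exp = to_scientific(1_000_000_000)
print(f"{coeff:.2f} x 10^{exp}")

# Estimated cells in the human body: 3.72 x 10^13
coeff, exp = to_scientific(3.72e13)
print(f"{coeff:.2f} x 10^{exp}")
```

Python's built-in `e` format (`f"{x:.1e}"`) performs the same split when only a printable form is needed.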
Document 3:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just as in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
Document 4:::
A scientific calculator is an electronic calculator, either desktop or handheld, designed to perform calculations using basic (addition, subtraction, multiplication, division) and complex (trigonometric, hyperbolic, etc.) mathematical operations and functions. They have completely replaced slide rules and are used in both educational and professional settings.
In some areas of study scientific calculators have been replaced by graphing calculators and financial calculators which have the capabilities of a scientific calculator along with the capability to graph input data and functions.
Functions
When electronic calculators were originally marketed they normally had only four or five capabilities (addition, subtraction, multiplication, division and square root). Modern scientific calculators generally have many more capabilities than the original four or five function calculator, and the capabilities differ between manufacturers and models.
The capabilities of a modern scientific calculator include:
Scientific notation
Floating-point decimal arithmetic
Logarithmic functions, using both base 10 and base e
Trigonometric functions (some including hyperbolic trigonometry)
Exponential functions and roots beyond the square root
Quick access to constants such as pi and e
In addition, high-end scientific calculators generally include some or all of the following:
Cursor controls to edit equations and view previous calculations (some calculators such as the LCD-8310, badge engineered under both Olympia and United Office keep the number of the previous result on-screen for convenience while the new calculation is being entered.)
Hexadecimal, binary, and octal calculations, including basic Boolean mathematics
Complex numbers
Fractions calculations
Statistics and probability calculations
Programmability — see Programmable calculator
Equation solving
Matrix calculations
Calculus
Letters that can be used for spelling words or including variables into an e
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Scientific notation expresses a number as a what, times a power of 10?
A. coefficient
B. function
C. fraction
D. expression
Answer:
|
|
sciq-3454
|
multiple_choice
|
Orbiting at a fairly typical 370 kilometers, the international space station is an example of what?
|
[
"high orbit satellite",
"flagella satellite",
"alteration satellite",
"manmade satellite"
] |
D
|
Relevant Documents:
Document 0:::
Kvant-1 (; English: Quantum-I/1) (37KE) was the first module to be attached in 1987 to the Mir Core Module, which formed the core of the Soviet space station Mir. It remained attached to Mir until the entire space station was deorbited in 2001.
The Kvant-1 module contained scientific instruments for astrophysical observations and materials science experiments.
It was used to conduct research into the physics of active galaxies, quasars and neutron stars and it was uniquely positioned for studies of the Supernova SN 1987A. Furthermore, it supported biotechnology experiments in anti-viral preparations and fractions.
Some additions to Kvant-1 during its lifetime were solar arrays and the Sofora and Rapana girders.
The Kvant-1 module was based on the TKS spacecraft and was the first, experimental version of a planned series of '37K' type modules. The 37K modules featured a jettisonable TKS-E type propulsion module, also called the Functional Service Module (FSM).
The control system of Kvant-1 had been developed by NPO "Electropribor" (Kharkiv, Ukraine).
After previous engineering tests with the Salyut 6 and Salyut 7 space stations (and temporarily attached TKS-derived space station modules like Kosmos 1267, Kosmos 1443 and Kosmos 1686) it became the first space station module to be attached semi-permanently to the first modular space station in the history of space flight.
Kvant-1 was originally planned to be docked to the Salyut 7 space station, the plans however evolved to launch to Mir, initially considered on board the Soviet Buran space shuttle, which finally changed to a launch to Mir by the Proton-K rocket.
Background
The Kvant spacecraft represented the first use of a new kind of Soviet space station module, designated 37K. An order authorising the beginning of development was issued on 17 September 1979. The basic 37K design consisted of a 4.2 m diameter pressurised cylinder with a docking port at the forward end. It was not equipped with its own propuls
Document 1:::
In celestial mechanics, the term stationary orbit refers to an orbit around a planet or moon where the orbiting satellite or spacecraft remains orbiting over the same spot on the surface. From the ground, the satellite would appear to be standing still, hovering above the surface in the same spot, day after day.
In practice, this is accomplished by matching the rotation of the surface below: the satellite is placed in an equatorial orbit at the particular altitude where its orbital period matches the body's rotation. If the orbital speed slowly decays, an additional boost is needed to bring it back up to the matching speed; conversely, a retro-rocket can be fired to slow the satellite when it is moving too fast.
The stationary-orbit region of space is known as the Clarke Belt, named after British science fiction writer Arthur C. Clarke, who published the idea in Wireless World magazine in 1945. A stationary orbit is sometimes referred to as a "fixed orbit".
Stationary Earth orbit
Around the Earth, stationary satellites orbit at altitudes of approximately . Writing in 1945, the science-fiction author Arthur C. Clarke imagined communications satellites as travelling in stationary orbits, where those satellites would travel around the Earth at the same speed the globe is spinning, making them hover stationary over one spot on the Earth's surface.
A satellite being propelled into place, into a stationary orbit, is first fired to a special equatorial orbit called a "geostationary transfer orbit" (GTO). Within this oval-shaped (elliptical) orbit, the satellite will alternately swing out to high and then back down to an altitude of only above the Earth (223 times closer). Then, at a planned time and place, an attached "kick motor" will push the satellite out to maintain an even, circular orbit at the 22,300-mile altitude.
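The stationary-orbit altitude discussed above follows from Kepler's third law: a circular orbit whose period equals one sidereal day. A minimal sketch, using standard textbook constants that are not taken from this excerpt:

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter GM, m^3/s^2
SIDEREAL_DAY = 86164.0905   # Earth's rotation period, s
EARTH_RADIUS_KM = 6378.137  # equatorial radius, km

# Kepler's third law for a circular orbit: r^3 = mu * T^2 / (4 * pi^2)
r_m = (MU_EARTH * SIDEREAL_DAY ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
altitude_km = r_m / 1000 - EARTH_RADIUS_KM
print(f"geostationary altitude ~ {altitude_km:,.0f} km")  # ~ 35,786 km
```

The result (~35,786 km, roughly 22,300 miles) matches the Clarke-belt altitude given in the passage.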
Stationary Mars orbit
An areostationary orbit or areosynchronous equatorial orbit (abbreviated AEO) is a circular areosynchronous orbit in the Martian equatorial plan
Document 2:::
The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. Primary school children can start to have an interest in these subjects, leading secondary school pupils to choose science A levels, which will lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET have around 30,000 ambassadors across the UK. these come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics
Document 3:::
Electrodynamic tethers (EDTs) are long conducting wires, such as one deployed from a tether satellite, which can operate on electromagnetic principles as generators, by converting their kinetic energy to electrical energy, or as motors, converting electrical energy to kinetic energy. Electric potential is generated across a conductive tether by its motion through a planet's magnetic field.
A number of missions have demonstrated electrodynamic tethers in space, most notably the TSS-1, TSS-1R, and Plasma Motor Generator (PMG) experiments.
Tether propulsion
As part of a tether propulsion system, craft can use long, strong conductors (though not all tethers are conductive) to change the orbits of spacecraft. It has the potential to make space travel significantly cheaper. When direct current is applied to the tether, it exerts a Lorentz force against the magnetic field, and the tether exerts a force on the vehicle. It can be used either to accelerate or brake an orbiting spacecraft.
In 2012 Star Technology and Research was awarded a $1.9 million contract to qualify a tether propulsion system for orbital debris removal.
Uses for ED tethers
Over the years, numerous applications for electrodynamic tethers have been identified for potential use in industry, government, and scientific exploration. The table below is a summary of some of the potential applications proposed thus far. Some of these applications are general concepts, while others are well-defined systems. Many of these concepts overlap into other areas; however, they are simply placed under the most appropriate heading for the purposes of this table. All of the applications mentioned in the table are elaborated upon in the Tethers Handbook. Three fundamental concepts that tethers possess, are gravity gradients, momentum exchange, and electrodynamics. Potential tether applications can be seen below:
ISS reboost
EDT has been proposed to maintain the ISS orbit and save the expense of chemical propellant re
Document 4:::
The International Geophysical Year (IGY; ), also referred to as the third International Polar Year, was an international scientific project that lasted from 1 July 1957 to 31 December 1958. It marked the end of a long period during the Cold War when scientific interchange between East and West had been seriously interrupted. Sixty-seven countries participated in IGY projects, although one notable exception was the mainland People's Republic of China, which was protesting against the participation of the Republic of China (Taiwan). East and West agreed to nominate the Belgian Marcel Nicolet as secretary general of the associated international organization.
The IGY encompassed eleven Earth sciences: aurora and airglow, cosmic rays, geomagnetism, gravity, ionospheric physics, longitude and latitude determinations (precision mapping), meteorology, oceanography, seismology, and solar activity. The timing of the IGY was particularly suited for studying some of these phenomena, since it covered the peak of solar cycle 19.
Both the Soviet Union and the U.S. launched artificial satellites for this event; the Soviet Union's Sputnik 1, launched on October 4, 1957, was the first successful artificial satellite. Other significant achievements of the IGY included the discovery of the Van Allen radiation belts by Explorer 1 and the defining of mid-ocean submarine ridges, an important confirmation of plate-tectonic theory.
Events
The origin of the International Geophysical Year can be traced to the International Polar Years held in 1882–1883, then in 1932–1933 and most recently from March 2007 to March 2009. On 5 April 1950, several top scientists (including Lloyd Berkner, Sydney Chapman, S. Fred Singer, and Harry Vestine), met in James Van Allen's living room and suggested that the time was ripe to have a worldwide Geophysical Year instead of a Polar Year, especially considering recent advances in rocketry, radar, and computing. Berkner and Chapman proposed to the Internationa
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Orbiting at a fairly typical 370 kilometers, the international space station is an example of what?
A. high orbit satellite
B. flagella satellite
C. alteration satellite
D. manmade satellite
Answer:
|
|
sciq-8369
|
multiple_choice
|
The elements are arranged in rows, each representing the filling of what shell?
|
[
"electron",
"plasma membrane",
"neutron",
"proton"
] |
A
|
Relevant Documents:
Document 0:::
In chemistry and atomic physics, an electron shell may be thought of as an orbit that electrons follow around an atom's nucleus. The closest shell to the nucleus is called the "1 shell" (also called the "K shell"), followed by the "2 shell" (or "L shell"), then the "3 shell" (or "M shell"), and so on farther and farther from the nucleus. The shells correspond to the principal quantum numbers (n = 1, 2, 3, 4 ...) or are labeled alphabetically with the letters used in X-ray notation (K, L, M, ...). A useful guide when understanding electron shells in atoms is to note that each row on the conventional periodic table of elements represents an electron shell.
Each shell can contain only a fixed number of electrons: the first shell can hold up to two electrons, the second shell can hold up to eight (2 + 6) electrons, the third shell can hold up to 18 (2 + 6 + 10) and so on. The general formula is that the nth shell can in principle hold up to 2n² electrons. For an explanation of why electrons exist in these shells, see electron configuration.
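The 2n² capacity rule above can be checked with a short Python sketch (the helper name is illustrative, not from the excerpt):

```python
def shell_capacity(n: int) -> int:
    """Maximum electrons the nth shell (K, L, M, ...) can hold in principle: 2n^2."""
    return 2 * n * n

# Capacities of the first four shells
capacities = [shell_capacity(n) for n in range(1, 5)]
print(capacities)  # [2, 8, 18, 32]
```

The first three values reproduce the 2, 8, and 18 quoted in the passage.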
Each shell consists of one or more subshells, and each subshell consists of one or more atomic orbitals.
History
In 1913 Bohr proposed a model of the atom, giving the arrangement of electrons in their sequential orbits. At that time, Bohr allowed the capacity of the inner orbit of the atom to increase to eight electrons as the atoms got larger, and "in the scheme given below the number of electrons in this [outer] ring is arbitrary put equal to the normal valency of the corresponding element." Using these and other constraints, he proposed configurations that are in accord with those now known only for the first six elements. "From the above we are led to the following possible scheme for the arrangement of the electrons in light atoms:"
The shell terminology comes from Arnold Sommerfeld's modification of the 1913 Bohr model. During this period Bohr was working with Walther Kossel, whose papers in 1914 and in 1916 called the or
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature:
increases
decreases
stays the same
Impossible to tell/need more information
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
In mathematical psychology and education theory, a knowledge space is a combinatorial structure used to formulate mathematical models describing the progression of a human learner. Knowledge spaces were introduced in 1985 by Jean-Paul Doignon and Jean-Claude Falmagne, and remain in extensive use in the education theory. Modern applications include two computerized tutoring systems, ALEKS and the defunct RATH.
Formally, a knowledge space assumes that a domain of knowledge is a collection of concepts or skills, each of which must be eventually mastered. Not all concepts are interchangeable; some require other concepts as prerequisites. Conversely, competency at one skill may ease the acquisition of another through similarity. A knowledge space marks out which collections of skills are feasible: they can be learned without mastering any other skills. Under reasonable assumptions, the collection of feasible competencies forms the mathematical structure known as an antimatroid.
Researchers and educators usually explore the structure of a discipline's knowledge space as a latent class model.
Motivation
Knowledge Space Theory attempts to address shortcomings of standardized testing when used in educational psychometry. Common tests, such as the SAT and ACT, compress a student's knowledge into a very small range of ordinal ranks, in the process effacing the conceptual dependencies between questions. Consequently, the tests cannot distinguish between true understanding and guesses, nor can they identify a student's particular weaknesses, only the general proportion of skills mastered. The goal of knowledge space theory is to provide a language by which exams can communicate
What the student can do and
What the student is ready to learn.
Model structure
Knowledge Space Theory-based models presume that an educational subject can be modeled as a finite set of concepts, skills, or topics. Each feasible state of knowledge about is then a subset of ; the set of
Document 3:::
Advanced Placement (AP) Physics C: Electricity and Magnetism (also known as AP Physics C: E&M or AP E&M) is an introductory physics course administered by the College Board as part of its Advanced Placement program. It is intended to proxy a second-semester calculus-based university course in electricity and magnetism. The content of Physics C: E&M overlaps with that of AP Physics 2, but Physics 2 is algebra-based and covers other topics outside of electromagnetism, while Physics C is calculus-based and only covers electromagnetism. Physics C: E&M may be combined with its mechanics counterpart to form a year-long course that prepares for both exams.
Course content
E&M is equivalent to an introductory college course in electricity and magnetism for physics or engineering majors. The course modules are:
Electrostatics
Conductors, capacitors, and dielectrics
Electric circuits
Magnetic fields
Electromagnetism.
Methods of calculus are used wherever appropriate in formulating physical principles and in applying them to physical problems. Therefore, students should have completed or be concurrently enrolled in a calculus class.
AP test
The course culminates in an optional exam for which high-performing students may receive some credit towards their college coursework, depending on the institution.
Registration
The AP examination for AP Physics C: Electricity and Magnetism is separate from the AP examination for AP Physics C: Mechanics. Before 2006, test-takers paid only once and were given the choice of taking either one or two parts of the Physics C test.
Format
The exam is typically administered on a Monday afternoon in May. The exam is configured in two categories: a 35-question multiple choice section and a 3-question free response section. Test takers are allowed to use an approved calculator during the entire exam. The test is weighted such that each section is worth half of the final score. This and AP Physics C: Mechanics are the shortest AP exams, with
Document 4:::
In nuclear physics, atomic physics, and nuclear chemistry, the nuclear shell model is a model of the atomic nucleus that uses the Pauli exclusion principle to describe the structure of nuclei in terms of energy levels. The first shell model was proposed by Dmitri Ivanenko (together with E. Gapon) in 1932. The model was developed in 1949 following independent work by several physicists, most notably Maria Goeppert Mayer and J. Hans D. Jensen, who shared half of the 1963 Nobel Prize in Physics for their contributions.
The nuclear shell model is partly analogous to the atomic shell model, which describes the arrangement of electrons in an atom, in that a filled shell results in better stability. When adding nucleons (protons and neutrons) to a nucleus, there are certain points where the binding energy of the next nucleon is significantly less than the last one. This observation, that there are specific magic numbers of nucleons (2, 8, 20, 28, 50, 82, 126) which are more tightly bound than the following higher number, is the origin of the shell model.
The shells for protons and neutrons are independent of each other. Therefore, there can exist both "magic nuclei", in which one nucleon type or the other is at a magic number, and "doubly magic nuclei", where both are. Due to some variations in orbital filling, the upper magic numbers are 126 and, speculatively, 184 for neutrons, but only 114 for protons, playing a role in the search for the so-called island of stability. Some semi-magic numbers have been found, notably Z = 40, which gives the nuclear shell filling for the various elements; 16 may also be a magic number.
In order to get these numbers, the nuclear shell model starts from an average potential with a shape somewhere between the square well and the harmonic oscillator. To this potential, a spin orbit term is added. Even so, the total perturbation does not coincide with experiment, and an empirical spin orbit coupling must be added with at le
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
The elements are arranged in rows, each representing the filling of what shell?
A. electron
B. plasma membrane
C. neutron
D. proton
Answer:
|
|
sciq-8862
|
multiple_choice
|
What do we call the force of attraction or repulsion between electrically charged particles?
|
[
"mechanical force",
"gravitational pull",
"chemical force",
"electromagnetic force"
] |
D
|
Relevant Documents:
Document 0:::
A contact force is any force that occurs as a result of two objects making contact with each other. Contact forces are ubiquitous and are responsible for most visible interactions between macroscopic collections of matter. Pushing a car or kicking a ball are some of the everyday examples where contact forces are at work. In the first case the force is continuously applied to the car by a person, while in the second case the force is delivered in a short impulse.
Contact forces are often decomposed into orthogonal components, one perpendicular to the surface(s) in contact called the normal force, and one parallel to the surface(s) in contact, called the friction force.
Not all forces are contact forces; for example, the weight of an object is the force between the object and the Earth, even though the two do not need to make contact. Gravitational forces, electrical forces and magnetic forces are body forces and can exist without contact occurring.
Origin of contact forces
The microscopic origin of contact forces is diverse. Normal force is directly a result of Pauli exclusion principle and not a true force per se: Everyday objects do not actually touch each other; rather, contact forces are the result of the interactions of the electrons at or near the surfaces of the objects. The atoms in the two surfaces cannot penetrate one another without a large investment of energy because there is no low energy state for which the electron wavefunctions from the two surfaces overlap; thus no microscopic force is needed to prevent this penetration. On the more macroscopic level, such surfaces can be treated as a single object, and two bodies do not penetrate each other due to the stability of matter, which is again a consequence of Pauli exclusion principle, but also of the fundamental forces of nature: Cracks in the bodies do not widen due to electromagnetic forces that create the chemical bonds between the atoms; the atoms themselves do not disintegrate because of the ele
Document 1:::
In physics, action at a distance is the concept that an object's motion can be affected by another object without being physically contacted (as in mechanical contact) by the other object. That is, it is the non-local interaction of objects that are separated in space. Coulomb's law and Newton's law of universal gravitation are based on action at a distance.
Historically, action at a distance was the earliest scientific model for gravity and electricity and it continues to be useful in many practical cases. In the 19th and 20th centuries, field models arose to explain these phenomena with more precision. The discovery of electrons and of special relativity led to new action-at-a-distance models providing alternatives to field theories.
Categories of action
In the study of mechanics, action at a distance is one of three fundamental actions on matter that cause motion. The other two are direct impact (elastic or inelastic collisions) and actions in a continuous medium as in fluid mechanics or solid mechanics.
Historically, physical explanations for particular phenomena have moved between these three categories over time as new models were developed.
Action at a distance and actions in a continuous medium may be easily distinguished when the medium dynamics are visible, like waves in water or in an elastic solid. In the case of electricity or gravity, there is no medium required. In the nineteenth century, criteria like the effect of actions on intervening matter, the observation of a time delay, the apparent storage of energy, or even the possibility of a plausible mechanical model for action transmission were all accepted as evidence against action at a distance. Aether theories were alternative proposals to replace apparent action-at-a-distance in gravity and electromagnetism, in terms of continuous action inside an (invisible) medium called "aether".
Roles
The concept of action at a distance acts in multiple roles in physics and it can co-exist with other mode
Document 2:::
A non-contact force is a force which acts on an object without coming physically in contact with it. The most familiar non-contact force is gravity, which confers weight. In contrast, a contact force is a force which acts on an object coming physically in contact with it.
All four known fundamental interactions are non-contact forces:
Gravity, the force of attraction that exists among all bodies that have mass. The force exerted on each body by the other through weight is proportional to the mass of the first body times the mass of the second body divided by the square of the distance between them.
Electromagnetism is the force that causes the interaction between electrically charged particles; the areas in which this happens are called electromagnetic fields. Examples of this force include: electricity, magnetism, radio waves, microwaves, infrared, visible light, X-rays and gamma rays. Electromagnetism mediates all chemical, biological, electrical and electronic processes.
Strong nuclear force: Unlike gravity and electromagnetism, the strong nuclear force is a short distance force that takes place between fundamental particles within a nucleus. It is charge independent and acts equally between a proton and a proton, a neutron and a neutron, and a proton and a neutron. The strong nuclear force is the strongest force in nature; however, its range is small (acting only over distances of the order of 10−15 m). The strong nuclear force mediates both nuclear fission and fusion reactions.
Weak nuclear force: The weak nuclear force mediates the β decay of a neutron, in which the neutron decays into a proton and in the process emits a β particle and an uncharged particle called a neutrino. As a result of mediating the β decay process, the weak nuclear force plays a key role in supernovas. Both the strong and weak forces form an important part of quantum mechanics.The Casimir effect could also be thought of as a non-contact force.
See also
Tension
Body force
Surface
Document 3:::
Coulomb's inverse-square law, or simply Coulomb's law, is an experimental law of physics that calculates the amount of force between two electrically charged particles at rest. This electric force is conventionally called electrostatic force or Coulomb force. Although the law was known earlier, it was first published in 1785 by French physicist Charles-Augustin de Coulomb, hence the name. Coulomb's law was essential to the development of the theory of electromagnetism and maybe even its starting point, as it allowed meaningful discussions of the amount of electric charge in a particle.
The law states that the magnitude, or absolute value, of the attractive or repulsive electrostatic force between two point charges is directly proportional to the product of the magnitudes of their charges and inversely proportional to the squared distance between them. Coulomb discovered that bodies with like electrical charges repel:

$$|F| = k_e \frac{|q_1 q_2|}{r^2}$$

Coulomb also showed that oppositely charged bodies attract according to an inverse-square law:

$$|F| = k_e \frac{|q_1 q_2|}{r^2}$$

Here, $k_e$ is a constant, $q_1$ and $q_2$ are the quantities of each charge, and the scalar $r$ is the distance between the charges.
The force is along the straight line joining the two charges. If the charges have the same sign, the electrostatic force between them makes them repel; if they have different signs, the force between them makes them attract.
Being an inverse-square law, the law is similar to Isaac Newton's inverse-square law of universal gravitation, but gravitational forces always make things attract, while electrostatic forces make charges attract or repel. Also, gravitational forces are much weaker than electrostatic forces. Coulomb's law can be used to derive Gauss's law, and vice versa. In the case of a single point charge at rest, the two laws are equivalent, expressing the same physical law in different ways. The law has been tested extensively, and observations have upheld the law on the scale from 10−16 m to 108 m.
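As a numerical illustration of the law (our own sketch; the function name `coulomb_force` and its sign convention are assumptions for this example, not from the article):

```python
# Coulomb's law: F = k_e * q1 * q2 / r^2 along the line joining the charges.
K_E = 8.9875517923e9  # Coulomb constant, N*m^2/C^2

def coulomb_force(q1: float, q2: float, r: float) -> float:
    """Signed Coulomb force: positive means repulsion (like signs),
    negative means attraction (opposite signs)."""
    return K_E * q1 * q2 / r**2

e = 1.602176634e-19  # elementary charge, C
print(coulomb_force(-e, -e, 1e-9) > 0)  # True: like charges repel
print(coulomb_force(e, -e, 1e-9) < 0)   # True: opposite charges attract
```

Doubling the separation divides the force by four, as the inverse-square form requires.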
History
Ancient cultures aroun
Document 4:::
In physical theories, a test particle, or test charge, is an idealized model of an object whose physical properties (usually mass, charge, or size) are assumed to be negligible except for the property being studied, which is considered to be insufficient to alter the behavior of the rest of the system. The concept of a test particle often simplifies problems, and can provide a good approximation for physical phenomena. In addition to its uses in the simplification of the dynamics of a system in particular limits, it is also used as a diagnostic in computer simulations of physical processes.
Classical gravity
The easiest case for the application of a test particle arises in Newtonian gravity. The general expression for the gravitational force between any two point masses $m_1$ and $m_2$ is:

$$\mathbf{F} = -\frac{G m_1 m_2}{|\mathbf{r}_1 - \mathbf{r}_2|^3}\,(\mathbf{r}_1 - \mathbf{r}_2),$$

where $\mathbf{r}_1$ and $\mathbf{r}_2$ represent the position of each particle in space. In the general solution for this equation, both masses rotate around their center of mass $\mathbf{R}$, in this specific case:

$$\mathbf{R} = \frac{m_1 \mathbf{r}_1 + m_2 \mathbf{r}_2}{m_1 + m_2}.$$

In the case where one of the masses is much larger than the other ($m_1 \gg m_2$), one can assume that the smaller mass moves as a test particle in a gravitational field generated by the larger mass, which does not accelerate. We can define the gravitational field as

$$\mathbf{g}(r) = -\frac{G m_1}{r^2}\,\hat{\mathbf{r}},$$

with $r$ as the distance between the massive object and the test particle, and $\hat{\mathbf{r}}$ is the unit vector in the direction going from the massive object to the test mass. Newton's second law of motion of the smaller mass reduces to

$$\mathbf{a} = \mathbf{g}(r),$$
and thus only contains one variable, for which the solution can be calculated more easily. This approach gives very good approximations for many practical problems, e.g. the orbits of satellites, whose mass is relatively small compared to that of the Earth.
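Under the same test-particle assumption, the field strength $g = GM/r^2$ can be evaluated numerically. A minimal sketch (the rounded Earth constants below are our assumptions, not values from the article):

```python
# Test-particle approximation: the field of a large mass acts on a small
# satellite whose own gravity is neglected (rounded Earth values assumed).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # mean radius of Earth, m

def field_strength(M: float, r: float) -> float:
    """Magnitude of g(r) = G*M / r^2, independent of the test mass."""
    return G * M / r**2

print(round(field_strength(M_EARTH, R_EARTH), 2))  # 9.82
```

At Earth's surface this reproduces the familiar ~9.8 m/s², regardless of the satellite's mass, which is exactly why the test-particle simplification works for orbit problems.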
Electrostatics
In simulations with electric fields the most important characteristics of a test particle is its electric charge and its mass. In this situation it is often referred to as a test charge.
Similar to the case of classical gravitation, the electric field created
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What do we call the force of attraction or repulsion between electrically charged particles?
A. mechanical force
B. gravitational pull
C. chemical force
D. electromagnetic force
Answer:
|
|
scienceQA-3058
|
multiple_choice
|
How long is the Amazon River?
|
[
"6,400 millimeters",
"6,400 meters",
"6,400 centimeters",
"6,400 kilometers"
] |
D
|
The best estimate for the length of the Amazon River is 6,400 kilometers.
6,400 millimeters, 6,400 centimeters, and 6,400 meters are all too short.
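The unit comparison behind this answer is a straight chain of metric conversions; a small illustrative sketch (not part of the original explanation):

```python
# Metric unit sanity check for the Amazon River length estimate.
length_km = 6400
length_m  = length_km * 1000   # kilometers -> meters
length_cm = length_m * 100     # meters -> centimeters
length_mm = length_m * 1000    # meters -> millimeters

print(length_m)   # 6400000
print(length_cm)  # 640000000
print(length_mm)  # 6400000000
```

Seen this way, 6,400 mm is only 6.4 m, so every option other than kilometers is far too short for a continental river.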
|
Relevant Documents:
Document 0:::
A pre-STEM program is a course of study at any two-year college that prepares a student to transfer to a four-year school to earn a bachelor's degree in a STEM field.
Overview
The concept of a pre-STEM program is being developed to address America's need for more college-trained professionals in science, technology, engineering, and mathematics (STEM). It is an innovation meant to fill a gap at community colleges that do not have 'major' degree paths that students identify with on their way to earning an associate degree. Students must complete a considerable amount of STEM coursework before transferring from a two-year school to a four-year school and earn a baccalaureate degree in a STEM field. Schools with a pre-STEM program are able to identify those students and support them with STEM-specific academic and career advising, increasing the student's chances of going on to earn a STEM baccalaureate degree in a timely fashion.
With over 50% of America's college-bound students starting their college career at public or private two-year school, and with a very small proportion of students who start college at a two-year school matriculating to and earning STEM degrees from four-year schools, pre-STEM programs have great potential for broadening participation in baccalaureate STEM studies.
Example programs
The effectiveness of pre-STEM programs is being investigated by a consortium of schools in Missouri: Moberly Area Community College, St. Charles Community College, Metropolitan Community College, and Truman State University.
A larger group of schools met at the Belknap Springs Meetings in October 2009 to discuss the challenges and opportunities presented by STEM-focused partnerships between 2-year and 4-year schools. Each program represented a two-year school and a four-year school that were trying to increase the number of people who earn a baccalaureate degree in a STEM area through various means, some of which were pre-STEM programs. Other methods includes
Document 1:::
The Bachelor of Science in Aquatic Resources and Technology (B.Sc. in AQT) (or Bachelor of Aquatic Resource) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as marine science, fisheries science, aquaculture, aquatic resource technology, food science, management, biotechnology and hydrography. Post-baccalaureate training is available in aquatic resource management and related areas.
The Department of Animal Science and Export Agriculture, at the Uva Wellassa University of Badulla, Sri Lanka, has the largest enrollment of undergraduate majors in Aquatic Resources and Technology, with about 200 students as of 2014.
The Council on Education for Aquatic Resources and Technology includes undergraduate AQT degrees in the accreditation review of Aquatic Resources and Technology programs and schools.
See also
Marine Science
Ministry of Fisheries and Aquatic Resources Development
Document 2:::
Female education in STEM refers to child and adult female representation in the educational fields of science, technology, engineering, and mathematics (STEM). In 2017, 33% of students in STEM fields were women.
The organization UNESCO has stated that this gender disparity is due to discrimination, biases, social norms and expectations that influence the quality of education women receive and the subjects they study. UNESCO also believes that having more women in STEM fields is desirable because it would help bring about sustainable development.
Current status of girls and women in STEM education
Overall trends in STEM education
Gender differences in STEM education participation are already visible in early childhood care and education in science- and math-related play, and become more pronounced at higher levels of education. Girls appear to lose interest in STEM subjects with age, particularly between early and late adolescence. This decreased interest affects participation in advanced studies at the secondary level and in higher education. Female students represent 35% of all students enrolled in STEM-related fields of study at this level globally. Differences are also observed by disciplines, with female enrollment lowest in engineering, manufacturing and construction, natural science, mathematics and statistics and ICT fields. Significant regional and country differences in female representation in STEM studies can be observed, though, suggesting the presence of contextual factors affecting girls’ and women's engagement in these fields. Women leave STEM disciplines in disproportionate numbers during their higher education studies, in their transition to the world of work and even in their career cycle.
Learning achievement in STEM education
Data on gender differences in learning achievement present a complex picture, depending on what is measured (subject, knowledge acquisition against knowledge application), the level of education/age of students, and
Document 3:::
Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on "scientific practices".
This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions.
Topic outline
The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area:
The course is based on and tests six skills, called scientific practices which include:
In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions.
Exam
Students are allowed to use a four-function, scientific, or graphing calculator.
The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score.
Score distribution
Commonly used textbooks
Biology, AP Edition by Sylvia Mader (2012, hardcover )
Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, )
Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson )
See also
Glossary of biology
A.P Bio (TV Show)
Document 4:::
Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
How long is the Amazon River?
A. 6,400 millimeters
B. 6,400 meters
C. 6,400 centimeters
D. 6,400 kilometers
Answer:
|
sciq-2678
|
multiple_choice
|
What occurs at joints?
|
[
"respiration",
"body movements",
"nothing",
"digestion"
] |
B
|
Relevant Documents:
Document 0:::
The American Society of Biomechanics (ASB) is a scholarly society that focuses on biomechanics across a variety of academic fields. It was founded in 1977 by a group of scientists and clinicians. The ASB holds an annual conference as an arena to disseminate and learn about the most recent progress in the field, to distribute awards to recognize excellent work, and to engage in public outreach to expand the impact of its members.
Conferences
The society hosts an annual conference that takes place in North America (usually USA). These conferences are periodically joint conferences held in conjunction with the International Society of Biomechanics (ISB), the North American Congress on Biomechanics (NACOB), and the World Congress of Biomechanics (WCB). The annual conference, when not partnered with another conference, receives around 700 to 800 abstract submissions per year, with attendees in approximately the same numbers. The first conference was held in 1977.
Often, work presented at these conferences achieves media attention due to the ‘public interest’ nature of the findings or that new devices are introduced there. Examples include:
the effect of tablet reading on cervical spine posture;
the squeak of the basketball shoe;
‘underwear’ to address back-pain;
recovery after exercise;
exoskeleton boots for joint pain during exercise;
how flamingos stand on one leg.
National Biomechanics Day
The ASB is instrumental in promoting National Biomechanics Day (NBD), which has received international recognition.
In New Zealand, Massey University attracted NZ$48,000 of national funding
through the Unlocking Curious Minds programme to promote National Biomechanics Day, with the aim to engage 1,100 students from lower-decile schools in an experiential learning day focused on the science of biomechanics.
It was first held in 2016 on April 7, and consisted of ‘open house’ visits from middle and high school students to biomechanics research and teaching laboratories a
Document 1:::
Kinesiology () is the scientific study of human body movement. Kinesiology addresses physiological, anatomical, biomechanical, pathological, neuropsychological principles and mechanisms of movement. Applications of kinesiology to human health include biomechanics and orthopedics; strength and conditioning; sport psychology; motor control; skill acquisition and motor learning; methods of rehabilitation, such as physical and occupational therapy; and sport and exercise physiology. Studies of human and animal motion include measures from motion tracking systems, electrophysiology of muscle and brain activity, various methods for monitoring physiological function, and other behavioral and cognitive research techniques.
Basics
Kinesiology studies the science of human movement, performance, and function by applying the fundamental sciences of Cell Biology, Molecular Biology, Chemistry, Biochemistry, Biophysics, Biomechanics, Biomathematics, Biostatistics, Anatomy, Physiology, Exercise Physiology, Pathophysiology, Neuroscience, and Nutritional science. A bachelor's degree in kinesiology can provide strong preparation for graduate study in biomedical research, as well as in professional programs, such as medicine, dentistry, physical therapy, and occupational therapy.
The term "kinesiologist" is not a licensed nor professional designation in many countries, with the notable exception of Canada. Individuals with training in this area can teach physical education, work as personal trainers and sport coaches, provide consulting services, conduct research and develop policies related to rehabilitation, human motor performance, ergonomics, and occupational health and safety. In North America, kinesiologists may study to earn a Bachelor of Science, Master of Science, or Doctorate of Philosophy degree in Kinesiology or a Bachelor of Kinesiology degree, while in Australia or New Zealand, they are often conferred an Applied Science (Human Movement) degree (or higher). Many doctor
Document 2:::
The list below describes such skeletal movements as normally are possible in particular joints of the human body. Other animals have different degrees of movement at their respective joints; this is because of differences in positions of muscles and because structures peculiar to the bodies of humans and other species block motions unsuited to their anatomies.
Arm and shoulder
Shoulder
elbow
The major muscles involved in retraction include the rhomboid major muscle, rhomboid minor muscle and trapezius muscle, whereas the major muscles involved in protraction include the serratus anterior and pectoralis minor muscles.
Sternoclavicular and acromioclavicular joints
Elbow
Wrist and fingers
Movements of the fingers
Movements of the thumb
Neck
Spine
Lower limb
Knees
Feet
The muscles tibialis anterior and tibialis posterior invert the foot. Some sources also state that the triceps surae and extensor hallucis longus invert. Inversion occurs at the subtalar joint and transverse tarsal joint.
Eversion of the foot occurs at the subtalar joint. The muscles involved in this include Fibularis longus and fibularis brevis, which are innervated by the superficial fibular nerve. Some sources also state that the fibularis tertius everts.
Dorsiflexion of the foot: The muscles involved include those of the Anterior compartment of leg, specifically tibialis anterior muscle, extensor hallucis longus muscle, extensor digitorum longus muscle, and peroneus tertius. The range of motion for dorsiflexion indicated in the literature varies from 12.2 to 18 degrees. Foot drop is a condition, that occurs when dorsiflexion is difficult for an individual who is walking.
Plantarflexion of the foot: Primary muscles for plantar flexion are situated in the Posterior compartment of leg, namely the superficial Gastrocnemius, Soleus and Plantaris (only weak participation), and the deep muscles Flexor hallucis longus, Flexor digitorum longus and Tibialis posterior. Muscles in the Lateral co
Document 3:::
Reciprocal inhibition describes the relaxation of muscles on one side of a joint to accommodate contraction on the other side. In some allied health disciplines, this is known as reflexive antagonism. The central nervous system sends a message to the agonist muscle to contract. The tension in the antagonist muscle is activated by impulses from motor neurons, causing it to relax.
Mechanics
Joints are controlled by two opposing sets of muscles called extensors and flexors, that work in synchrony for smooth movement. When a muscle spindle is stretched, the stretch reflex is activated, and the opposing muscle group must be inhibited to prevent it from working against the contraction of the homonymous muscle. This inhibition is accomplished by the actions of an inhibitor interneuron in the spinal cord.
The afferent of the muscle spindle bifurcates in the spinal cord. One branch innervates the alpha motor neuron that causes the homonymous muscle to contract, producing the reflex. The other branch innervates the inhibitory interneuron, which then innervates the alpha motor neuron that synapses onto the opposing muscle. Because the interneuron is inhibitory, it prevents the opposing alpha motor neuron from firing, thereby reducing the contraction of the opposing muscle. Without this reciprocal inhibition, both groups of muscles might contract simultaneously and work against each other.
If opposing muscles were to contract at the same time, a muscle tear can occur. This may occur during physical activities such as running, during which opposing muscles engage and disengage sequentially to produce coordinated movement. Reciprocal inhibition facilitates ease of movement and is a safeguard against injury. However, if a "misfiring" of motor neurons occurs, causing simultaneous contraction of opposing muscles, a tear can occur. For example, if the quadriceps femoris and hamstring contract simultaneously at a high intensity, the stronger muscle (traditionally the quadriceps)
Document 4:::
A (bipedal) gait cycle is the time period or sequence of events or movements during locomotion in which one foot contacts the ground to when that same foot again contacts the ground, and involves propulsion of the centre of gravity in the direction of motion. A gait cycle usually involves co-operative movements of both the left and right legs and feet. A single gait cycle is also known as a stride.
Each gait cycle or stride has two major phases:
Stance Phase, the phase during which the foot remains in contact with the ground, and the
Swing Phase, the phase during which the foot is not in contact with the ground.
Components of gait cycle
A gait cycle consists of stance phase and swing phase. Considering the number of limb supports, the stance phase spans from initial double-limb stance to single-limb stance and terminal double-limb stance. The swing phase corresponds to the single-limb stance of the opposite leg. The stance and swing phases can further be divided by seven events into seven smaller phases in which the body postures are specific. For analyzing gait cycle one foot is taken as reference and the movements of the reference foot are studied.
Phases and events
Stance Phase: Stance phase is that part of a gait cycle during which the foot remains in contact with the ground. It constitutes 60% of the gait cycle (10% for initial double-limb stance, 40% for single-limb stance and 10% for terminal double-limb stance). Stance phase consists of four events and four phases:
Initial Contact (Heel Strike): The heel of the reference foot touches the ground in front of the body. The respective knee is extended while the hip is extending from flexed position, bringing the torso to the lowest vertical position. This event marks the initiation of stance phase.
Loading Response (Foot Flat) Phase: Loading response phase begins immediately after the heel strikes the ground. In loading response phase, the weight is transferred onto the referenced leg. It is important
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What occurs at joints?
A. respiration
B. body movements
C. nothing
D. digestion
Answer:
|
|
sciq-88
|
multiple_choice
|
Most of the pathogens that cause STIs enter the body through mucous membranes of which organs?
|
[
"reproductive organs",
"stomach",
"eyes",
"kidneys"
] |
A
|
Relevant Documents:
Document 0:::
Sexually transmitted infections (STIs), also referred to as sexually transmitted diseases (STDs), are infections that are commonly spread by sexual activity, especially vaginal intercourse, anal sex and oral sex. The most prevalent STIs may be carried by a significant fraction of the human population.
Document 1:::
The vaginal epithelium is the inner lining of the vagina consisting of multiple layers of (squamous) cells. The basal membrane provides the support for the first layer of the epithelium-the basal layer. The intermediate layers lie upon the basal layer, and the superficial layer is the outermost layer of the epithelium. Anatomists have described the epithelium as consisting of as many as 40 distinct layers. The mucus found on the epithelium is secreted by the cervix and uterus. The rugae of the epithelium create an involuted surface and result in a large surface area that covers 360 cm2. This large surface area allows the trans-epithelial absorption of some medications via the vaginal route.
In the course of the reproductive cycle, the vaginal epithelium is subject to normal, cyclic changes that are influenced by estrogen: with increasing circulating levels of the hormone, there is proliferation of epithelial cells along with an increase in the number of cell layers. As cells proliferate and mature, they undergo partial cornification. Although hormone-induced changes occur in the other tissues and organs of the female reproductive system, the vaginal epithelium is more sensitive and its structure is an indicator of estrogen levels. Some Langerhans cells and melanocytes are also present in the epithelium. The epithelium of the ectocervix is contiguous with that of the vagina, possessing the same properties and function. The vaginal epithelium is divided into layers of cells, including the basal cells, the parabasal cells, the superficial squamous flat cells, and the intermediate cells. The superficial cells exfoliate continuously, and basal cells replace the superficial cells that die and slough off from the stratum corneum. Under the stratum corneum is the stratum granulosum and stratum spinosum. The cells of the vaginal epithelium retain a usually high level of glycogen compared to other epithelial tissue in the body. The surface patterns on the cells themselve
Document 2:::
This table lists the epithelia of different organs of the human body
Document 3:::
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research.
Americas
Human Biology major at Stanford University, Palo Alto (since 1970)
Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government.
Human and Social Biology (Caribbean)
Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment.
Human Biology Program at University of Toronto
The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia
BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002)
BSc (honours) Human Biology at AIIMS (New
Document 4:::
This list of related male and female reproductive organs shows how the male and female reproductive organs and the development of the reproductive system are related, sharing a common developmental path. This makes them biological homologues. These organs differentiate into the respective sex organs in males and females.
List
Internal organs
External organs
The external genitalia of both males and females have similar origins. They arise from the genital tubercle that forms anterior to the cloacal folds (proliferating mesenchymal cells around the cloacal membrane). The caudal aspect of the cloacal folds further subdivides into the posterior anal folds and the anterior urethral folds. Bilateral to the urethral fold, genital swellings (tubercles) become prominent. These structures are the future scrotum and labia majora in males and females, respectively.
The genital tubercles of an eight-week-old embryo of either sex are identical. They both have a glans area, which will go on to form the glans clitoridis (females) or glans penis (males), a urogenital fold and groove, and an anal tubercle. At around ten weeks, the external genitalia are still similar. At the base of the glans, there is a groove known as the coronal sulcus or corona glandis. It is the site of attachment of the future prepuce. Just anterior to the anal tubercle, the caudal end of the left and right urethral folds fuse to form the urethral raphe. The lateral part of the genital tubercle (called the lateral tubercle) grows longitudinally and is about the same length in either sex.
Human physiology
The male external genitalia include the penis and the scrotum. The female external genitalia include the clitoris, the labia, and the vaginal opening, which are collectively called the vulva. External genitalia vary widely in external appearance among different people.
One difference between the glans penis and the glans clitoridis is that the glans clitoridis packs nerve endings into a volume only about
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Most of the pathogens that cause STIs enter the body through mucous membranes of which organs?
A. reproductive organs
B. stomach
C. eyes
D. kidneys
Answer:
|
|
sciq-7933
|
multiple_choice
|
Plants, algae and bacteria are all examples of what type of organism?
|
[
"unicellular",
"microbes",
"skeletal",
"photosynthetic"
] |
D
|
Relevant Documents:
Document 0:::
Marine botany is the study of flowering vascular plant species and marine algae that live in shallow seawater of the open ocean and the littoral zone, along shorelines of the intertidal zone and coastal wetlands, even in low-salinity brackish water of estuaries.
It is a branch of marine biology and botany.
Marine Plant Classifications
There are five kingdoms that present-day classifications group organisms into: the Monera, Protista, Plantae, Fungi, and Animalia.
The Monera
Less than 2,000 species of bacteria occur in the marine environment out of the 100,000 species. Although this group of species is small, they play a tremendous role in energy transfer, mineral cycles, and organic turnover. The Monera differ from the four other kingdoms in that "members of the Monera have a prokaryotic cytology in which the cells lack membrane-bound organelles such as chloroplasts, mitochondria, nuclei, and complex flagella."
The bacteria can be divided into two major subkingdoms: Eubacteria and Archaebacteria.
Eubacteria
Eubacteria include the only bacteria that contain chlorophyll a. Not only that, but Eubacteria are placed in the divisions of Cyanobacteria and Prochlorophyta.
Characteristics of Eubacteria:
They do not have any membrane-bound organelles.
Most are enclosed by a cellular wall.
Archaebacteria
Archaebacteria are a type of single-cell organism and have a number of characteristics not seen in more "modern" cell types. These characteristics include:
Unique cell membrane chemistry
Unique gene transcription
Capable of methanogenesis
Differences in ribosomal RNA
Types of Archaebacteria:
Thermoproteota: Extremely heat-tolerant
"Euryarchaeota": Able to survive in very salty habitats
"Korarchaeota": The oldest lineage of archaebacteria
Archaebacteria vs. Eubacteria
While both are prokaryotic, these organisms exist in different biological domains because of how genetically different they are. Some believe archaebacteria are some of the oldest forms of lif
Document 1:::
Marine prokaryotes are marine bacteria and marine archaea. They are defined by their habitat as prokaryotes that live in marine environments, that is, in the saltwater of seas or oceans or the brackish water of coastal estuaries. All cellular life forms can be divided into prokaryotes and eukaryotes. Eukaryotes are organisms whose cells have a nucleus enclosed within membranes, whereas prokaryotes are the organisms that do not have a nucleus enclosed within a membrane. The three-domain system of classifying life adds another division: the prokaryotes are divided into two domains of life, the microscopic bacteria and the microscopic archaea, while everything else, the eukaryotes, become the third domain.
Prokaryotes play important roles in ecosystems as decomposers recycling nutrients. Some prokaryotes are pathogenic, causing disease and even death in plants and animals. Marine prokaryotes are responsible for significant levels of the photosynthesis that occurs in the ocean, as well as significant cycling of carbon and other nutrients.
Prokaryotes live throughout the biosphere. In 2018 it was estimated the total biomass of all prokaryotes on the planet was equivalent to 77 billion tonnes of carbon (77 Gt C). This is made up of 7 Gt C for archaea and 70 Gt C for bacteria. These figures can be contrasted with the estimate for the total biomass for animals on the planet, which is about 2 Gt C, and the total biomass of humans, which is 0.06 Gt C. This means archaea collectively have over 100 times the collective biomass of humans, and bacteria over 1000 times.
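The ratios stated above can be checked directly from the quoted biomass estimates; a quick arithmetic sketch (figures taken from this text):

```python
# Biomass estimates quoted above, in gigatonnes of carbon (Gt C)
archaea_gtc = 7.0
bacteria_gtc = 70.0
humans_gtc = 0.06

archaea_vs_humans = archaea_gtc / humans_gtc    # ~117, i.e. over 100 times
bacteria_vs_humans = bacteria_gtc / humans_gtc  # ~1167, i.e. over 1000 times
```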
There is no clear evidence of life on Earth during the first 600 million years of its existence. When life did arrive, it was dominated for 3,200 million years by the marine prokaryotes. More complex life, in the form of crown eukaryotes, didn't appear until the Cambrian explosion a mere 500 million years ago.
Evolution
The Earth is about 4.54 billion years old. The earliest undisputed evidence of life on Eart
Document 2:::
The bacterium, despite its simplicity, contains a well-developed cell structure which is responsible for some of its unique biological structures and pathogenicity. Many structural features are unique to bacteria and are not found among archaea or eukaryotes. Because of the simplicity of bacteria relative to larger organisms and the ease with which they can be manipulated experimentally, the cell structure of bacteria has been well studied, revealing many biochemical principles that have been subsequently applied to other organisms.
Cell morphology
Perhaps the most elemental structural property of bacteria is their morphology (shape). Typical examples include:
coccus (circle or spherical)
bacillus (rod-like)
coccobacillus (between a sphere and a rod)
spiral (corkscrew-like)
filamentous (elongated)
Cell shape is generally characteristic of a given bacterial species, but can vary depending on growth conditions. Some bacteria have complex life cycles involving the production of stalks and appendages (e.g. Caulobacter) and some produce elaborate structures bearing reproductive spores (e.g. Myxococcus, Streptomyces). Bacteria generally form distinctive cell morphologies when examined by light microscopy and distinct colony morphologies when grown on Petri plates.
Perhaps the most obvious structural characteristic of bacteria is (with some exceptions) their small size. For example, Escherichia coli cells, an "average" sized bacterium, are about 2 µm (micrometres) long and 0.5 µm in diameter, with a cell volume of 0.6–0.7 μm3. This corresponds to a wet mass of about 1 picogram (pg), assuming that the cell consists mostly of water. The dry mass of a single cell can be estimated as 23% of the wet mass, amounting to 0.2 pg. About half of the dry mass of a bacterial cell consists of carbon, and also about half of it can be attributed to proteins. Therefore, a typical fully grown 1-liter culture of Escherichia coli (at an optical density of 1.0, corresponding to c. 109
Document 3:::
Microbiology () is the scientific study of microorganisms, those being of unicellular (single-celled), multicellular (consisting of complex cells), or acellular (lacking cells). Microbiology encompasses numerous sub-disciplines including virology, bacteriology, protistology, mycology, immunology, and parasitology.
Eukaryotic microorganisms possess membrane-bound organelles and include fungi and protists, whereas prokaryotic organisms—all of which are microorganisms—are conventionally classified as lacking membrane-bound organelles and include Bacteria and Archaea. Microbiologists traditionally relied on culture, staining, and microscopy for the isolation and identification of microorganisms. However, less than 1% of the microorganisms present in common environments can be cultured in isolation using current means. With the emergence of biotechnology, microbiologists currently rely on molecular biology tools such as DNA sequence-based identification, for example, the 16S rRNA gene sequence used for bacterial identification.
Viruses have been variably classified as organisms, as they have been considered either as very simple microorganisms or very complex molecules. Prions, never considered as microorganisms, have been investigated by virologists, however, as the clinical effects traced to them were originally presumed due to chronic viral infections, virologists took a search—discovering "infectious proteins".
The existence of microorganisms was predicted many centuries before they were first observed, for example by the Jains in India and by Marcus Terentius Varro in ancient Rome. The first recorded microscope observation was of the fruiting bodies of moulds, by Robert Hooke in 1666, but the Jesuit priest Athanasius Kircher was likely the first to see microbes, which he mentioned observing in milk and putrid material in 1658. Antonie van Leeuwenhoek is considered a father of microbiology as he observed and experimented with microscopic organisms in the 1670s, us
Document 4:::
The following outline is provided as an overview of and topical guide to life forms:
A life form (also spelled life-form or lifeform) is an entity that is living, such as plants (flora), animals (fauna), and fungi (funga). It is estimated that more than 99% of all species that ever existed on Earth, amounting to over five billion species, are extinct.
Earth is the only celestial body known to harbor life forms. No form of extraterrestrial life has been discovered yet.
Archaea
Archaea – a domain of single-celled microorganisms, morphologically similar to bacteria, but they possess genes and several metabolic pathways that are more closely related to those of eukaryotes, notably the enzymes involved in transcription and translation. Many archaea are extremophiles, which means living in harsh environments, such as hot springs and salt lakes, but they have since been found in a broad range of habitats.
Thermoproteota – a phylum of the Archaea kingdom. Initially
Thermoprotei
Sulfolobales – grow in terrestrial volcanic hot springs with optimum growth occurring
Euryarchaeota – In the taxonomy of microorganisms
Haloarchaea
Halobacteriales – in taxonomy, the Halobacteriales are an order of the Halobacteria, found in water saturated or nearly saturated with salt.
Methanobacteria
Methanobacteriales – in taxonomy, an order of the class Methanobacteria.
Methanococci
Methanococcales – including Methanocaldococcus jannaschii, a thermophilic methanogenic archaeon, meaning that it thrives at high temperatures and produces methane
Methanomicrobia
Methanosarcinales – In taxonomy, the Methanosarcinales are an order of the Methanomicrobia
Methanopyri
Methanopyrales – In taxonomy, the Methanopyrales are an order of the methanopyri.
Thermococci
Thermococcales
Thermoplasmata
Thermoplasmatales – An order of aerobic, thermophilic archaea, in the kingdom
Halophiles – organisms that thrive in high salt concentrations
Ko
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Plants, algae and bacteria are all examples of what type of organism?
A. unicellular
B. microbes
C. skeletal
D. photosynthetic
Answer:
|
|
sciq-8189
|
multiple_choice
|
When the air temperature reaches the dew point, water vapor starts to do what?
|
[
"evaporate",
"germinate",
"condense",
"dissipate"
] |
C
|
Relavent Documents:
Document 0:::
The dew point of a given body of air is the temperature to which it must be cooled to become saturated with water vapor. This temperature depends on the pressure and water content of the air. When the air is cooled below the dew point, its moisture capacity is reduced and airborne water vapor will condense to form liquid water known as dew. When this occurs through the air's contact with a colder surface, dew will form on that surface.
The dew point is affected by the air's humidity. The more moisture the air contains, the higher its dew point.
When the temperature is below the freezing point of water, the dew point is called the frost point, as frost is formed via deposition rather than condensation.
In liquids, the analog to the dew point is the cloud point.
Humidity
If all the other factors influencing humidity remain constant, at ground level the relative humidity rises as the temperature falls; this is because less vapor is needed to saturate the air. In normal conditions, the dew point temperature will not be greater than the air temperature, since relative humidity typically does not exceed 100%.
In technical terms, the dew point is the temperature at which the water vapor in a sample of air at constant barometric pressure condenses into liquid water at the same rate at which it evaporates. At temperatures below the dew point, the rate of condensation will be greater than that of evaporation, forming more liquid water. The condensed water is called dew when it forms on a solid surface, or frost if it freezes. In the air, the condensed water is called either fog or a cloud, depending on its altitude when it forms. If the temperature is below the dew point, and no dew or fog forms, the vapor is called supersaturated. This can happen if there are not enough particles in the air to act as condensation nuclei.
The dew point depends on how much water vapor the air contains. If the air is very dry and has few water molecules, the dew point is low and surface
Document 1:::
Condensation is the change of the state of matter from the gas phase into the liquid phase, and is the reverse of vaporization. The word most often refers to the water cycle. It can also be defined as the change in the state of water vapor to liquid water when in contact with a liquid or solid surface or cloud condensation nuclei within the atmosphere. When the transition happens from the gaseous phase into the solid phase directly, the change is called deposition.
Initiation
Condensation is initiated by the formation of atomic/molecular clusters of that species within its gaseous volume—like rain drop or snow flake formation within clouds—or at the contact between such gaseous phase and a liquid or solid surface. In clouds, this can be catalyzed by water-nucleating proteins, produced by atmospheric microbes, which are capable of binding gaseous or liquid water molecules.
Reversibility scenarios
A few distinct reversibility scenarios emerge here with respect to the nature of the surface.
absorption into the surface of a liquid (either of the same substance or one of its solvents)—is reversible as evaporation.
adsorption (as dew droplets) onto solid surface at pressures and temperatures higher than the species' triple point—also reversible as evaporation.
adsorption onto solid surface (as supplemental layers of solid) at pressures and temperatures lower than the species' triple point—is reversible as sublimation.
Most common scenarios
Condensation commonly occurs when a vapor is cooled and/or compressed to its saturation limit when the molecular density in the gas phase reaches its maximal threshold. Vapor cooling and compressing equipment that collects condensed liquids is called a "condenser".
Measurement
Psychrometry measures the rates of condensation through evaporation into the air moisture at various atmospheric pressures and temperatures. Water is the product of its vapor condensation—condensation is the process of such phase conversion.
Applicatio
Document 2:::
Humidity is the concentration of water vapor present in the air. Water vapor, the gaseous state of water, is generally invisible to the human eye. Humidity indicates the likelihood for precipitation, dew, or fog to be present.
Humidity depends on the temperature and pressure of the system of interest. The same amount of water vapor results in higher relative humidity in cool air than warm air. A related parameter is the dew point. The amount of water vapor needed to achieve saturation increases as the temperature increases. As the temperature of a parcel of air decreases it will eventually reach the saturation point without adding or losing water mass. The amount of water vapor contained within a parcel of air can vary significantly. For example, a parcel of air near saturation may contain 28 g of water per cubic metre of air at , but only 8 g of water per cubic metre of air at .
Three primary measurements of humidity are widely employed: absolute, relative, and specific. Absolute humidity is expressed as either mass of water vapor per volume of moist air (in grams per cubic meter) or as mass of water vapor per mass of dry air (usually in grams per kilogram). Relative humidity, often expressed as a percentage, indicates a present state of absolute humidity relative to a maximum humidity given the same temperature. Specific humidity is the ratio of water vapor mass to total moist air parcel mass.
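As a small illustration of the relative-humidity definition above (the saturation value used in the example is an assumed figure, not from this text):

```python
def relative_humidity_pct(absolute_g_m3: float, saturation_g_m3: float) -> float:
    """Relative humidity (%): present absolute humidity expressed as a
    fraction of the saturation humidity at the same temperature."""
    return 100.0 * absolute_g_m3 / saturation_g_m3

# Example: 14 g/m^3 of vapor where saturation is 28 g/m^3 -> 50% RH
rh = relative_humidity_pct(14.0, 28.0)
```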
Humidity plays an important role for surface life. For animal life dependent on perspiration (sweating) to regulate internal body temperature, high humidity impairs heat exchange efficiency by reducing the rate of moisture evaporation from skin surfaces. This effect can be calculated using a heat index table, also known as a humidex.
The notion of air "holding" water vapor or being "saturated" by it is often mentioned in connection with the concept of relative humidity. This, however, is misleading—the amount of water vapor that enters (or can enter) a given space at a g
Document 3:::
At equilibrium, the relationship between water content and equilibrium relative humidity of a material can be displayed graphically by a curve, the so-called moisture sorption isotherm.
For each humidity value, a sorption isotherm indicates the corresponding water content value at a given, constant temperature. If the composition or quality of the material changes, then its sorption behaviour also changes. Because of the complexity of sorption process the isotherms cannot be determined explicitly by calculation, but must be recorded experimentally for each product.
The relationship between water content and water activity (aw) is complex. An increase in aw is usually accompanied by an increase in water content, but in a non-linear fashion. This relationship between water activity and moisture content at a given temperature is called the moisture sorption isotherm. These curves are determined experimentally and constitute the fingerprint of a food system.
BET theory (Brunauer-Emmett-Teller) provides a calculation to describe the physical adsorption of gas molecules on a solid surface. Because of the complexity of the process, these calculations are only moderately successful; however, Stephen Brunauer was able to classify sorption isotherms into five generalized shapes as shown in Figure 2. He found that Type II and Type III isotherms require highly porous materials or desiccants, with first monolayer adsorption, followed by multilayer adsorption and finally leading to capillary condensation, explaining these materials' high moisture capacity at high relative humidity.
Care must be used in extracting data from isotherms, as the representation for each axis may vary in its designation. Brunauer provided the vertical axis as moles of gas adsorbed divided by the moles of the dry material, and on the horizontal axis he used the ratio of partial pressure of the gas just over the sample, divided by its partial pressure at saturation. More modern isotherms showing the
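The BET model mentioned above has a standard closed form; a sketch (the function name and parameters are illustrative: x = p/p0 is the relative pressure, c the BET constant, v_m the monolayer capacity):

```python
def bet_adsorption(x: float, c: float, v_m: float = 1.0) -> float:
    """Amount adsorbed per the BET equation,
    v = v_m * c * x / ((1 - x) * (1 - x + c * x)), valid for 0 < x < 1."""
    return v_m * c * x / ((1.0 - x) * (1.0 - x + c * x))
```

Adsorption grows steeply as x approaches 1, consistent with the multilayer adsorption and capillary condensation described above.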
Document 4:::
In atmospheric science, equivalent temperature is the temperature of air in a parcel from which all the water vapor has been extracted by an adiabatic process.
Air contains water vapor that has been evaporated into it from liquid sources (lakes, sea, etc...). The energy needed to do that has been taken from the air. Taking a volume of air at temperature T and mixing ratio r, drying it by condensation will restore energy to the airmass. This will depend on the latent heat release as:

T_e = T + (L_v / c_p) * r

where:
L_v: latent heat of evaporation (2400 kJ/kg at 25 °C to 2600 kJ/kg at −40 °C)
c_p: specific heat at constant pressure for air (≈ 1004 J/(kg·K))
Tables exist for exact values of the last two coefficients.
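A one-line numeric sketch of the relation, using the constants listed above (example values are illustrative):

```python
def equivalent_temperature(temp_k: float, mixing_ratio: float,
                           latent_heat: float = 2.5e6,  # J/kg, mid-range L_v
                           cp: float = 1004.0) -> float:
    """T_e = T + (L_v / c_p) * r: parcel temperature after all vapour
    (mixing ratio r, kg water per kg dry air) has condensed out."""
    return temp_k + (latent_heat / cp) * mixing_ratio

# Example: 20 degC air (293.15 K) with r = 10 g/kg warms by about 25 K when dried
t_e = equivalent_temperature(293.15, 0.010)
```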
See also
Wet-bulb temperature
Potential temperature
Atmospheric thermodynamics
Equivalent potential temperature
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
When the air temperature reaches the dew point, water vapor starts to do what?
A. evaporate
B. germinate
C. condense
D. dissipate
Answer:
|
|
sciq-5132
|
multiple_choice
|
In a magnet, what are the regions called that are the strongest?
|
[
"plates",
"negatives",
"positives",
"poles"
] |
D
|
Relavent Documents:
Document 0:::
In magnetics, the maximum energy product is an important figure-of-merit for the strength of a permanent magnet material. It is often denoted (BH)max and is typically given in units of either kJ/m3 (kilojoules per cubic meter, in SI electromagnetism) or MGOe (mega-gauss-oersted, in gaussian electromagnetism). 1 MGOe is equivalent to approximately 7.96 kJ/m3.
During the 20th century, the maximum energy product of commercially available magnetic materials rose from around 1 MGOe (e.g. in KS Steel) to over 50 MGOe (in neodymium magnets). Other important permanent magnet properties include the remanence (Br) and coercivity (Hc); these quantities are also determined from the saturation loop and are related to the maximum energy product, though not directly.
Definition and significance
The maximum energy product is defined based on the magnetic hysteresis saturation loop (B–H curve), in the demagnetizing portion where the B and H fields are in opposition. It is defined as the maximal value of the product of B and H along this curve (actually, the maximum of the negative of the product, −(BH), since they have opposing signs):
Equivalently, it can be graphically defined as the area of the largest rectangle that can be drawn between the origin and the saturation demagnetization B-H curve (see figure).
The significance of (BH)max is that the volume of magnet necessary for any given application tends to be inversely proportional to (BH)max. This is illustrated by considering a simple magnetic circuit containing a permanent magnet of volume V_mag and an air gap of volume V_gap, connected to each other by a magnetic core. Suppose the goal is to reach a certain field strength in the gap. In such a situation, the total magnetic energy in the gap (volume-integrated magnetic energy density) is directly equal to half the volume-integrated −B·H in the magnet:

E_gap = −(1/2) ∫_magnet B·H dV

thus in order to achieve the desired magnetic field in the gap, the required volume of magnet can be minimized by maximizing −B·H in the magnet. By choosing a magnetic material with a high (BH)max, and also choosin
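The graphical definition above (the largest rectangle under the demagnetizing branch) can be approximated on sampled loop data; a sketch using a hypothetical linear demagnetization line (names and values are illustrative):

```python
def max_energy_product(b_vals, h_vals):
    """Largest -B*H product (J/m^3) over sampled points of the demagnetizing
    (second-quadrant) branch of a B-H loop, where H <= 0 and B >= 0."""
    return max(-b * h for b, h in zip(b_vals, h_vals))

# Hypothetical linear demagnetization line: B = B_r * (1 + H / H_c)
B_r, H_c = 1.0, 800e3                        # remanence (T), coercivity (A/m)
h = [-H_c + i * 100e3 for i in range(9)]     # H sampled from -H_c up to 0
b = [B_r * (1 + hi / H_c) for hi in h]
bh_max = max_energy_product(b, h)            # analytic optimum: B_r*H_c/4
```

For a straight demagnetization line the optimum sits at H = −H_c/2, giving (BH)max = B_r·H_c/4, which the sampled maximum reproduces here.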
Document 1:::
In electromagnetism, the magnetic moment is the magnetic strength and orientation of a magnet or other object that produces a magnetic field, expressed as a vector. Examples of objects that have magnetic moments include loops of electric current (such as electromagnets), permanent magnets, elementary particles (such as electrons), composite particles (such as protons and neutrons), various molecules, and many astronomical objects (such as many planets, some moons, stars, etc).
More precisely, the term magnetic moment normally refers to a system's magnetic dipole moment, the component of the magnetic moment that can be represented by an equivalent magnetic dipole: a magnetic north and south pole separated by a very small distance. The magnetic dipole component is sufficient for small enough magnets or for large enough distances. Higher-order terms (such as the magnetic quadrupole moment) may be needed in addition to the dipole moment for extended objects.
The magnetic dipole moment of an object determines the magnitude of torque that the object experiences in a given magnetic field. Objects with larger magnetic moments experience larger torques when the same magnetic field is applied. The strength (and direction) of this torque depends not only on the magnitude of the magnetic moment but also on its orientation relative to the direction of the magnetic field. The magnetic moment may therefore be considered to be a vector. The direction of the magnetic moment points from the south to north pole of the magnet (inside the magnet).
The magnetic field of a magnetic dipole is proportional to its magnetic dipole moment. The dipole component of an object's magnetic field is symmetric about the direction of its magnetic dipole moment, and decreases as the inverse cube of the distance from the object.
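The torque dependence described above is tau = m × B; a plain cross-product sketch (values are illustrative):

```python
def dipole_torque(m, b):
    """Torque (N*m) on a magnetic dipole moment m (A*m^2) in a field B (T):
    the vector cross product m x B, for plain 3-tuples."""
    return (m[1] * b[2] - m[2] * b[1],
            m[2] * b[0] - m[0] * b[2],
            m[0] * b[1] - m[1] * b[0])

# Moment perpendicular to the field: torque magnitude is |m||B|
tau = dipole_torque((2.0, 0.0, 0.0), (0.0, 0.0, 1.0))
# Moment aligned with the field: no torque, the equilibrium orientation
tau_aligned = dipole_torque((0.0, 0.0, 2.0), (0.0, 0.0, 1.0))
```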
Definition, units, and measurement
Definition
The magnetic moment can be defined as a vector relating the aligning torque on the object from an externally applied magnetic
Document 2:::
Biomagnetism is the phenomenon of magnetic fields produced by living organisms; it is a subset of bioelectromagnetism. In contrast, organisms' use of magnetism in navigation is magnetoception and the study of the magnetic fields' effects on organisms is magnetobiology. (The word biomagnetism has also been used loosely to include magnetobiology, further encompassing almost any combination of the words magnetism, cosmology, and biology, such as "magnetoastrobiology".)
The origin of the word biomagnetism is unclear, but seems to have appeared several hundred years ago, linked to the expression "animal magnetism". The present scientific definition took form in the 1970s, when an increasing number of researchers began to measure the magnetic fields produced by the human body. The first valid measurement was actually made in 1963, but the field of research began to expand only after a low-noise technique was developed in 1970. Today the community of biomagnetic researchers does not have a formal organization, but international conferences are held every two years, with about 600 attendees. Most conference activity centers on the MEG (magnetoencephalogram), the measurement of the magnetic field of the brain.
Prominent researchers
David Cohen
John Wikswo
Samuel Williamson
See also
Bioelectrochemistry
Human magnetism
Magnetite
Magnetocardiography
Magnetoception - sensing of magnetic fields by organisms
Magnetoelectrochemistry
Magnetoencephalography
Magnetogastrography
Magnetomyography
SQUID
Document 3:::
Magnetobiology is the study of biological effects of mainly weak static and low-frequency magnetic fields, which do not cause heating of tissues. Magnetobiological effects have unique features that obviously distinguish them from thermal effects; often they are observed for alternating magnetic fields just in separate frequency and amplitude intervals. Also, they are dependent of simultaneously present static magnetic or electric fields and their polarization.
Magnetobiology is a subset of bioelectromagnetics. Bioelectromagnetism and biomagnetism are the study of the production of electromagnetic and magnetic fields by biological organisms. The sensing of magnetic fields by organisms is known as magnetoreception.
Biological effects of weak low frequency magnetic fields, less than about 0.1 millitesla (or 1 Gauss) and 100 Hz correspondingly, constitutes a physics problem. The effects look paradoxical, for the energy quantum of these electromagnetic fields is by many orders of value less than the energy scale of an elementary chemical act. On the other hand, the field intensity is not enough to cause any appreciable heating of biological tissues or irritate nerves by the induced electric currents.
Effects
An example of a magnetobiological effect is the magnetic navigation by migrant animals by means of magnetoreception.
Many animal groups, such as certain birds, marine turtles, reptiles, amphibians and salmonid fishes, are able to detect small variations of the geomagnetic field and of its magnetic inclination to find their seasonal habitats. They are said to use an "inclination compass". Certain crustaceans, spiny lobsters, bony fish, insects and mammals have been found to use a "polarity compass", whereas in snails and cartilaginous fish the type of compass is as yet unknown. Little is known about other vertebrates and arthropods. Their perception can be on the order of tens of nanoteslas.
Magnetic intensity as a component of the navigational ‘map’ of pigeons
Document 4:::
Magnetic deviation is the error induced in a compass by local magnetic fields, which must be allowed for, along with magnetic declination, if accurate bearings are to be calculated. (More loosely, "magnetic deviation" is used by some to mean the same as "magnetic declination". This article is about the former meaning.)
Compass readings
Compasses are used to determine the direction of true North. However, the compass reading must be corrected for two effects. The first is magnetic declination or variation—the angular difference between magnetic North (the local direction of the Earth's magnetic field) and true North. The second is magnetic deviation—the angular difference between magnetic North and the compass needle due to nearby sources of interference such as magnetically permeable bodies, or other magnetic fields within the field of influence.
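As a minimal illustration of how the two corrections combine, the sketch below converts a compass bearing to a true bearing. The sign convention (easterly errors positive, westerly negative) and the example numbers are assumptions for illustration, not taken from the text; always check the convention used on your chart.

```python
def compass_to_true(compass_deg, deviation_deg, variation_deg):
    """Convert a compass bearing to a true bearing.

    Assumed sign convention: easterly deviation/variation positive,
    westerly negative. Result is normalized to [0, 360).
    """
    return (compass_deg + deviation_deg + variation_deg) % 360.0

# Example: compass reads 100 degrees, deviation 3 deg E, variation 7 deg W
print(compass_to_true(100.0, 3.0, -7.0))  # -> 96.0
```

The same function also handles wrap-around near north, e.g. a 358 degree compass reading with 4 degrees E deviation and 1 degree E variation yields a true bearing of 3 degrees.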
Sources
In navigation manuals, magnetic deviation refers specifically to compass error caused by magnetized iron within a ship or aircraft. This iron has a mixture of permanent magnetization and an induced (temporary) magnetization that is induced by the Earth's magnetic field. Because the latter depends on the orientation of the craft relative to the Earth's field, it can be difficult to analyze and correct for it.
The deviation errors caused by magnetism in the ship's structure are minimised by precisely positioning small magnets and iron compensators close to the compass. To compensate for the induced magnetization, two magnetically soft iron spheres are placed on side arms. However, because the magnetic "signature" of every ship changes slowly with location, and with time, it is necessary to adjust the compensating magnets, periodically, to keep the deviation errors to a practical minimum. Magnetic compass adjustment and correction is one of the subjects in the examination curriculum for a shipmaster's certificate of competency.
The sources of magnetic deviation vary from compass to compass or vehicle to vehicle.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
In a magnet, what are the regions called that are the strongest?
A. plates
B. negatives
C. positives
D. poles
Answer:
|
|
sciq-4779
|
multiple_choice
|
Carbon and what are the second and third most abundant elements in your body?
|
[
"hydrogen",
"mercury",
"calcium",
"helium"
] |
A
|
Relevant Documents:
Document 0:::
Carbon is a primary component of all known life on Earth, representing approximately 45–50% of all dry biomass. Carbon compounds occur naturally in great abundance on Earth. Complex biological molecules consist of carbon atoms bonded with other elements, especially oxygen and hydrogen and frequently also nitrogen, phosphorus, and sulfur (collectively known as CHNOPS).
Because carbon atoms are lightweight and relatively small, carbon-based molecules are easy for enzymes to manipulate. It is frequently assumed in astrobiology that if life exists elsewhere in the Universe, it will also be carbon-based. Critics refer to this assumption as carbon chauvinism.
Characteristics
Carbon is capable of forming a vast number of compounds, more than any other element, with almost ten million compounds described to date, and yet that number is but a fraction of the number of theoretically possible compounds under standard conditions. The enormous diversity of carbon-containing compounds, known as organic compounds, has led to a distinction between them and compounds that do not contain carbon, known as inorganic compounds. The branch of chemistry that studies organic compounds is known as organic chemistry.
Carbon is the 15th most abundant element in the Earth's crust, and the fourth most abundant element in the universe by mass, after hydrogen, helium, and oxygen. Carbon's widespread abundance, its ability to form stable bonds with numerous other elements, and its unusual ability to form polymers at the temperatures commonly encountered on Earth enables it to serve as a common element of all known living organisms. In a 2018 study, carbon was found to compose approximately 550 billion tons of all life on Earth. It is the second most abundant element in the human body by mass (about 18.5%) after oxygen.
The most important characteristics of carbon as a basis for the chemistry of life are that each carbon atom is capable of forming up to four valence bonds with other atoms simultaneously
Document 1:::
Conceptual questions or conceptual problems in science, technology, engineering, and mathematics (STEM) education are questions that can be answered based only on the knowledge of relevant concepts, rather than performing extensive calculations. They contrast with most homework and exam problems in science and engineering that typically require plugging in numerical values into previously discussed formulas. Such "plug-and-chug" numerical problems can often be solved correctly by just matching the pattern of the problem to a previously discussed problem and changing the numerical inputs, which requires significant amounts of time to perform the calculations but does not test or deepen the understanding of how the concepts and formulas should work together. Conceptual questions, therefore, provide a good complement to conventional numerical problems because they need minimal or no calculations and instead encourage the students to engage more deeply with the underlying concepts and how they relate to formulas.
Conceptual problems are often formulated as multiple-choice questions, making them easy to use during in-class discussions, particularly when utilizing active learning, peer instruction, and audience response. An example of a conceptual question in undergraduate thermodynamics is provided below:
During adiabatic expansion of an ideal gas, its temperature
A. increases
B. decreases
C. stays the same
D. Impossible to tell / need more information
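For reference, the textbook answer to this ConcepTest is that the temperature decreases. This can be checked numerically under the assumption of a reversible adiabatic process of an ideal gas, for which T·V^(γ−1) is constant; the gas, initial state, and volumes below are illustrative assumptions.

```python
# Reversible adiabatic expansion of an ideal gas: T * V**(gamma - 1) = const.
# All numeric values are assumed for illustration.

gamma = 5.0 / 3.0        # monatomic ideal gas (assumed)
T1, V1 = 300.0, 1.0      # initial temperature (K) and volume (arbitrary units)
V2 = 2.0                 # the gas expands to twice its volume

T2 = T1 * (V1 / V2) ** (gamma - 1)
print(f"T2 = {T2:.1f} K")  # below T1: the temperature decreases on expansion
```

Note that the drop relies on the process being quasi-static (the gas does work as it expands); in a free (Joule) expansion of an ideal gas the temperature would stay constant, which is exactly the kind of distinction a conceptual question probes.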
The use of conceptual questions in physics was popularized by Eric Mazur, particularly in the form of multiple-choice tests that he called ConcepTests. In recent years, multiple websites that maintain lists of conceptual questions have been created by instructors for various disciplines. Some books on physics provide many examples of conceptual questions as well.
Multiple conceptual questions can be assembled into a concept inventory to test the working knowledge of students at the beginning of a course or to track the improvement in
Document 2:::
The Force Concept Inventory is a test measuring mastery of concepts commonly taught in a first semester of physics developed by Hestenes, Halloun, Wells, and Swackhamer (1985). It was the first such "concept inventory" and several others have been developed since for a variety of topics. The FCI was designed to assess student understanding of the Newtonian concepts of force. Hestenes (1998) found that while "nearly 80% of the [students completing introductory college physics courses] could state Newton's Third Law at the beginning of the course, FCI data showed that less than 15% of them fully understood it at the end". These results have been replicated in a number of studies involving students at a range of institutions (see sources section below), and have led to greater recognition in the physics education research community of the importance of students' "active engagement" with the materials to be mastered.
The 1995 version has 30 five-way multiple choice questions.
Example question (question 4):
Gender differences
The FCI shows a gender difference in favor of males that has been the subject of some research in regard to gender equity in education. Men score on average about 10% higher.
Document 3:::
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam with no computer-based version. ETS administered the exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required the score as part of the application. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. The biochemistry subject test contained 180 questions.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
Document 4:::
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research.
Americas
Human Biology major at Stanford University, Palo Alto (since 1970)
Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government.
Human and Social Biology (Caribbean)
Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment.
Human Biology Program at University of Toronto
The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications.
Asia
BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002)
BSc (honours) Human Biology at AIIMS (New
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
Carbon and what are the second and third most abundant elements in your body?
A. hydrogen
B. mercury
C. calcium
D. helium
Answer:
|